In early August, it was announced that UK Research and Innovation (UKRI) would provide significant funding for a new open publishing platform. Called Octopus, this initiative is not yet fully launched, but when it is it plans to “provide a new ‘primary research record’ for recording and appraising research ‘as it happens’”; UKRI calls Octopus “a ground-breaking global service which could positively disrupt research culture for the better.” I reached out to Octopus’s founder, Dr. Alexandra Freeman, to ask some questions about Octopus and its plans for the future.
What led you to found Octopus – what problem (or problems) does it exist to solve?
When I came back to academia at the end of 2016, after 18 years working in the media, I was really surprised to experience the scientific publishing system ‘from the inside’. Researchers openly talk about ‘what story they are going to tell’ with their data, publication is very slow, word limits mean that full details of work have to be put into supplementary information, hosted elsewhere or simply left out, and a lot of time is spent on tiny details (even down to fonts) of no relevance to the work itself. On top of that, publications seem to be almost the only way in which researchers’ work is being assessed — and often through very poor proxy metrics. Researchers are being put under enormous pressure to get work through the current publishing system, and yet their outputs are then evaluated by the sorts of outcomes that I was used to in the media (how many readers, how many citations, how ‘impactful,’ etc.) rather than measures of scientific quality.
It struck me that journals are trying to fulfill two roles at once in the current system. On the one hand they are disseminating important findings to their readers, particularly to professionals who may be able to implement those findings in practice. On the other hand they have the somewhat thankless task of being the primary research record, where researchers publish all their work in all detail to a minority readership. These two roles pull in opposite directions: an article that gives full details, all the twists and turns and blind alleys of the real research process, does not make a nice, easily-digested narrative for those who mainly want to know the findings and implications; an article that supplies the ‘edited highlights’ does not give the full details that other researchers need to learn from. Journal articles are best suited to the dissemination role — the “news and views” and editorialized, narrative-driven pieces. What I felt was missing from the landscape was a platform designed exclusively to be the primary research record — to serve the need for publication of information fast, and in full detail, and open to full scrutiny.
Octopus, then, sets out to be a sort of global “patent office” for science. It’s where researchers can publish their work in full detail, and instantly, claiming priority and ensuring that it is described in such a way that another researcher knows exactly what they meant and what they did. It won’t be a good read, but it will allow researchers to display the quality of their science and not their storytelling. And, importantly, it allows us to set the incentive structure on this platform to match what we want to encourage – what is useful for science.
What do you see as the most important differences between your publishing model and a more traditional journal platform?
There are some features of Octopus that are shared with other platforms, but in combination I think they are unique. I’d say some key ones are:
- Instant publication (once all authors agree), ensuring that work is shared quickly and efficiently, with no gatekeepers to the research record.
- Open post-publication peer review, with the option for authors to re-version their work at any time (old versions remaining available).
- Breaking down of the traditional ‘paper’ format into smaller publications, allowing review at every stage, smaller author groups, and faster sharing, and removing the need for a narrative, which relieves the pressure that drives questionable research practices.
- Treating peer reviews like any other publication, which should incentivize both reviewing itself and the skills that good reviewing requires. Treating constructive critique as a valued scientific skill should help foster a more collaborative approach.
- A rating system based on predefined criteria, set for each different publication type (including reviews). This allows us, as the scientific community, to define what “good” means to us in each case. Of course any metric is a poor relation to “reading and making one’s own mind up,” but we do need metrics, and in Octopus these will be designed to be as close a proxy for quality as we can make them.
- Minimizing cues that can cause unconscious bias (such as using initials rather than first names for authors on publications, no uploading of profile pictures, etc.).
- Every publication needs to be linked to an existing publication, to form branching chains. This is, I think, unique. It means that you cannot publish Data without linking that data to a published Protocol/Method; you can’t publish a Method without linking it to a published Hypothesis/Rationale, and so on. Only Reviews and Problems can be linked to any other publication type (see the sketch after this list). This is designed not only to encourage good practice in terms of ensuring that each part of the research process is explicitly defined and shared, but also to help discoverability: imagine being able to find every publication associated with a particular research question, instantly.
- Publications can also be “horizontally linked” — a publication that is in one research chain, associated with one research question, might be equally useful in another (e.g., an algorithm that was devised in the context of liquid flow in physics may be equally useful in explaining swarming behavior in biology or crowd behavior in psychology, etc.). Any researcher can add a horizontal link like this, the links can be rated by others, and the researchers gain credit for the links they have made. This can encourage cross-disciplinary thinking and again increase the collaborative approach.
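As a concrete illustration of both kinds of link, here is a minimal sketch in Python. The full chain of publication types and these validation helpers are assumptions extrapolated from the stages mentioned above, not Octopus’s actual schema:

```python
# Illustrative sketch of the linking rules described in the interview.
# Type names beyond those mentioned above are assumptions, not the real schema.

ALLOWED_PARENTS = {
    "Problem": None,                   # None = may link to any publication type
    "Hypothesis": {"Problem"},         # a.k.a. Rationale
    "Method": {"Hypothesis"},          # a.k.a. Protocol
    "Data": {"Method"},
    "Analysis": {"Data"},
    "Interpretation": {"Analysis"},
    "Application": {"Interpretation"},
    "Review": None,                    # reviews may attach to anything
}

def can_link_vertically(child_type: str, parent_type: str) -> bool:
    """True if a new publication of child_type may sit below parent_type."""
    allowed = ALLOWED_PARENTS[child_type]
    return allowed is None or parent_type in allowed

def add_horizontal_link(links: list, pub_a: str, pub_b: str, author: str) -> None:
    """Cross-chain links: any researcher may add one, and each is ratable."""
    links.append({"from": pub_a, "to": pub_b, "made_by": author, "ratings": []})

assert can_link_vertically("Data", "Method")
assert not can_link_vertically("Data", "Problem")  # can't skip the Method stage
assert can_link_vertically("Review", "Data")       # reviews attach anywhere
```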
And one feature that I don’t think will be ready for first launch, but which I’m passionate about, is making the whole platform language-agnostic. I want to reduce barriers to taking part in scientific research, and one of those that we can now just about remove is language. The best automatic translation systems are now good enough to let each person choose the language in which they read and write on the platform, while others read and write in their own languages, seamlessly. This should be just about feasible now, and is only going to improve over the coming years, so I want to get that feature working as soon as possible.
Information about copyright on the FAQ page is a bit confusing. Does Octopus require contributors to use a CC license, and if so, does it require a specific one?
Contributors can choose which license they want to apply to their work. I had imagined that the Creative Commons licenses would offer enough scope amongst them for all the possible restrictions authors might want to add to their work. If Octopus needs to offer other license types then I’m happy for that to happen — it should be the author’s choice what license they apply to their work.
Also on the FAQ page, the answer to the question “Can I publish in both Octopus and a journal?” is a bit less than clear. To clarify: will Octopus allow an author to publish, say, an article both there and in a traditional journal?
Of course people can publish elsewhere as well. I suspect in the short term Octopus will act a bit like a preprint server in that respect. Authors will hopefully want to publish quickly in Octopus to ensure they get the credit for their work as soon as possible, protecting their “priority” over it, but then write it up for a traditional journal at some point. In the longer term I think that second stage will change, so that journals will not be taking primary research papers and organizing peer review, but instead be commissioning reviews, syntheses, editorials, and short “news” articles announcing findings — so authors will instead be pitching those once they have published in Octopus.
Some people are concerned that there are more and more platforms appearing and researchers are being made to jump through more and more hoops. I feel that pain! Octopus is not trying to reinvent everything and add to work—it’s trying to help bring all these places together and to make things easier and faster for researchers. The individual author pages in Octopus will be automatically generated from information collated by ORCiD and through Octopus itself — you will not be able to add to them yourself (this is also partly to minimize those cues allowing unconscious bias, and to avoid cherry-picking of outputs). Octopus will not create restrictions on the format of publications — you decide how you want your research to look (no more font or spacing or heading requirements). There will be some metadata entry but this will be minimized (no more ‘enter it here exactly as it is in your manuscript’!). If you use existing platforms, Octopus will try to integrate and work with those so that you don’t have to repeat your work. We’re trying to make researchers’ lives easier: to find the work of others, to assess it, to build on it, and to share their own work.
Is Octopus equipped to host large data sets?
No, Octopus isn’t a data repository. There are many specialist data repositories for different fields, and authors are asked to give a DOI or other unique identifier for data that is deposited elsewhere. The same is true of specialist repositories for protocols, code, or videos—you just link out to them from Octopus.
UKRI has committed £650,000 over three years to support Octopus. What more will be needed to make Octopus sustainable in the long term?
Octopus’s aim is to minimize its costs as much as possible. UKRI has given us exactly what we asked for in terms of finances — enough to build from the current prototype to a launch platform. In the long term, I want to keep Octopus’s running costs as low as is feasible so that it can be maintained through crowdfunding, philanthropy, or a traditional funder. Failing that, the annual running costs would be minimal if spread across academic institutions — probably the cost of a single journal title each.
How do you see the work of Octopus dovetailing with that of the Open Science Framework (OSF)?
I personally use OSF, and other platforms, at the moment. Where data, protocols, analytical code, or pre-registrations are hosted on these sorts of platforms, authors will simply link out to them from Octopus. We will try to integrate as much as possible, and allow others to integrate with Octopus.
The website indicates that “anyone logged into Octopus can rate publications, review them or red flag them if they have serious concerns.” Does Octopus take steps to flag or screen out incompetent or even malicious reviewers, or is that function left to the community at large?
Octopus will not be doing any moderation of the platform. Everything that a registered user does will be visible from their personal profile page — the page that institutions and funders will look at. If someone’s publications (which include their reviews) are poorly rated or red flagged by others, this will be visible on that individual profile page. An individual’s ratings of others will also be visible — allowing persistently overly-generous or overly-negative raters to be easily identified as well. Full transparency of this kind of information will, I hope, make it relatively easy for poor behavior to be spotted.
Since it’s open source, Octopus will allow developers to create all sorts of plug-in modules to visualize the metadata created within the platform, looking at researcher behavior, etc.
The FAQ document explains that when faced with serious objections or concerns with their work, an author can “retract” by re-versioning their submission. But in the event of a dispute that isn’t resolved amicably, the author and the complainant are referred “to the authors’ Institutional Integrity Office, or their national office.” Does Octopus have dedicated staff for the purpose of managing and referring these disputes?
No, this will all be automated — and you’ve spotted one of the problems we’ll be working on over the coming months. With ‘red flags,’ when a reader raises one, the first thing that happens is that a flag will appear on the publication and other readers will be able to read the concerns expressed. These concerns will also be emailed to the authors, who will be able to respond. If the initial flagger is satisfied with a response (which could be an explanation or could involve a reversioning of the publication), they can remove the flag. If the dispute is not settled after a certain length of time, or the flagger decides to escalate it, the correspondence will automatically be forwarded to the relevant research integrity office. We are building a database of these to allow this automatic escalation, and of course dealing with it internationally is a bit of an issue! Of course if the authors have left academia, are dead, or otherwise uncontactable, then that red flag may remain indefinitely on a publication — but it is only a warning, and other readers can read the concerns and make up their own mind.
As I say, this feature is one of the things we’ll be working on, alongside research integrity offices around the world.
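A rough sketch of that red-flag lifecycle might look like the following. The states, the dispute window, and the escalation hook are assumptions for illustration; as noted, the real process is still being worked out with research integrity offices.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

DISPUTE_WINDOW = timedelta(days=90)   # assumed; the real deadline is undecided

@dataclass
class RedFlag:
    publication_id: str
    flagger: str
    concern: str                      # displayed publicly on the publication
    raised_at: datetime = field(default_factory=datetime.utcnow)
    status: str = "open"              # open -> resolved | escalated

    def resolve(self) -> None:
        """Flagger is satisfied by the authors' response or a re-versioning."""
        self.status = "resolved"

    def escalate_if_due(self, now: datetime, forward_to_rio) -> None:
        """On timeout (or at the flagger's request), forward the whole
        correspondence to the relevant research integrity office."""
        if self.status == "open" and now - self.raised_at > DISPUTE_WINDOW:
            self.status = "escalated"
            forward_to_rio(self)      # looked up from the database of RIOs
```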
Right now there is only a handful of items in Octopus, virtually all of them contributed by you. It’s early days of course, but do you have specific goals for ramping up content acquisition, or will you just let the content come organically and see what happens?
Octopus doesn’t exist at the moment — what you’re looking at (science-octopus.org) is a working prototype that demonstrates the functionality, and which we use in workshops and user testing to get feedback. We plan to launch in spring 2022.
What we’re working on at the moment, prior to launch, is “seeding” the Octopus database with existing content. Because every new publication needs to be linked to an existing one, we need to have a good framework of existing publications for researchers to attach their new work to. We’re taking Jisc’s CORE database of Open Access publications and using natural language processing to extract ‘Problems’ (research questions), and hierarchically linking those to each other. This is an interesting project in itself, but it means that when we launch we will hopefully have a good number of existing Problems, and those will link out to existing old-style papers in CORE. Researchers won’t be able to add new “papers” in Octopus, but they will be able to add reviews and ratings of those original “seed” papers. That means that from day one of Octopus it will hopefully already be a very valuable place to search and find publications, and read the ratings and reviews of others. From then on, I hope it will snowball — people will probably initially want to publish work that they wouldn’t have other outlets for (hypotheses without data, small data sets, negative results, etc.) as well as reviews, but also I think that they will want to ensure that they establish priority on their work. I can foresee a bit of a “gold rush” of that!
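A greedy sketch of that seeding step is below. The extraction and similarity functions stand in for the actual natural-language-processing pipeline, which isn’t specified in the interview; the threshold and record fields are likewise assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class ProblemNode:
    text: str                          # the extracted research question
    source_doi: str                    # links back to the original CORE record
    parent: Optional["ProblemNode"] = None

def seed_problems(core_records,
                  extract_problem: Callable,   # NLP step: record -> str or None
                  similarity: Callable,        # (str, str) -> float in [0, 1]
                  threshold: float = 0.75):    # assumed cut-off
    """Attach each new Problem under its most similar existing Problem,
    or start a new top-level branch if nothing is similar enough."""
    problems: list[ProblemNode] = []
    for record in core_records:
        text = extract_problem(record)
        if not text:
            continue
        node = ProblemNode(text=text, source_doi=record["doi"])
        scored = [(similarity(p.text, text), p) for p in problems]
        if scored:
            best_score, best = max(scored, key=lambda pair: pair[0])
            if best_score >= threshold:
                node.parent = best
        problems.append(node)
    return problems
```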
By making Octopus easy to use for researchers, and useful for institutions and funders looking for metrics or well-reviewed protocols to fund, I hope that there will be a virtuous circle. People always say culture change is slow and hard, but I think by making something that makes everyone’s lives easier and having an eye to the incentives, it can actually happen very quickly indeed.
Discussion
This will definitely fail. It’s trying to do the job of preprints, data repositories, protocols.io, and even Crossref (linking between research objects) with 650k in funding and no integration into career incentive structures?
OK. Good luck with that.
Octopus is designed to integrate with many of the systems you mentioned: to link together with data repositories, protocols.io, Crossref and many others. It is a sort of replacement for preprint servers, admittedly – but with a built-in facility for reviewing.
The funding is exactly what we asked for and need for the next step. The technology needed for the platform itself is not difficult, or expensive. It’s the culture change that’s more difficult – and that doesn’t need money so much as thought, time and talking. It’s one of the many reasons I’m so glad to have the support of UKRI, UKRN, Jisc, and other partners: they don’t just bring resources, they open doors to have those important conversations about changing cultures and processes.
Octopus aims to provide metrics that are closer to an assessment of ‘good quality scientific work’. If it does, then why wouldn’t institutions use it for their promotions and hiring processes?
If it can also provide a free, fast, useful and easy-to-use way for researchers to share their work then that should create a virtuous circle.
Culture change can be difficult, definitely, but the first step is to provide the alternative, and to ensure that it’s as good and useful as it can be. So that’s what I’m trying to do with Octopus.
“Octopus will not be doing any moderation of the platform”.
Nothing more needs to be said… No moderation, no QC; life is too short to waste time sorting through mountains of garbage to try to find some nuggets of useful information.
An unanswered question, reflecting the title … by what measures will it be judged whether this is better?
I’m not speaking for Alexandra here, just expressing my own view: it seems to me that while many who support innovations and platforms like this one see them as tools for dismantling (or at least undermining) traditional economic models of journal publishing, a platform like this really represents a radically free-market approach. Funding aside — let’s assume for the moment that it gets secure, reliable ongoing financial support — Octopus will stand or fall on its ability to attract participants. It will be judged “better” or “worse” by (forgive me here) the Invisible Hand of the Marketplace. If it solves problems for authors, they’ll use it and it will thrive; if it doesn’t, they won’t and it will either evolve or wither away. (There is, of course, a command-economy scenario that could come into play, too: participants in the scholcomm ecosystem who have power over authors could decide to exert that power to force authors to use it. But I don’t think that’s very likely in this case.)
Rick: “Does Octopus have dedicated staff for the purpose of managing and referring [publication] disputes?”
Alexandra: “No, this will all be automated”
To me, this seems even less regulated than Facebook. Disappointing, unless this is what people mean when they talk about “disruption.”
Managing a dispute over a publication needs to be dealt with by the relevant research integrity office – whether that’s at an author’s institution, their national office, or another location such as a funder: people who have the authority to impose relevant sanctions. Octopus, however, can create a clear pipeline for getting concerns to these authorities, and make those concerns publicly readable alongside the publication.
I’m aware that Research Integrity Offices are not at all comprehensive worldwide, and that does raise issues – but those issues stand whether or not Octopus exists. My hope and expectation is that there will soon be a relevant RIO for every author – even if it is one at a national level. That seems to be the way things are moving.
So the “automated” and self-policing parts of monitoring and addressing scientific misconduct are shockingly concerning in multiple ways. Either the product developers didn’t bother to learn what happens with ethics complaints in real life, or they think the research community is made up of magical beings that always behave in ways that are not self-serving.
I wondered about this when I made my account on the demo site: partly the investigation process … but also, primed by cybersecurity training I took recently, I started thinking about how a bad actor could use this as a vehicle for harassment. Imagine a researcher coming back from their summer holiday to find that their work has been flooded with red flags. Accusations posted automatically without moderation, and then the accused has to convince the accuser to remove the flags?
I’m not under any illusions about the need to consider those with bad intentions. I’m also aware of some of the ways in which misconduct investigations are conducted – and they are immensely variable.
As I said in the interview, this is a section of the platform that we’re still actively working on, and we will be talking to a lot of RIOs. At the moment there is not a consistent and robust pipeline and process for raising concerns about any kind of misconduct or serious issue in a publication. We’re hoping to help feed into one – but Octopus can only ever hope to be a small part of that: a part, I think, whereby at least readers of a publication can be alerted to potential concerns about it and read those concerns alongside the publication. There needs to be a robust and staffed process by which such concerns are then investigated by the relevant authorities. As I said, those may be an institutional RIO, a national one, or a funder-related one, etc. – just as it is at the moment. All Octopus is doing is helping create a smooth way to raise a concern and for others to be aware that a concern has been raised. Yes, we need to consider at what point a red flag is raised, who can lower it, etc., and that is the sort of thing we will be working on. Very happy to continue to have conversations about this aspect!
The very low barrier to entry and publication, coupled with a laissez-faire community model of oversight, makes me wonder whether the only groups that will find a home with Octopus will be disenfranchised scientists, lobbyists, conspiracists, and sundry pseudoscience groups (anti-vaxxers, alien abductionists, climate change denialists, etc.) who need a community home they can promote as “legitimate science.”
Yes, it is a legitimate concern: to work, Octopus needs a critical mass of usage and community in order to have functional self-regulation. If there are enough people using it who are reading, rating, and reviewing as well as publishing, then poorly (or not at all) rated and reviewed content should sink to the bottom. If not, then of course there is not enough sorting going on.
The low barrier to entry can be abused. I think it’s up to us as a scientific community to decide that we want to make this work, because we value low barriers to entry for researchers, and that we will therefore take part in helping ensure that the good actors outweigh the bad, and can help spot them.
I wouldn’t have believed such a system could work myself until I had seen it in action in other places online.
Alex, can you explain what you mean by “sink to the bottom” in this context? Will poorly-rated content get sorted in some way that would affect, for example, the ordering of search results?
Yes – you will be able to filter your searches by ratings (maybe you want to filter out all those that have not been rated or reviewed, or all those with a rating below a particular threshold), and similarly with links to related publications – you will (I hope!) be able to order those by rating should you wish.
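For instance, such a filter might behave like this sketch (the field names and API are illustrative only, not Octopus’s actual search interface):

```python
# Illustrative only: filter and order search hits by rating, as described above.
def filter_and_rank(results, min_rating=None, rated_only=False):
    hits = []
    for pub in results:
        rating = pub.get("rating")           # None if not yet rated or reviewed
        if rated_only and rating is None:
            continue
        if min_rating is not None and (rating is None or rating < min_rating):
            continue
        hits.append(pub)
    # Related publications can likewise be ordered by rating, highest first.
    return sorted(hits, key=lambda p: p.get("rating") or 0, reverse=True)
```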
New business models should proceed. Of concern is the lack of monitoring and ethics oversight. We are currently on a slippery slope with Machine Learning (ML) algorithms and their bots, and beyond ML, true Artificial Intelligence (AI) is positioning itself. Creating a set of bylaws that are reviewed on a regular basis and enforced should make for better business as both good and bad actors enter the field. What governing body will enforce these rules in and around the geowalls? There are many things to consider, so I wish you luck with your next venture.
It doesn’t sound like a problem to me as long as it’s going to help researchers to publish. It’s very interesting; I can’t wait for it to launch.
Most of the comments to date focus on moderation, monitoring, research integrity and so on. I’d like to raise something very different, and that is the disciplinary focus: whether the approach outlined here would resonate and make sense for the humanities and, indeed, perhaps much of the social sciences. I was struck by Alex Freeman’s comment about the typical piece they’ll be publishing: ‘It won’t be a good read, but it will allow researchers to display the quality of their science and not their storytelling.’ It is not in any way a criticism of the disciplines being targeted by Octopus when I say that the quality and shaping of the research publication (the storytelling, if you like) is a fundamental part of how most humanities research develops, how ideas and analysis are shaped, as well as how it is communicated. They’re not separated in the linear fashion implied by Alex’s comment, but intimately bound together within the research. It is why articles are longer, why books are so important, and above all why, for humanities scholars, writing is part of the research process, not just the communication of results. I cannot judge whether Octopus, as currently conceived, will work well for STEM disciplines, but it seems to me that it is unlikely to work for the humanities.