In a talk I gave at Frankfurt in fall 2015, I led publishing executives on a painstaking tour of my poor experience using the digital services provided by scholarly publishers and academic libraries. My goal was to provide a wake-up call that would establish the need to collaborate in a more strategic and user-centered fashion. Now, five years later, I want to review some areas where publishers continue to fall short.

There is no question that major publishers are talking more about users and indeed have made some real progress. Judy Verses expressed the idea several years ago in urging publishers to treat researchers as their “North Star.” Kumsal Bayazit has spoken of the need to reduce the many frictions that researchers face in their work. The STM Association’s most recent Top Tech Trends forecast recommends an increasing focus on the user. This is good progress to be sure. Many startups have the luxury of a blank slate, allowing them to build an architecture that truly centers on the researcher, but for incumbent organizations taking action is no small thing. Given platform fragmentation, efforts to implement the standards necessary to improve user experience are a significant enough undertaking. But truly centering on the researcher requires far more profound change, not just at the level of user experience but in terms of rethinking existing businesses and organizational models.

Winslow Homer, “Shooting the Rapids, Saguenay River,” 1905–10, The Metropolitan Museum of Art.

Access and Discovery 

At the most basic level, the researcher wishes to discover and access all scholarly content that might be relevant to their research. 

This statement reads like a bland truism at first, but let’s be clear about the implications. No user ever wants to end up on a publisher-specific content delivery website. Instead, they want immediate access to everything, all at once, ideally through a single interface. Let that sink in for a moment: The very existence of publisher-specific websites is misaligned with researcher needs. Most of the efforts to address user experience in recent years have sought to help the researcher get to a publisher-specific website, ameliorating this fundamentally maladapted architecture, rather than fix the underlying problem.

While my focus here is on the publisher, I would be remiss not to point out that a parallel set of failures exists for libraries, which never provide immediate access to all the content their researchers might seek. It has been clear for nearly a decade that the vast majority of discovery no longer happens through systems and interfaces provided by the academic library. To be sure, efforts to re-architect the integrated library system into a platform that can provide the kind of discovery that researchers require have resulted in improvements, bringing a greater portion of e-resources into the discovery index. Even so, institutionally focused approaches have stifled innovation. Instead, the vast majority of users have chosen to utilize globally seamless alternatives such as Google Scholar, notwithstanding their own imperfections.

It would be no small matter to re-architect discovery and content delivery systems away from individual publishers and libraries and instead around the actual needs of their users. And to be sure, this is not just a technical problem. After all, since the vast majority of scholarly communication costs are paid on an institutional basis, and to individual publishers, there would seem to be little incentive to re-architect systems to address individualized user needs. As a consequence of this mismatch, incumbents have faced far too little direct incentive to modernize their approaches, and innovation comes as external disruption from those entities building business models that are not institutionally based.

Manuscript Submission and Review

Another area that profoundly lacks user-centricity is manuscript submission and review. While the systems involved in these processes could surely afford to be modernized in a variety of ways, the limitations are not so much in the user experience as in the basic assumptions of how these systems are structured and utilized.

An author would like to submit all articles through a single system, tracking their status using a single dashboard, regardless of the journal to which they are submitting or which publisher happens to own it. And that author would like to have a single set of requirements for submissions, for example with a single reference format and a single set of data obligations for all submissions they might make. The author might even like to have something like a single “common application” for their submission, covering an array of journals in their field across various publishers, through which they might like to cascade their article through editorial review should it not be accepted by their first choice. 

A reviewer would like a single dashboard where they can see all requests and outstanding obligations for peer review, manage the timelines collectively, and act on them through a single interface. Ideally, this dashboard would allow a reviewer to delegate a draft of the review to a graduate student, postdoc, or other proxy directly, rather than the workaround methods we know so many adopt. 

While not all authors or reviewers have multiple manuscripts active in their workflow at any one time, active researchers and reviewers face a real management challenge, one that the most productive among them surely find overwhelming. For authors in an open access environment, this fragmented approach imposes unnecessary costs on publishers’ “best customers.” And for all participants, the critical issue is the need for dozens of logins, one for each journal, which is confusing because of misplaced passwords and annoying because of data reentry. Managing multiple logins and tracking activity across multiple journal workflows is an enormous waste of time and cognitive effort for researchers.

To be fair, manuscript submission and management systems are designed with publishers as the target customers and are geared to their strategic and operating priorities, which include data ownership, peer reviewer management, and the complex web of journal ownership. And a common application model stands in direct opposition to publisher-managed cascades. Still, let us be clear: the end result is an author and review experience that can in no way be framed as “user centric.” 

Looking Ahead

Notwithstanding the platform investments that they have made in recent years, incumbent leaders in scientific publishing have yet to provide anything close to a user-centric architecture. 

I fully recognize that solutions to some of the challenges I have emphasized are misaligned with existing organizational incentives and therefore do not make near term financial sense. To put it bluntly, some incumbents may find a more deeply user-centric approach incompatible with elements of their current business. Others may find it incompatible with the way that responsibilities are structured within their organizations. These are challenging problems and solving them may not always generate a financial return, even in the long run. 

But I hope that some will read this piece differently: as a roadmap for several key avenues of disruption to legacy assumptions. In my view, this all is an indication of just how much opportunity space there may yet be in the sector. If incumbents cannot address some of these most yawning gaps, they may find others prepared to do so for them. 

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.


45 Thoughts on "Publishers Still Don’t Prioritize Researchers"

I will put on my Joe Esposito hat to respond here — if these things were valuable to researchers and their institutions, they’d be willing to pay for them and the market would provide them. So far, it has failed to do so, which perhaps raises the question of whether they really matter enough to invest in.

The common application process will not work for journal submissions. A toxicology journal has very different needs from authors than does a history journal. Submitting to a journal with an open data requirement or one that performs open peer review is going to be very different from one that makes lesser asks from authors. It is often said that the reason that submission systems are so bad is that it is fairly easy to build a good, bespoke system for one journal, but when you try to add in a second journal, because each journal has its own idiosyncratic workflow, things start to get messy. Add a third journal and you have the hairballs that are the submission systems used by most publishers. Elsevier learned this lesson when they spent years building eVise, and then had to buy Aries to make up for it. PLOS learned this lesson as well, in trying to build a submission system:

Hi David, It’s clearly been a real challenge to build robust scaled up submissions and editorial management systems. Maybe this is because so much energy has been devoted to the idiosyncrasies of individual editors and societies? I have seen many cases where divergent processes that were just fine in a print and paper era (even if they added little or no value) are enormous impediments to developing sensible digital workflows and associated platforms. In these cases, the challenges are not technical but rather an inability or unwillingness to fix outdated processes and/or align them to drive efficiencies. As I noted in the piece, “some incumbents may find a more deeply user-centric approach incompatible with elements of their current business.”

Yes, as someone who builds these systems I can confirm these are the biggest problems. Some are driven by editorial resistance to change. But many are important differences in the scholarly community. And some are driven by the ongoing evolution of the landscape (continuous publishing, non-traditional formats, preprint integrations, etc). There really is not such a thing as a common publishing process as much as I wish there was! :_(

It’s a good question — I think that many of those editorial “idiosyncrasies” are what differentiate one journal from another, and create brand identity. It’s why an author would submit to Development rather than Developmental Biology for example. And brand identity still resonates with researchers who want to get their papers out and seen by their communities.

I agree that submissions should be standardized as much as possible and many journals already allow for bare minimum formatting at first submission. However, what you are proposing is one system to rule them all— for submissions and delivery platforms. How is creating a monopoly in anyone’s best interest? Even a federal response (which if we have learned anything in the last four years is a terrible idea) would only be for one country and federal funding. One platform is absolutely not the right idea.

Standardizing content, putting content where people need it, and building out access validation across content homes is the solution and we are getting close with Seamless Access and GetFTR.

I’m definitely not proposing one system to rule them all. I’m pointing out that the current jumble is not user-centric. There are definitely an array of models to address that dynamic without creating a single monopoly in either case.

I know this piece is directed at the large publishing incumbents, but I’d like to point out that this is an area where plain-old, just-freely-available open access has a huge competitive advantage. The bewildering login and institutional access flows of publisher sites make it extraordinarily difficult to find and access pay-walled material outside of a university-provided computer. And when big incumbents acquire open distribution platforms, they often hide download links behind dark patterns intended to drive signups so that open platforms can be used to build proprietary user datasets (eg – SSRN).

On the vast majority of the 10,000+ journals run on Open Journal Systems, you’ll find a prominent download link without all of the clutter and distraction of incumbent publisher sites (disclosure: I work for PKP). The same is true on other OA platforms like PLOS, eLife or Janeway, because publishers that are committed to the principles of OA (not just the opportunity to exploit an APC model for profits) have fewer incentives to capture and exploit user activity data.

This doesn’t address all of the UX problems that arise, because at the end of the day publishers and editors — not authors and readers — are still our “customers”. It is difficult to fund work on UX improvements that target researchers. But when research is openly available under the principles and aims of the open web, the problems related to distribution and discovery become much easier to address, because the largest barrier to centralized discovery services right now is the unavailability of published work. In a world of open access, Google, Bing and DuckDuckGo are more than sufficient to quickly access cited work (law journals, which are mostly open access and have well-designed sites, are good examples of this). Material that is indexed through services like DOAJ and Crossref can be a single hop or less from discovery services (for example, Open Journal Systems publish a meta tag that allows PDFs to be downloaded directly from the Google Scholar UI, without an extra jump to the article landing page on the publisher’s site).
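To make the meta-tag mechanism concrete: Google Scholar’s inclusion guidelines rely on Highwire Press-style “citation_*” meta tags, and “citation_pdf_url” is what lets a crawler link straight to the PDF. Here is a minimal Python sketch of how a discovery service might read those tags from a landing page; the page fragment and URLs below are placeholders, not any real journal.

```python
from html.parser import HTMLParser

class CitationTagParser(HTMLParser):
    """Collect Highwire Press-style <meta name="citation_*"> tags
    from a journal article landing page."""
    def __init__(self):
        super().__init__()
        self.tags = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = attrs.get("name", "")
        if name.startswith("citation_"):
            self.tags[name] = attrs.get("content", "")

# A landing-page fragment like those emitted by journal platforms
# (placeholder content for illustration):
page = """
<html><head>
  <meta name="citation_title" content="An Example Article">
  <meta name="citation_pdf_url" content="https://example.org/article/1/galley/1.pdf">
</head><body>...</body></html>
"""

parser = CitationTagParser()
parser.feed(page)
print(parser.tags["citation_pdf_url"])
```

With tags like these on every landing page, a crawler never needs publisher-specific scraping logic, which is the point the comment above is making.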

On the submission and review side, there are some exciting projects underway to integrate preprint communities into the process. eLife is asking that their submissions have been reviewed as preprints, which would streamline some of the delays in moving submissions from journal to journal after lengthy review processes. With our newer platform, Open Preprint Systems, we’re looking to build integrations that permit one-click submissions of preprints to journals. These benefits are available to non-OA platforms, but because the OA movement is predisposed to open data formats, the technology is ready-built to bring platforms together in these sorts of ways (for example, OPS provides data about posted preprints in machine-readable formats like OAI and a REST API; OJS already supports journal-to-journal transfers).

> And that author would like to have a single set of requirements for submissions, for example with a single reference format and a single set of data obligations for all submissions they might make. The author might even like to have something like a single “common application” for their submission, covering an array of journals in their field across various publishers, through which they might like to cascade their article through editorial review should it not be accepted by their first choice.

This is where there are the most efficiencies to be gained in the entire scholarly publishing chain. Unfortunately, it is also the area that will be most resistant to change because it is not a problem that can be solved through technology development. That is because the form in which researchers write (Word) and the form in which publishers are expected to distribute (JATS) are fundamentally incompatible.

For decades the publishing community has poured money into machine learning tools to automatically convert from Word documents into JATS, without sufficient success. That’s because these formats serve fundamentally different purposes. Word (.docx) is a document format that is intended to prepare physical specimens (pages) to be read by humans without visual impairments. It therefore provides authors with all of the tools necessary to lay out and style their work. JATS is a content description format intended to describe information for machines. Despite their best efforts, the diversity of languages, styles and norms in the scholarly publishing community means that Word documents produced by authors can never be reliably converted to JATS.

That doesn’t mean that conversion tools are useless: they can still reduce the typesetting labour required to create JATS by 80-90%. But they are insufficient to solve the problem of “write once, submit anywhere”.
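The structural gap described above can be seen in miniature. A Word manuscript carries a reference only as a styled string, while JATS encodes the same reference field by field for machines. A small Python sketch, using an invented citation and a deliberately minimal JATS fragment (real JATS records carry far more detail):

```python
import xml.etree.ElementTree as ET

# In a Word manuscript, a reference is just styled text:
word_reference = "Smith, J. (2019). On peer review. J. Ex. Stud., 12(3), 45-67."

# JATS describes the same reference as structured data that software
# can consume field by field:
jats_fragment = """
<element-citation publication-type="journal">
  <person-group person-group-type="author">
    <name><surname>Smith</surname><given-names>J.</given-names></name>
  </person-group>
  <year>2019</year>
  <article-title>On peer review</article-title>
  <source>J. Ex. Stud.</source>
  <volume>12</volume>
</element-citation>
"""

root = ET.fromstring(jats_fragment)
surname = root.find(".//surname").text  # unambiguous author surname
year = root.find("year").text           # unambiguous publication year
print(surname, year)
```

Going from the second form to the first is trivial; going from the first to the second requires inferring structure that the author never recorded, which is why the conversion tools top out where they do.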

In my view, there are only two ways to solve this problem. First, authors could be forced to adopt a new tool specifically for writing scholarly work, such as Fidus Writer. This is a social problem of such enormous scale that it is probably impossible. But widespread rules by publishers which enforced such formats for submissions could, in theory, move the needle in certain communities. Second, publishers (and indexers) could abandon the dream of machine-readable, full-text published items and make do with the information that can be scraped from a publication’s landing page and galleys (PDFs, ePubs, etc). This would be vociferously fought, but I believe it is possible (even advantageous) to build extraordinary discovery tools based on the open web. We would lose detailed specifications of enumerated metadata, such as keywords and subjects. But in my view, these are not fit for purpose in the global, multilingual and incredibly diverse scholarly publishing infrastructure. The biggest real loss from abandoning the dream of JATS would be rich citation metadata, but this could be restored during the submission or typesetting phases without too much manual effort.

I don’t suspect that either of these options (abandoning Word or JATS) will ever happen. But a burdened infrastructure maintainer can always dream!

I have had trouble, at different times, getting academics to use track changes or adopt a citation manager (some people just want to style 100 references by hand).

Software that costs extra or is only available on Linux is an automatic no-go. And all of that to work in JATS, a format most academics probably don’t even know exists.

I have long thought there is something fundamentally questionable about the amount of effort that we spend laboriously taking a document that was written in one electronic format (usually Word) and then converting it into a second (some species of XML) largely for the purpose of simply further converting it into HTML and/or PDF. I don’t think it’s a coincidence that these were technologies with a lot of currency at the time the industry first digitised around the turn of the millennium, and I find it difficult to believe it is the answer anyone would arrive at if they started today. Unfortunately it’s now become embedded as custom and practice to a scholarly world which places a very high value on established cultural norms.

> However, what you are proposing is one system to rule them all

A single dashboard is a challenge. But federated systems are possible. These are “federal” in the U.S. sense of the term (ie – national), but federated in the sense that they are run as distinct technical systems that can communicate with each other. (This is, for example, how the internet runs.)

If a submission were made automatically from one place (ie – an author using Fidus Writer), it would not be difficult for the submission and review system to communicate back to that place a change in the status of that submission. In this way, the depositing system (the author) can be kept up-to-date about changes from that system.

The challenge is again socio-political: who is the “depositing system”? In order to receive updates, they need to have a server that can receive changes, so a fully local system isn’t possible. But I think that this could be surmounted in large part by asking universities to be the depositing system. They already do the hard work of maintaining identities for the scholars in their system, and ORCID provides a mechanism for sharing that identity throughout the duration of a career.

For scholars not attached to an institution, such a federated system can degrade nicely. The scholar loses access to the automated submission and tracking service, but can still make manual submissions.
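As an illustration of that notification flow, here is a hypothetical Python sketch: a journal system sends a JSON status message keyed by a persistent submission identifier, and the depositing system merges it into the author’s local dashboard. The field names and message shape here are invented for illustration, not any real protocol.

```python
import json

def apply_status_update(dashboard, message):
    """Merge a journal system's status notification into the author's
    dashboard, keyed by a persistent submission identifier."""
    update = json.loads(message)
    sid = update["submission_id"]
    entry = dashboard.setdefault(sid, {"history": []})
    entry["status"] = update["status"]
    entry["journal"] = update["journal"]
    entry["history"].append((update["timestamp"], update["status"]))
    return dashboard

# One notification arriving from a (hypothetical) journal system:
dashboard = {}
apply_status_update(dashboard, json.dumps({
    "submission_id": "sub-0001",
    "journal": "Journal of Examples",
    "status": "under_review",
    "timestamp": "2020-11-01T12:00:00Z",
}))
print(dashboard["sub-0001"]["status"])
```

The hard part is not this merge logic but the socio-political question in the comment above: which system hosts the dashboard and receives the callbacks.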

Federated systems are not really harder to build than integrated systems. But we software developers have less experience building them because the web that was built from 2000-2020 was defined by Silicon Valley investments aimed at capturing user activity rather than by federated applications intended to empower users.

While I agree that we need a new paradigm in terms of delivery, and with all due respect, this reads as if it was clearly written by someone working on the library side rather than on the publisher and editorial side.

Yes, it would be a wonderful world if we all had one portal to submit papers into and provide peer-review through and one outlet from which the researchers could obtain relevant publications. But many publishers, even the big ones that we all love to suggest make too much money, are struggling to survive in a world in which a small group of funders have decided to set the rules of publication for everyone. Add in that this is not, in fact, a bunch of people singing songs around a camp fire, but rather competitors in an open market, and the inevitable response is “you first”.

And for that the solution is simple. Get all the libraries to pool all of their funding to buy open subscriptions to all of the journals and then put them on a single platform that anyone can use for free. Once you’ve proven that this can work without killing off the libraries, maybe the publishers will look at building a joint submission system.

Or, alternatively, hire some coders, sit down with some people who actually work on the peer-review and production teams, and build that perfect, single entry, universal submission and peer-review system that will meet everyone’s needs, with single-sign-on across all journals, cascading submissions, and uniform file and formatting requirements, and then license it cheap, so we can afford it. It truly sounds wonderful. But I honestly don’t have the time to envision it or to design it or to work the bugs out of it. I’ve got journals to put out. Today, tomorrow, next week, next month, next year. Because it never stops.

Someone has to make the first move.

I’m not sure researchers’ own institutions “prioritize researchers” either. For example, the way OA budgets for RAP/PAR or transformational arrangements are managed in individual institutions can be quite idiosyncratic, even between the different faculties.

Indeed. In my paragraph about the limitations of the library’s approach I tried to get at some relevant dynamics.

I would like to comment specifically on your Access and Discovery paragraph. At Researcher we take all the hard work out of keeping up to date with the latest research articles. We put discovery in researchers’ pockets; we aggregate over 17,000 content sources, including journals, company research, preprints and conference proceedings. Nearly 2 million researchers use the app and they are generating over 15,000,000 impressions per month. You can see how it works here.

Roger – I remember the impact of your talk in Frankfurt! Can you speculate further about this: “critical issue is the need for dozens of logins, one for each journal, which is confusing because of misplaced passwords and annoying because of data reentry”. Given that almost all submission and peer review systems provide the option of federated single log-in and registration using ORCID authentication, why is this still an issue? Is it because individual journals have not activated the functionality or is it because researchers don’t want to use ORCID?

Richard Wynne – Rescognito

I don’t have a good sense of how many authors and reviewers are logging in through ORCID auth. Having a separate registration/account not just for each publisher but also for each journal is an issue in any case. There is no place for an author to see all their submitted papers across journals, and there is no place for a reviewer to see all their outstanding review requests.

Given that a single sign-on solution is available, why would researchers choose not to use it?

The separate but related issue of aggregated views of submitted papers and reviewer assignments is better solved through PIDs and APIs than by developing one big system. As I understand it, platforms such as Research Square Preprints and Wiley’s Under Review already use peer review system APIs to display peer review data in their interfaces.
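To illustrate the PID-and-API approach, here is a hypothetical Python sketch of a reviewer dashboard assembled from per-journal APIs keyed by an ORCID iD. The fetcher functions and record fields are stand-ins, not real publisher endpoints, and the iD used is ORCID’s documented example value.

```python
def aggregate_review_requests(orcid, fetchers):
    """Query each journal system for a reviewer's outstanding review
    requests and merge them into one list, sorted by due date."""
    requests = []
    for fetch in fetchers:
        requests.extend(fetch(orcid))
    return sorted(requests, key=lambda r: r["due"])

# Stand-ins for two journal systems' APIs:
def journal_a(orcid):
    return [{"journal": "Journal A", "manuscript": "A-17", "due": "2020-12-01"}]

def journal_b(orcid):
    return [{"journal": "Journal B", "manuscript": "B-03", "due": "2020-11-15"}]

for req in aggregate_review_requests("0000-0002-1825-0097", [journal_a, journal_b]):
    print(req["journal"], req["due"])
```

Nothing here requires a single central system: each journal keeps its own workflow, and the dashboard only needs a shared identifier and a common response shape.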

It’s single sign on to an unlimited number of SEPARATE accounts. Not single sign on to a single account/dashboard. Does that difference make sense? I’m not suggesting that researchers don’t use it, just saying that it solves at best one part of the problem.

I absolutely agree that there are other approaches, such as APIs, that will be better solutions than “one big system.”

I’d caution against interchangeable use of researcher-centric and user-centric. Researchers may be users but not all users are researchers. It seems to me in many cases the systems are not researcher-centric because they are user-centered … but for a different group of users.

I think this is a hugely important idea Lisa, thanks for raising. Is a submission/peer review system built for authors or is it for the managing editor handling the papers or is it for the peer reviewers or for the editorial board members or for the editor in chief or for the production staff who will have to handle its outputs or for the bibliometric analysts who want to check journal performance (and I’m probably missing a lot of others as well)?

I’ve done a lot of society journal management over the last decade, and much of my time has been spent trying to find the right balances between the society’s vision of the journal, what the editorial office wants/needs, and what the publisher who maintains all the systems used requires. Usually the society/editors are looking for significant levels of customization and uniqueness, while the publisher is hoping for standardization, as running a large platform is a lot easier if all participants are identical. Given these competing agendas, the end results are likely going to be something of a compromise, and not perfect for any one stakeholder, but largely “good enough” for all.

Yes, it is as if I have my user hat on until I finally find what I might be interested in. And then, if that process doesn’t drive me nuts and I can remember what I was after, I can switch to researcher mode.

I’ve used the term user-centric as a shorthand for “end-user-centric.” It’s possible that some of the systems I mentioned are designed around the needs of those other user communities that David mentions. It’s also instead possible that some systems are not really designed around the needs of any particular user — more like a hodgepodge of requirements from various parties. Either way, these systems are not designed around the needs of the researcher end-user!

Who — or what institutions — could speak on behalf of researchers? As system developers, we get it in the teeth when a publishing system doesn’t meet the expectations of authors, editors, publishers, societies, or funders. These interests drive all of the new demands (and funding) for feature development: ORCID, CRediT, ROR, JATS, etc.
Interestingly, legislation on accessibility and privacy has been one of the few examples where “end-user-centric” incentives exist, but they only achieve this by relocating incentives (in the form of liabilities) onto owners. Things like open access mandates and other funding restrictions, however flawed or ineffective, should be seen in the same light.
But researchers, as far as I’m aware, don’t have any collective voice that could generate incentives to invest in their interests. I really wish they did, but they don’t…

I think this is a very important observation. It’s why I keep writing and speaking about these issues.

Nate, correct: there is no Researcher Union, no collective voice. Yet ultimately, without us there is not much besides tenure and promotion to drive the need for any of these systems, digital or physical.

Federated discovery is already there if one pays for it – Web of Science, Scopus, Dimensions. And free with fewer search options (no Boolean or truncation) through Google Scholar. Of course, each of them has different research articles in its collection.

What are you looking at that is different than these solutions?

Access? (Also, those sources definitely don’t cover books or book chapters very well!)

Agree that discovery isn’t access. But he was talking about discovery tools.

Most of the tools have links to the full text. After one discovers that an item exists, then a user has to get down to the complexity of open access, institutional access, embargoes, and pay for access, which is a different issue than discovery, and needs to be solved to make researchers lives easier too.


Discovery services are also available for free – Google Scholar for example. All these tools are excellent for some purposes but may be limited for others, for example in terms of the scope of the index against which they search, as Lisa mentions with books.

The links to the full text that you mention have been an enormous continuing problem in many cases, which services like Seamless Access and GetFTR are attempting to address.

It may be that over time we develop a fantastic set of workarounds to the fundamental underlying problem, which is that a plethora of publisher-specific content delivery platforms is misaligned with the research process.

Thanks Roger for highlighting these challenges in the research ecosystem! It resonates to a great extent with this article, where we at ChronosHub outline our approach with the same underlying recommendation of being much more researcher-centric.

It’s indeed a really tricky challenge, as it requires collaboration between a lot of parties. A publisher could make their own systems better, but they still need to communicate with the universities’ systems and funders’ systems; otherwise the researcher would still need to manually report back to funders and manually enter data in the university’s institutional repository. To be researcher-centric, we as an industry would need to take a holistic perspective and collaborate across the different stakeholder groups (e.g. for the whole author journey), and I guess that’s what you’re implying through your article, not just that each publisher and library need to make their own systems more researcher-centric?

Our approach as intermediary serves to facilitate this communication and automate the flow of information between publishers, funders and institutions by helping each stakeholder to get their own needs met, while keeping the overall focus on unburdening the researcher. Institutions get access to all their data in one place for their reporting, auto-populating their repository and getting their APCs paid, across all(!) publishers. Publishers get a streamlined billing & collection across all institutions & funders AND better services to their customers (researchers, institutions and funders). Funders get their reporting, ensuring compliance with their policies, and their APCs processed AND helping their grantees stay compliant and not worry about payments or reporting.

The aspect that you describe in your article, about one place for submission to all journals, is another ambition that ChronosHub tries to enable by making a Journal Finder (beta) and submission portal available publicly and completely free of charge. It offers integrations, for instance, with Editorial Manager, ScholarOne, eJournalPress and a number of custom-built submission systems (e.g. Frontiers, AAAS, etc.), and thereby achieves some of the desired author convenience you outline. However, it does not yet cover all active journals. Even though more and more publishers now activate the integration and contribute to keeping their journal information and institutional agreements up-to-date in ChronosHub, it’s for many a slow process, and it would be most interesting to hear your and others’ views on how this kind of process could be further accelerated. 🙂

As there are many “journal finders” out there (JCT, Journal Masterlist, Sherpa, DOAJ, institution-specific, publisher-specific, etc.), it would be great to hear from this audience what suggestions there may be for further ways to collaborate and to work more jointly on data collection and sharing.

On the difficulties of providing effective access and discovery systems, I’ll add that libraries have a longstanding belief that it is part of our information literacy mission to teach users to use library discovery systems, even if those systems are flawed or idiosyncratic. Despite all of our progress in building one-stop federated search interfaces, we still tend to be too accepting of systems that simply don’t work well – or rather, accepting of our inability to change them – because we think that researchers *should* learn these systems as part of becoming information literate. This has always been true to some degree, even back when researchers relied on paper-based tools; to become a serious researcher one learned to decode the mysteries of paper indexes or reference books. We feel a sense of moral failure if students dump library tools in favor of Google Scholar the minute they leave the library instruction session (I’ve even seen students switch to Google Scholar in the middle of a session, while I was teaching them about the library’s federated search tool), as though we’ve failed to prepare them to be serious researchers. That value system has waned, but I think it’s still there to some degree in our thinking.

I really appreciate your walking through the value system that has informed how libraries tend to engage on these issues. I agree with you that it has been a source of limitation. Engaging user needs and practices realistically in a competitive marketplace for information services increasingly informed by consumer experiences is no small shift in thinking!

For myself, I wish the resources devoted to teaching students how to use systems that are maladapted to their purposes were instead being put into helping students “search for truth amid a sea of lies,” as Lisa and I recently wrote…

Roger, while reading your Access and Discovery section, I kept thinking: isn’t this what domain-specific indexing services did for me way back in the olden days of paper, or what Dialog would have helped me with?

Yes, but as I said above, instead of Dialog we have Scopus, Web of Science, and Dimensions. And like Dialog, they don’t provide access to full text.

All well and good if you are affiliated with an institution. Not all scholars are.

Cecilia, I definitely think you are on to something. If domain-specific indexing services had become domain-specific content discovery/delivery/access services, with all content from all publishers in that field, that would have gone a long way toward meeting researcher needs! Since such a service would have become a digital hub for all kinds of services in a given field, it would have been an excellent place for a “common application” for journals, among other things. Scholarly societies really should have been leaders in pursuing this vision, but I’m not aware of many that have tried to do so.

“An author would like to have a single set of requirements for submissions, for example with a single reference format”

Now there’s room for a valuable baby step. Do publishers and societies really get any sort of brand identity from their idiosyncratic reference styles? Thousands of slight variations that all serve the same function, but provide endless annoyance to researchers and editorial offices alike. EndNote ships with 2,800+ reference styles, and most that I’ve used still required some corrections or further customization. It seems like about four styles could suffice: author-year; numbered; and the numbered footnote or endnote styles favored in many law and humanities journals.
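The point can be made concrete with a minimal sketch: once a reference is stored as structured data, rendering it in any of the handful of genuinely distinct style families is a few lines of templating, so the thousands of near-identical publisher variants add formatting burden without adding information. All field names and templates below are illustrative assumptions, not any real style language such as CSL.

```python
# One reference as structured data; the fields are hypothetical, chosen for
# illustration only (using the artwork named in the post as sample data).
REF = {
    "authors": ["Homer, Winslow"],
    "year": 1905,
    "title": "Shooting the Rapids, Saguenay River",
    "source": "The Metropolitan Museum of Art",
}

def author_year(ref):
    # Author-year family (Harvard/APA-like ordering).
    return f"{'; '.join(ref['authors'])} ({ref['year']}). {ref['title']}. {ref['source']}."

def numbered(ref, n):
    # Numbered family (Vancouver-like ordering); footnote/endnote styles
    # would reuse the same rendering with a different anchor in the text.
    return f"[{n}] {'; '.join(ref['authors'])}. {ref['title']}. {ref['source']}; {ref['year']}."

print(author_year(REF))
print(numbered(REF, 1))
```

The same record drives every output, which is the commenter’s underlying argument: the variation worth preserving lives in a few structural families, not in 2,800 near-duplicate style sheets.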

This is an excellent example of where process and policy change is needed before technology can be used to greatest effect. Dealing with 2,800+ reference styles is a perfect illustration of the kinds of requirements that drive systems development, diverting resources that could instead go toward building systems that actually benefit researchers.

David Crotty’s response to Roger Schonfeld’s excellent post was disappointing. To the charge that publishers don’t provide, with their publisher-specific websites, what researchers need, his answer was simply “if these things were valuable to researchers and their institutions, they’d be willing to pay for them and the market would provide them.” Compare the situation with electric chargers for cars: currently there are at least four types of charger and four types of connector. Is this what customers want? I don’t think so. But the provision of electric chargers is a “free” market, meaning that providers are free to offer whatever non-compatible or proprietary device suits their business goals. So Tesla provides chargers for Tesla users, and so on. I’m sure customers would be willing to pay for a universal charger and connector, if one were available.

If you are “sure the customers would be willing to pay for a universal charger and connector, if one was available,” you should start that business right now. Don’t complain about what others don’t do. Do it yourself. Make lots of money, make the world a better place.

Hi Michael — I agree that my response was a bit curt. I copied it over from a longer email exchange the author and I had while he was developing this piece. Let me expand.

As noted, I was channeling one of our regular contributors, Joe Esposito, who has weighed in above. This is a common response from Joe when someone claims how things “should” be or what products “should” exist — that’s a great idea, go build it and if you’re right, the market will reward you.

In the cases discussed in this post, though, a universal discovery/access point for research papers and a simplified/unified article submission system, the issue hasn’t been that they don’t exist. It’s more that where they’ve been tried they haven’t succeeded, and/or that they’re really hard to build and run effectively.

For discovery/access, there are a ton out there. So far, none seem to have caught on particularly well with users. Further, the services that do seem to have a lot of use, namely Google Scholar, ResearchGate, and Sci-Hub, don’t seem to have sustainable business models. The first (as far as I can tell) is a side project at Google, not meant to pay for itself. The second continues to churn through investor money without a recognizable business model, and at this point exists to be purchased by someone else. The third is a criminal site, alleged to be supported by nefarious sources. There doesn’t seem to be enough desire for this sort of service among end users to warrant paying for it, which I think explains why there isn’t a huge level of investment in it. Where companies and libraries have built their own systems for their patrons, they sit largely unused, so what does that suggest to you?

For submission systems, there is also no shortage on the market. I think the desire for simplified systems is there, but as I noted in my original comment, I’m not sure it’s achievable (at least, no one has cracked that nut yet). And again, I don’t see many end users willing to pay for such a system. At best, it would be purchased by publishers and the costs incorporated into subscription prices or APCs, so the actual users of the system would not be paying for it. And that makes things more difficult, as businesses tend to hone their offerings toward the people who are actually giving them money.
