An emerging duopoly in an entirely new class of products, those that support the research workflow for the sciences, could marginalize other publishers large and small and push the Big Two further ahead of their increasingly distant rivals. Elsevier and Digital Science are in a race to build out a complete set of these research workflow tools, crafting portfolios that can be integrated in a variety of ways to suit their owners’ strategic interests and that position them far earlier in the research process than traditional publishers. This leaves publishers not in the workflow game at risk of being left behind.

If you are a publisher without a complete set of workflow offerings, the marketplace is shifting under your feet. Even for a large scientific publisher — Wiley, ACS, IEEE — the competitive dynamics are changing in real time. Competition is no longer just a matter of working against the greater scale that the largest publishers can bring to bear — and that is hard enough, given the dramatic economies of scale in digital publishing. No. This competitive environment is qualitatively different.

Publishers not in the workflow game may find themselves victims of this entirely different category of products.
George Stubbs, A Lion Attacking a Horse, Yale Center for British Art, Paul Mellon Collection.

Workflow market dynamics

Elsevier and Digital Science are building their portfolios through majority investments, outright acquisitions, and internal development. This does not necessarily mean that they are pursuing identical strategies.

As I have written previously, a publisher seeking to bolster its existing operations with workflow tools that reach back earlier in the research process, well before publication, has particular strategic interests and inclinations. For example, in the article processing charge (APC)-driven open access environment that has emerged, the supply of articles, and therefore the ability to influence authors and/or their institutions, is of increasing strategic value. The ability to reach back earlier in the research workflow has clear value to Elsevier in sourcing articles and building author relationships. (There may be other aspects to its business strategy as well, given that it describes itself as an “information analytics” company, and these may evolve depending on the marketplace and its strategic needs.) The need to ensure a steady source of high-quality article content in a shifting open access environment helps, on its own, to explain why Elsevier has been willing to pay the purchase prices necessary to acquire key components of its workflow.

But Elsevier is not the only party playing at this business. Digital Science is a curious case: some observers may see reasons to view it as strategically indistinguishable from Springer Nature, in which case it might over time be expected to pursue a strategy similar to Elsevier’s. But since Digital Science itself denies that there will be any merger into Springer Nature, we must also allow for the possibility that it will continue to try to build an independent business.

Here I am not trying to forecast the future but rather to propose planning parameters for other stakeholders. It is not unreasonable to suggest that other publishers should examine their prospects in the context of a kind of duopoly: Elsevier and a merged Springer Nature / Digital Science. Each already has, or may soon emerge with, a major presence in both traditional publishing and gold open access, each with strong “apex” journals in its fields and a “cascade” beneath them to catch a full sweep of publications, and each with a strong array of research workflow tools.

It is also possible that Digital Science will continue in its attempt to build a standalone strategy, albeit with a very different path towards monetization. In that case, there would be two major scientific research workflow providers, only one of which is tied directly to a publisher’s interests.

Of course, in either scenario, they will face competition from others, such as the Center for Open Science and Clarivate. But given the amount of consolidation we have seen among workflow providers over the past several years, it is incumbent upon other publishers to examine what this consolidation means for them.

Losing access to authors

Let me offer an analogy. Think back to Google’s strong position in search when the desktop browser was the dominant interface to the internet. Yes, there were the so-called “browser wars,” with different companies seeking quantitative advantage (measured by the number of users), but these were in retrospect border skirmishes at most. Then along came the iPhone — and the competitive environment for search and advertising was at once qualitatively different. In this environment, the connection between individual and information became orders of magnitude more intimate. As the foremost consumer advertising platform, Google feared losing direct access to individuals and saw this as a mortal threat.

The dynamics faced by publishers are in some ways analogous. Workflow tools position their platform owners far earlier in the research process than traditional publishers, with attendant risks to article supply and author relationships.

With the SSRN and Digital Commons preprint services that Elsevier has been purchasing, there is ample potential for connections with article submission and review. Here, we have seen some responses from smaller publishers, such as AGU in partnership with Wiley and a group of chemistry societies in partnership with Digital Science’s figshare. There are major advantages for those that can gain control over this stage of the workflow. Preprints are definitely an earlier stage in the dissemination workflow than final publication, but they are comparatively late in the research workflow.

Earlier workflow tools are at the heart of the scientific research, funding, and research management processes. Workflow tools help researchers and their universities seek funding, provide assistance with data gathering and analysis, support laboratory management and collaboration, and so forth. These workflow tools are positioned far closer to researchers and authors than are journals or even preprints.

And this lays out at least one set of stakes for other publishers rather clearly. The prospect of losing direct access to researchers — and perhaps for many publishers even more pointedly, authors — is not trivial.

By pointing this out, I do not mean to suggest that any workflow providers intend to cut off content sourcing to publishers. I simply mean to suggest that it is a risk over time that others should bear in mind. This is the new competitive dynamic you face if you are a publisher not in the workflow game. And it is at this point that, regardless of what Digital Science may wish for itself, I am obligated to remind readers that Springer Nature’s outgoing CEO has very much wanted to recombine with Digital Science, perhaps exactly for this reason.

To stay competitive in the advertising business through the transition to mobile, Google had to ensure it retained direct reach to consumers. Its solution was to enter the smartphone market, acquiring and developing Android as a counterweight to the iPhone. While the iPhone remains significant in the US, Android has been an enormous success in its own right across all major global markets. More important from a strategic standpoint, Android has provided competition to the iPhone that prevents Apple from locking down its own platform quite as much as it might otherwise have been able to do.

While the iPhone and Android may offer an appealing analogy, we cannot develop strategy by analogy. In my next post, I will examine some of the strategic options for those publishers, large and small, that are at risk of being left behind in this emerging environment.

This piece is adapted from a talk I gave this fall to IEEE and its Advisory Library Council. I am grateful to Michael Spada for inviting me to speak. I thank Angela Cochran, David Crotty, Joseph Esposito, Lisa Hinchliffe, and Bill Park for discussions and suggestions that helped me develop and improve my line of reasoning here.

Roger C. Schonfeld

Roger C. Schonfeld is the vice president of organizational strategy for ITHAKA and of Ithaka S+R’s libraries, scholarly communication, and museums program. Roger leads a team of subject matter and methodological experts and analysts who conduct research and provide advisory services to drive evidence-based innovation and leadership among libraries, publishers, and museums to foster research, learning, and preservation. He serves as a Board Member for the Center for Research Libraries. Previously, Roger was a research associate at The Andrew W. Mellon Foundation.

Discussion

Great post. Another aspect of these workflow systems, and another advantage to leveraging them, is having data and metadata earlier, prior to publication (or “publication”). Knowing what’s coming, who is working on what, and what new data exist are other potential advantages of moving these businesses upstream in the workflow.

It’s not just about acquiring papers early, but also about acquiring the metadata and data of those papers early. With machine learning, AI, and other techniques in the offing, this may be a second-order strategy, with APCs and workflow tools paving the way to that pivot.

Thanks Kent. I absolutely agree that there may be more to this strategy for a publisher than simply sourcing article manuscripts, especially with Elsevier trumpeting itself as an “information analytics” company. We will see just how substantial this pivot will prove to be, and how others are able to respond.

This is a very interesting analysis (we have come to expect it from RCS), but I wonder whether it is not missing the bigger picture. And in much the same way, Elsevier and Springer may be missing the bigger picture by trying to reinforce or build out their inherited, but under threat, control of ‘publication workflow’. The key battles may already have moved along, and traditional journal-focussed workflows are perhaps already a backwater. Data science, gene editing and machine learning are already much more disruptive and fruitful areas for science and research budgets. We do not know what is coming down the track, but we may expect to see traditional workflows comprehensively disrupted by the needs of these technologies, which will summon and channel new modes of validation and more open forms of data availability. The chance that Elsevier or Digital Science have picked the right horses in this multi-dimensional race seems to me quite small.

Thanks for your generous comment, Adam. I absolutely allow for the fact that there is more to this strategy than simply article sourcing – I referred to it briefly in mentioning Elsevier’s stated role as an “information analytics” company. They may get there in time, or perhaps, as you suggest, others will beat them to it. During this transitional period, they have a substantial ongoing business to defend and develop. As we know from other sectors, sometimes this is a major advantage, but more often than not it is an impediment.

Interesting article. I especially agree with the comment about “the prospect of losing direct access to researchers […]” as not being trivial. With the analogy of Google entering the smartphone market, do you imply that all publishers should get into the researcher workflow market? Or that smaller publishers should collaborate in new ways to create a counterweight to the Big Two?

If we look at other domains, like film and music, the original producer is also very rarely in a direct relationship with the end consumer. Spotify, Netflix, Amazon, and the like increasingly take on that direct relationship. There is, however, still a need for excellent producers. If we look at publishers, they have more obvious direct contact with authors during the pre-submission and submission workflow, but not necessarily with researchers as readers. Do you see that as a problem publishers should try to tackle by investing in workflow tools? Or do publishers perhaps need to accept this as a fact and critically review their own role in the research information ecosystem?

It would be interesting to see you elaborate on this and on the analogy of film and music, but maybe it’s already planned for your follow-up article?

Thanks for these observations and questions. In the case of film and music, creators are a far smaller group than consumers. In the case of academic content, the creators and consumers overlap to a far greater degree, especially in the sciences. So I do think there are some real differences.

Without giving away tomorrow’s post, I think it is fair to observe that the question a record label would ask itself, if it could write its strategy again before Napster arrived on the scene, is this: How can a record label (with exclusive access to a subset of producers) transform itself into iTunes and then Spotify (with something closer to exclusive access to consumers)?

The large overlap of creators and consumers … but also gatekeepers (editors and reviewers) and, perhaps particularly vexing for publishers, also the violators (e.g. Sci-Hub and ResearchGate). All in all, quite different than Napster, et al.

I am sure there exists a blog post about this already, but in the event there is not, or the entry is quite old at this point, I wish one of the chefs would write an entry about these business and technology developments, mergers, and trends from the point of view of a small-to-mid-sized society publisher, one that continues to self-publish and has to partner with multiple vendors for peer review, production, Web hosting, and post-publication business analysis. Societies such as the American Society of Hematology and those aligned with FASEB walk a fine line between relying on and bolstering their long-held reputations as publishers and creating strategic partnerships that can result in broader reach at the expense of internal downsizing and sole branding of their content. Aside from continuing to pay strategy consultants or selling off product to a larger publisher, how is such a society to read the tea leaves and retain its identity in this marketplace?

Much gets written here about Springer Nature, Elsevier, and ACS. But what about the smaller players that remain independent? They can’t build portfolios of the same volume or variety, nor can they engage in corporate-style product development without the financial resources for R&D and implementation. There are many for-profit firms chomping at the bit to get a piece of our nonprofit societies’ resources, and of course we follow what we can in order to stay in step and not to drown. I would like to know what a chef with a view from this vantage point can share with us, something beyond the presentations made at SSP or at large publishers’ meetings.

An interesting analysis. However, I must correct one point here: Digital Science is not a publisher. We do not publish content, we do not have authors. So when you write that focusing on workflow is “positioning them far earlier in the research process than traditional publishers”, the “them” only applies to Elsevier.
As Daniel Hook recently wrote in response to your post on “Who Owns Digital Science”, we are separate from Springer Nature and there are no plans to change that. So not only should people “allow for the possibility that [Digital Science] will continue to try to build an independent business”, they should be aware that this is indeed what we are doing (and not just trying :). Further, to assume that our business is guided by maximising the value of a single publisher’s content is to fundamentally misunderstand our long-term strategy. When Springer and Nature announced they were merging in 2015 and Digital Science was carved out of the Macmillan structure, we quickly notified our publisher customers and content partners to reassure them we would remain independent from the merged entity. This is critical because one key aspect of our strategy is working with our 200+ publisher customers to help them make their content accessible to researchers further upstream in the research workflow. One can’t effectively do that if one is tied to the content, or revenue stream, of any one publisher.

Otherwise a very insightful and timely piece.

Thank you. I was very careful to set out Digital Science’s position on this matter in my piece, and also the reasons why publishers might wish to take that into account, but only up to a point, in their own planning.

Very often the nuance gets dropped from discussion as the message filters out, so let me just make one thing real clear – there’s no lock-in with Elsevier services.

If you’re using Mendeley to write your paper or Mendeley Data to store your data, you’ll be able to submit it anywhere. Plum Analytics includes the best indicators of research attention & impact, regardless of what platform those indicators are generated on. Interoperability is core to how Elsevier wants to operate & support researchers in this ecosystem. Here’s an example of how we incorporate the FAIR principles in our work: https://www.elsevier.com/about/press-releases/science-and-technology/elsevier-and-seven-bridges-receive-nih-data-commons-grant-for-biomedical-data-analysis

Certainly not lock-in in the sense that one is literally unable to escape. I don’t doubt the intention to make a scientist *able* to submit a manuscript anywhere. But that certainly doesn’t mean it won’t be a more seamless experience to stay in the ecosystem, or that there won’t be preferential terms or benefits one can receive by staying inside it.

One of the issues that I’m seeing with a lot of the services currently being offered to publishers is the quantity of sensitive data that must be handed over to the service provider, which turns their offering into something like a Trojan Horse. In order to get service X, you have to give provider Y access to all of your authors, submissions, decisions, etc. — basically the core data around which your business runs. The provider then gets to use that data for its own purposes, either to fuel its data-provider business ambitions or, even worse, if it’s a competitor, to feed its own strategy. That’s the sort of lock-in I worry more about: a service that you absolutely have to use in order to stay alive, but by using it, you’re giving your competitor the information it needs to put you out of business.
