Christine Arron (left) wins the 100 m at the Weltklasse meeting. Image via Beat Küng.

It’s three years now since I posted “The Inexorable Path of the Professional Society Publisher” and I thought it was time to revisit it. What’s always troubled me about that post is that it takes the view of the underdog — that is, of the small or mid-size professional society publisher — which is struggling to remain competitive in an environment in which administrative costs explode, the budgets of customers are flat and sometimes declining, and libraries continue to invite consolidation among vendors in order to reduce administrative costs. While few journals truly lose out entirely in this environment (they more likely get absorbed into other companies in some fashion), the fact is that some journal publishers win bigger than others. Let’s tell the tale of the winners this time, those publishers that remain independent and even grow, some magnificently.

Parenthetically, talking about winners is so unfashionable in scholarly communications today that it is something of a struggle to find the vocabulary for it. Everyone is crying in their beer. Researchers can’t get access, librarians are out of money, publishers are struggling to find new markets, and the whole world is going to hell in a handbasket. Meanwhile, the world gets better every day, and the research community, including the libraries and publishers that support it, is in part to thank for it. I wonder if I am the only person who will read this Kitchen post who knows people who are alive today because of medical technology that did not exist 20 years ago.

But the winners! What about them? Aside from the small publishers that cleverly and effectively occupy a niche (very common in humanities publishing, and, to spare anyone the need to make this comment, even for monographs!), the winners in journal publishing fall into three broad strategic categories.

The Leviathan Model. We should begin with the Leviathans because they are the other side of the troubled world of the small society publishers. There are basically four of these: Elsevier, Springer Nature, Wiley, and Taylor & Francis. Two university presses (Oxford and Cambridge) play this game in a smaller pond — call theirs the Dolphin Strategy — but they play it well, trumpeting their not-for-profit status, as they should. Wolters Kluwer plays the game with a deep vertical focus, and Sage strives to join the Big 4 as a veritable #5. Beyond that, the remaining publishers, even the larger ones, work with a different strategy. The Leviathan strategy is that of the aggregator, a provider of so much scholarly material that libraries have to pay serious attention to it. This is the key winning strategy in the industry today. Leviathans try to roll up as much content as possible, the better to stare down increasingly tough-minded consortia buyers and to squeeze out the smaller publishers, which are denied access to library budgets. There is a very important point to be made about the Leviathan model: only a very few publishers can play this game. I am myself impressed by the management discipline at Sage, OUP, and Cambridge, which has enabled them to expand their offerings when even a decade ago there was some reason to believe that the marketplace was going to pass them by.

The Community Model. For all the sturm und drang incited by the Leviathans, the fact is that successful journal publishing does not stop there. There is a large contingent of professional societies (by some estimates, as many as 50) that remain independent and deliver impressive results. The obvious names to invoke here are ACS and IEEE, but there are others, and we should expect their representatives to let us know about them in the comments to this post. A community is defined not only by whom you include but also by whom you exclude, and the community-oriented publishers are careful not to try to please everybody. The core strategy of the community model is to anchor the publishing program in the society membership itself. Members submit articles to the society’s journals, review articles for the journals, and may serve as editors. They also read the journals, either through direct subscription or through the access provided by their employers. Such members often prove to be valuable in bringing the society’s journals to the attention of the appropriate member of the library staff.

Here it should be noted that the community publisher is wholly different from a publisher whose journal has no community supporting it. Indeed, one of the curious things about scholarly publishing today is the enormous growth in content that is not anchored in a scholarly community. The open access movement is in part responsible for this, as it reduces the complex set of professional and social arrangements surrounding scholarship to the single issue of access, but commercial publishers have also been instrumental in stripping away the community dimension from the published article. The Community model, however, views content (articles, books) as one aspect of a suite of professional relationships and communications. Content that is so anchored is very difficult to displace, as authors who, say, decide to submit an article to a publication outside their society are abandoning not only the journal but their colleagues.

The Prestige Model. There is a very small number of publications (a very, very small number) whose prestige in their fields is so great that few, if any, prospective customers can fail to purchase them. It is a club with a tiny membership, with Nature and Science sitting atop the heap. But there is a handful of others, and about them it is safe to say: they know who they are, and so do we. Publication in one of these journals is often believed to guarantee a successful career. Librarians subscribe to these publications because they want to and because they have to.

One characteristic of the Prestige publishers is the sheer number of submitted articles, which means that there is a very low acceptance rate. This in turn leads to cascading models, which can be highly efficient economically. Publishers in this category may suffer from an endemic problem: with so much prestige, it becomes hard to innovate. A Prestige publisher may be strangled by its own reputation.

Beyond these three categories there are, of course, many successful journals, and we are now seeing the possibility of other models becoming more significant. Most obviously we have the megajournals pioneered by PLOS ONE, though that publication appears to be resting right now; but we also have a great deal of activity among smaller societies and with new entrants such as library publishing. There are more routes to success than simply to sign a deal with one of the Big 4.

What should be clear, though, is that the successful publishers that fall into these three categories are in a league of their own. It is difficult to imagine them being seriously challenged, and it is difficult to imagine scholarly communications without them. For an upstart to crack this league would require a new approach to the marketplace and truly compelling leadership.

Joseph Esposito

Joe Esposito is a management consultant for the publishing and digital services industries. Joe focuses on organizational strategy and new business development. He is active in both the for-profit and not-for-profit areas.



21 Thoughts on "Winning Strategies for Journal Publishers"

There is another interesting dimension to this. I regard Science as a news magazine and a science translator, not as a journal. The few journal articles it publishes are of little interest to me because they are seldom in my fields. How do other so-called journals handle this mix? Perhaps we are not just talking about journals.

  • David Wojick
  • Mar 14, 2016, 10:36 AM

Completely agree with you! Science and Nature are magazines, not scientific journals properly speaking. Their strategies are built on media buzz and on historical positions established when there were not so many journals around, much more than on real scientific contributions.
The problem is not in journals or publishers but in researchers and authors themselves, who give more importance to the packaging than to the real product (the content), which is, by the way, an unscientific approach.

  • Mike D
  • Mar 14, 2016, 3:51 PM

I never said anything of the sort, so we do not agree, completely or even partially. I am merely making an observation about the nature of the content. Your response is what I call off the wall, a strange but apt metaphor.

  • David Wojick
  • Mar 14, 2016, 5:29 PM

I never understood, and will NEVER understand, the fixation on ‘prestige’ in the scientific field.
Can anyone tell me why and how publishing a paper here rather than there will guarantee a more successful career than publishing the same paper in another journal?
One can publish a ‘bad’ paper in the most ‘prestigious’ journal, but this won’t make it a good paper; conversely, one can publish an excellent paper in a small or unknown journal, and it will remain excellent, as time will prove.
By the way, what does ‘prestige’ even mean? Is prestige scientifically valid? Is there really any difference between the pages of journal X and journal Y other than the ideas expressed in the paper?
Isn’t it the content of a paper that should matter, not the journal name?
I am very surprised, even shocked, to see how some so-called ‘scientists’ attach more importance to the name of the journal than to the content!
It is as if the packaging of a product could change its real value!
It does NOT change anything, in reality.
I sometimes wonder about the reliability of ‘scientists’ who believe that having a paper in Nature or Science is the ‘ticket’ to success!
I sometimes open the pages of Science and Nature and do not find articles of exceptional value or importance; some seem to be there just to fill the pages!
When I see such an obsession with publishing in ‘prestigious’ journals, I understand why the scientific field is getting sick.
Prestige does not make any valid sense in an objective field such as science.

  • Mike D
  • Mar 14, 2016, 3:32 PM

You are placing blame in the wrong place. Scientists are acting rationally when they target their articles to “prestigious” journals, because they are working in a system that values publication in those journals for career advancement and funding. Authors are doing what they have been told by funding agencies and administrators (both of whom are often fellow scientists): that this is what they must do to have a successful career.

If you don’t like it, then you need to direct your wrath at funders and those making career advancement and hiring decisions, not those who are subject to them.

  • David Crotty
  • Mar 14, 2016, 4:21 PM

So your argument is: if the system is defective and ‘rotten’, scientists should take part in it rather than reform and improve it?!
The system you describe is incredibly flawed, whoever the players are: researchers, administrators, authors, or anyone else.
Scientists should denounce the flaws and defects of their system and try to improve it, not amplify its side effects.
Sorry, but your argument does not make much sense. That the system is flawed does not make it valid, and should not justify distorted practices; otherwise, what is the role of scientists?
If scientists are not able to make their own system fairer and more objective, what do they do, then? Just manipulate a bunch of seeds or cells, or play with machines?
What a disgraceful mission!
Can you tell me, from a rigorous scientific viewpoint, how an article in Nature or Science should be more valuable than one in another journal that follows the same peer-review process, and whose reviewers could at the same time be reviewers for Nature or Science?
Take another example: you have milk, bread, an apple, or any other product; will the inherent value of your product differ according to the shop that sells it?
It is exactly the same for papers. It is the content, not the packaging or the journal name, that makes a paper worthwhile. Think about it from a scientific viewpoint, not through biased media buzz.

  • Mike D
  • Mar 14, 2016, 5:29 PM

The problem is that this is not a rigorous scientific question, thus a rigorous scientific viewpoint is irrelevant. It is a social question, the moral equivalent of carrying a Beatles lunch box in my day. Because it is silly does not make it less pertinent to human affairs.

  • Joseph Esposito
  • Mar 14, 2016, 5:32 PM

So, if it is silly, should we accept it and go along with it?
Many things in life are misused in human affairs; should they be accepted morally and socially?
Quote: “Because it is silly does not make it less pertinent to human affairs.”
You seem to be justifying silly practices because they serve the interests of some people.

  • Mike D
  • Mar 14, 2016, 5:59 PM

It’s called culture.

  • Joseph Esposito
  • Mar 14, 2016, 6:06 PM

Mike, I made no such argument. I only pointed out that you are blaming the wrong person, the victim of the system rather than those responsible for putting the system into place. Please read more carefully.

  • David Crotty
  • Mar 14, 2016, 5:34 PM

I am blaming nobody but the biased system, and wondering about the validity of the criteria used in a field supposed to be objective and devoid of any bias.
If Nature and Science are ‘prestigious’, what does prestigious mean? On what basis are they prestigious? Isn’t it the impact factor? And is the impact factor a reliable measure?
In this Kitchen we read many interesting posts that show the many defects of the impact factor. So how can we criticize the impact factor on one hand, but on the other use it to rank journals and say this one is prestigious and the others are not?
Science (knowledge) is different from culture. Something accepted culturally is not thereby valid scientifically.
Many people smoke; does that make smoking a ‘healthy’ culture or behavior?

  • Mike D
  • Mar 14, 2016, 6:20 PM

If my store rejects 90% of the apples offered while your store rejects just 10%, then my apples will be better. Moreover, if the apples are first brought to my store, then to yours, they will be even better.

  • David Wojick
  • Mar 14, 2016, 5:39 PM

Publications are not commodities like apples, as you know. I will not argue this point further. Have a nice day.

  • Joseph Esposito
  • Mar 14, 2016, 6:19 PM

I like your apple analogy but I think it is faulty:
The apples in your store will only be better if you have somebody who knows how to pick good apples. Individual apples in your store might in fact be mediocre or even bad, and worse than many of the apples in Mike’s store.

That is one of the major problems of scientific publishing today, and of how universities and research institutions award (the best) jobs: the assumption that the quality of each individual article in a journal (or of a monograph with a certain publisher) correlates with the overall quality of the journal (which leads to the impact factor and such nonsense), when in fact each article needs to be assessed on its own. However, I am aware of the practical problems; looking at each article in your field is simply too time-consuming, so researchers might ignore what is in fact the better apple/article because it is offered/written in the wrong store/journal.

  • Martin Hermann
  • Mar 21, 2016, 4:56 AM

The apple analogy presents a simple mathematical decision model. As such it includes many assumptions, equal skill in selection being one of them, as you correctly point out. It would be interesting, maybe even useful, to build a more refined model of the ranking system that the journals collectively provide. Especially since it is argued by the publishers that this ranking function is one of the system’s highest values.

On the skill issue, note that it may well be that the higher ranked journals can attract, or select, more skillful reviewers, hence have more skill. Given a good math model we could play with that concept to see where it leads.

As for reading the papers, which is itself a common argument, my impression (which may be incorrect) is that promotion and tenure decisions are typically made by committees whose members may not be experts in the field of the people being assessed. That is, people in different departments in effect compete for tenure, perhaps even across the entire institution. If so then reading the papers is simply not an option, so indicators like the journal rank of publications are required. This too should be in the model.

The point is that there is probably nothing nonsensical about using the IF in institutional decision making. But a good decision model might test that question.

  • David Wojick
  • Mar 21, 2016, 7:55 AM

I see no blame to be placed. Decisions must be made, so ranking is necessary.

  • David Wojick
  • Mar 14, 2016, 5:33 PM

I am one of your friends, Joe, who would not be alive today but for the advance of medical science. I carry two stents in my heart installed in 2005. Without that technology, I would be dead.

  • Sandy Thatcher
  • Mar 14, 2016, 8:51 PM

Good article, but the Leviathan and Community models are not mutually exclusive. Ten years ago, my small tight-knit association — now in its 52nd year — placed our struggling independent journal with a Whale. The Whale brought us efficient publication and far greater exposure internationally, resulting in a higher Impact Factor and expanded readership. Our association’s journal and technical program remain tightly integrated, with our conferences leading to many fine articles. I served eight years as editor-in-chief under this arrangement, received much good advice and support from our Whale, and never felt any loss of editorial control. I suppose I can’t speak for all Whales, but I know the hybrid Community/Leviathan model can work.

  • Ken Lanfear
  • Mar 15, 2016, 9:03 AM
