“Around the World in 80 Days”, 1873 edition. 80 days? Pshaw, let’s do it in 30 minutes.

Editor’s Note: I was recently asked to present a “news roundup” at this year’s US International Society of Managing and Technical Editors (ISMTE) meeting. Rather than just reporting the news, I took it upon myself to do more of a “state of the union” address, essentially distilling the last few years of The Scholarly Kitchen (or, at least, what I’ve learned from it over that time) into a half-hour talk. And like my fellow bloggers (here and here most recently), I like to get as much mileage as possible out of any effort, and so have converted the transcript of my talk into a blog post. I apologize in advance for the length — it was a 30-minute talk, so this runs a bit longer than our usual posts; feel free to abandon it when you start to nod off. Slides are at the end of the text if you wish to follow along.

As an industry, we are inundated by threats and opportunities, both real and imagined. There are hordes of startup companies, desperate for high-value online content, trying to intermediate themselves into our processes or cut us out altogether, and a seemingly infinite number of advocates, each fighting for their own particular cause. Because we all have limited time and limited funds to put toward any one effort, we need to do an enormous amount of filtering to figure out where to put our attention.

Traditionally, publishers are pessimists. “The sky is falling” seems to be the theme of an awful lot of publishing meetings, and Chicken Little (Henny Penny if you’re British) is regularly the keynote speaker. There’s a very old joke about how the first book off of the Gutenberg printing press was the Bible and the second book was about the decline of the publishing industry.

Despite this pessimism, scholarly publishing continues to thrive, to grow and to experiment with new ways of communicating. Scholarly publishing has moved into the digital age more gracefully and more successfully than nearly any other media. Compare where we are with the current state of newspapers or magazines. They’re struggling for their very survival while we’re expanding and experimenting with new business models, new technologies and new ways to better fulfill our mission. More people now have more access to more research than ever before.

I’m not sure whether all the publisher pessimism is central to that success, but we do devote an enormous amount of energy toward an enormous number of ideas and questions. A colleague recently suggested that publishing was subject to the famous Chinese curse, “may you live in interesting times” (a saying which, I’m told, is not of Chinese origin and can be traced back to British diplomats of the early 20th century). We do indeed live in interesting times, with never a dull moment.

There is, however, a gap that emerges between the issues that publishers spend years arguing about at meetings or online and the practical, real-world technologies and practices that emerge. That gap sometimes comes from losing sight of the needs and desires of the researchers who make up our authors and readers. Publishers can be so inwardly focused that they drift away from the things the research community really cares about. We tend to listen to the loudest, and often the angriest, voices from the research community, and these tend to be edge cases that do not represent the needs or desires of the mainstream.

At many publishing meetings, there is a session called a “researcher panel” where a bunch of scientists or social scientists or humanities researchers get up on stage and publishers get to ask them what they think about publishing and all the cool new things publishers are doing.

The results are remarkably consistent. We ask the researchers, what do you think about this incredibly important issue that we’ve been agonizing over, having meetings to discuss, and arguing about on the internet for years? Their answer is almost always, “Huh, never heard of it. What is that?” Or perhaps most telling, “Why would I want to do that?”

Separating out what really matters is important. With that in mind, I want to give a quick rundown of the things that I think are likely to emerge as important and useful, the ideas to keep floating somewhere in the back of your mind.

Let’s start at a high level with the industry as a whole. We are in the midst of an era of consolidation. We continue to see mergers and acquisitions, and the biggest publishers continue to get bigger. You’ll note that most now have names that are conglomerations of their former entities: Springer Nature, Wiley-Blackwell, etc.

Scale is a driver of nearly everything in the journal publishing market. It is increasingly difficult to remain an independent or small publishing house. Big companies can do nearly everything cheaper, and often better, than small companies. A small publisher can likely afford one or two marketing experts, while a big corporation can offer a global team of hundreds of marketers.

Journals are no longer sold as individual journals to individuals, and less and less to individual institutions, but instead are sold as collections to consortia. “The Big Deal”, or buying all of a publisher’s content as a package, is the dominant economic force at play in the market. And of course, the bigger a package one can offer, the more weight it carries, hence the drive toward consolidation, and the smaller publishers or independent societies are getting squeezed out.

Along with consolidation, we’re just starting to see the beginnings of a move away from content as the thing that publishers sell, and toward the selling of services. In many ways, this has long been the case anyway.

We are a service industry — journals perform a long list of services for researchers, meeting important needs for things like communication, career advancement, and filtering, so that readers know where to spend their very limited time and attention. And doing these things, particularly doing them well, with rigor and care, costs money. Traditionally, we provide those services to authors for free, and the costs are passed along to consumers of the literature through sales of the content produced.

Given that we are in an era where content (whether words, music, or images) is being devalued by the idea that everything on the internet should be magically free, publishers are starting to shift their business further and further away from relying on the sale of that content. Open access (OA) is a good example of this — the author needs services performed and they pay for them in a direct exchange, and the content is made freely available.

Elsevier is a company particularly worth watching in this space. In recent years they’ve acquired assets like Mendeley and the Social Science Research Network, and built their own powerful systems like Scopus. Digital Science, currently spun off from Springer Nature but possibly to rejoin it after the company’s IPO, is another good example of a publisher investing heavily in the things around the publications, rather than the publications themselves. An interesting recent thought from a colleague: a paper’s metadata may be more valuable than its content.

One sign of this loosening of the grip on the material itself, and of the move toward services, is the rise of preprints. I should probably put the word “rise” in quotation marks, since the use of preprints is nothing new in fields like the social sciences and physics, but the last year or so has seen a sudden interest in preprints from the biomedical world, which seems to think it just invented the concept. For those unfamiliar, we’re basically talking about circulating an early draft of a paper before it’s submitted to a journal for formal review. Unlike the old days, when you’d mail a copy of a preprint around to colleagues, we’re now using digital technologies to expose them to the world.

This is breaking down the somewhat stale concept of the Ingelfinger Rule, the idea that journals will only publish material that has never appeared in public before. We’re moving away from the idea that journals are the sole point of dissemination and registration for a researcher’s work, and focusing on the last three services on this list (see slide 8, below), validation, filtration, and designation, as the key services journals provide.

OA is of course another key service and a path to growth in the industry. It’s important to put that in perspective though—although we’ve seen enormous levels of growth in OA, it’s still a small portion of the market, around 4%.

In general, OA remains a low priority for most researchers. When you speak to researchers, they almost always think of themselves as authors rather than as readers, and as authors, access is not a primary concern. Here’s an annual survey from Nature; this chart shows how science researchers choose which journal to submit their articles to, and the top reasons are journal reputation and relevance, quality of peer review, and Impact Factor.

Here are the same survey results for humanities and social sciences authors, showing basically similar results, and a study just out from the University of California showed the same, with OA falling last on the list of priorities for authors choosing a journal.

So if OA isn’t a priority for most researchers, what’s driving the expansion of OA?

There are two real driving forces in play here, both economic. Library budgets are flat, if not declining. If you start a new journal, then in order to sell it to a library, the library has to drop a journal it currently buys. That used to be fairly easy, given that there were lots of weak journals that could be dropped. But libraries have now pruned all the low-hanging fruit from their budgets, and many journals are tied up in “Big Deal” collections and can’t be dropped. So it’s really hard to get a new subscription journal into the market.

OA lets us serve the needs of the research community without further burdening the libraries. What it has done is open up new sources of support that reach beyond the library. Commercial publishers in particular, driven by Wall Street’s demands, have to continually increase their earnings. In a flat economy, they can do this by gobbling up more and more of the current market, hence all the consolidation. OA fits in here as route 2 (see slide 12, below) because it’s seen as a new revenue stream, additive to the current market, basically money from funding agencies and researchers, rather than libraries. And route 3 is the new services built around the content.

The other major driver for OA is demand from funding agencies. Funding agencies want to get the most bang for their buck, and they see OA as a great tool for better dissemination of the discoveries they’re funding. Many funders are instituting new policies regarding open or public access to papers describing research they’ve funded, and those with existing policies are now getting strict about enforcing them, withholding funds from researchers who are not in compliance. And where the funding goes, the researchers are certain to follow.

This map (see slide 13, below) shows a sampling of policies, and ROARMAP currently tracks over 770 funder and institutional access policies. Hopefully the color-coding (green for Green OA policies, orange for Gold OA) lets you see something of the consensus that has formed.

The strongest proponents of Gold OA, other than some small but wealthy private funding agencies, are in the UK and the EU. Given the recent Brexit vote and the economic upheaval likely in the near term, it’s unclear how much follow-through there will be on these policies. They are proving expensive: these funders must pay to provide OA for their own authors while still paying for subscriptions to read papers from the rest of the world, and the policies have brought surprisingly high administrative costs for monitoring and compliance.

The big stumbling point for all of these policies is compliance. It costs a lot to monitor and enforce researcher behavior, no matter the policy. Institutions and funders have, for years, been trying to build repositories of their researchers’ work, and generally, researchers can’t be bothered to deposit the requested articles. The University of California system recently noted that the OA policy that years ago their own faculty voted to implement still only sees compliance from around 25% of those same faculty members.

PubMed Central (PMC), the NIH’s repository, has a much higher compliance rate, but this is significantly driven by publishers depositing on behalf of authors. This brings us back around to the notion of selling services rather than content. Elsevier is in the midst of a pilot program with the University of Florida, where the library is essentially outsourcing its repository to ScienceDirect. Authors and librarians don’t have to do anything, and papers are added to the repository automatically. This is an experiment worth watching, as it could open up a new area of author services for all journals and publishers.

But how will those services work? There are 770-plus policies, and each paper has multiple authors and multiple funding sources, and those authors often come from multiple institutions in multiple countries. It’s simply too much to track by hand. We need automated ways to process all of this, and the way we’re doing that is through Persistent Identifiers, or PIDs.

You’re probably familiar with DOIs, the digital object identifiers we use to permanently keep track of a published paper. ORCID iDs can be used to tag the paper with the researchers behind it, and what used to be called FundRef, now the Crossref Open Funder Registry, tags the paper with its funding sources. That gives us the paper, who wrote it, and who paid for the work. The next missing piece is an institutional identifier: how do we tag a paper and a researcher with the place where the work was done? Was the research done at the University of York in England, York University in Toronto, or York College in Pennsylvania? Right now there are more than 20 competing standards for this, which need to be winnowed down to one. We’re also seeing pilots for identifiers for the version of the paper (preprint, author’s manuscript, or version of record) and for the reuse rights available for the paper.
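To make that concrete, here’s a minimal sketch (in Python, against the public Crossref REST API; the DOI shown is a placeholder, and the field names follow the Crossref schema as I understand it) of pulling a paper’s record and listing the ORCID iDs and Open Funder Registry IDs attached to it:

```python
import json
import urllib.request

def fetch_pids(doi):
    """Fetch a work's metadata from the public Crossref REST API
    and pull out the persistent identifiers attached to it."""
    url = "https://api.crossref.org/works/" + doi
    with urllib.request.urlopen(url) as resp:
        work = json.load(resp)["message"]

    # Authors may carry ORCID iDs identifying who wrote the paper.
    orcids = [a["ORCID"] for a in work.get("author", []) if "ORCID" in a]

    # Funder entries carry Open Funder Registry DOIs identifying who paid.
    funders = [(f.get("name"), f.get("DOI")) for f in work.get("funder", [])]

    return {"doi": doi, "orcids": orcids, "funders": funders}

# Placeholder DOI -- substitute any real one.
print(fetch_pids("10.1000/example"))
```

With those three identifiers in hand, checking a paper against a funder’s access policy becomes a lookup rather than a research project.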

PIDs let us do an enormous number of things that we never could in the past, or at least make them a lot easier. We can follow the flow of research better than ever before and create automated systems to handle the complex tasks we can no longer do by hand. CHORUS, the system built to provide public access to US federally funded research, is an example here, where compliance is handled automatically, without direct intervention by the author, funder, or publisher.

If you’re not already integrating and working with these PIDs, I urge you to get moving on them as soon as possible. You will need them, and they will make your life much easier.

The policies around research papers are pretty simple and straightforward compared to those around research data, though the potential payoff from data availability is probably much higher. It’s an incredibly complex undertaking, with issues involving intellectual property, informed consent, and patient confidentiality. Many researchers are extremely hesitant, if not downright hostile, about releasing their data. Last week the New England Journal of Medicine featured a letter from 300 medical researchers asking for a slowing of these requirements.

In short, many funding agencies have policies that require, or at least encourage, the release of data from funded studies. This offers a really interesting opportunity for publishers: there are a lot of ways that publishers could serve the data needs of the research community, which opens up a slate of new business opportunities.

One of the big benefits of open data, beyond its reuse for new experiments, is adding transparency, and hopefully better reproducibility, to the literature. Right now most papers are very light on detail: materials and methods sections are tiny, if they exist at all, and usually “typical results are shown”.

This is helpful because we’re increasingly seeing questions about how reliable the scholarly literature really is. Estimates range from “very reliable” to “not at all”. Many of these claims are extremely sensationalistic, based on little evidence, or on things like economists trying to understand cancer cell biology, so the true reliability of the literature is really unclear. You may have seen a recent study from the Reproducibility Project finding that only 39% of the psychology studies they tested were reproducible, which was followed by a study from a group at Harvard showing that those reproducibility efforts were themselves not reproducible, and that most of the original studies in question were actually valid. Much of the confusion here stems from how we define “reproducible”. Do we mean that the conclusions of the study hold up when tested in different ways? Do we mean that I will reach the same conclusion if I test the same question? Do we mean that I can exactly replicate their results if I follow their exact experimental protocol?

More to the point, what can journals do to drive better reproducibility? Do we need to change our standards for statistical significance? Should we be doing more statistical reviewing of manuscripts? Are we part of the problem because we don’t publish detailed experimental protocols? Can we improve reproducibility by helping to make the data behind studies available?

Better data availability will help drive reproducibility, and hopefully reduce the next subject on our minds: misconduct. Why are we seeing such a rise in retractions across the literature? Are more and more people either being sloppy or cheating? Or is there just better scrutiny these days, and better technology for detecting misconduct?

Certainly digital publishing offers a host of new scams including citation rings among journals looking to boost their impact factors, the use of fake peer reviewers (or fake email addresses for real peer reviewers), or even, as we see with predatory publishers, not bothering with peer review at all. Though we’re 20 years into our digital transition, we’re still in many ways in the Wild West, with much to still be sorted out.

That digital environment brings many benefits which we’re still discovering, but also a dark side of many problems which also continue to arise. The move away from the physical object of the article to an instantly copied and distributed bit of digital code has many repercussions.

On the plus side, it massively improves the spread of information, which is why we do what we do in the first place. Article sharing, if done in a responsible and fair manner, is becoming codified, with best practices emerging. Where journals used to just turn a blind eye toward authors emailing someone a PDF of an article, now they are explicitly stating that private sharing, among colleagues or research groups, is perfectly fine with journal policies. Again, a shift away from rigid control of content.

There’s been a real effort to delineate the different versions of an article that exist: the Author’s Original Version, basically the preprint before it was submitted to the journal; the Accepted Manuscript, the article after it has been peer reviewed, at the point where it was accepted by the journal; and the Version of Record, the final, edited, and typeset published version. What you can legally do with each version, and when you can do it, is in the course of being defined.
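As a purely hypothetical sketch of how a platform might encode this taxonomy (the sharing terms below are invented for illustration, not any journal’s actual policy):

```python
from dataclasses import dataclass
from enum import Enum

class ArticleVersion(Enum):
    """The three commonly delineated versions of an article."""
    AUTHORS_ORIGINAL = "Author's Original Version"  # the preprint
    ACCEPTED_MANUSCRIPT = "Accepted Manuscript"     # post peer review, pre-typesetting
    VERSION_OF_RECORD = "Version of Record"         # final, edited, typeset article

@dataclass
class SharingRule:
    """Invented sharing terms for one version -- real policies vary
    by journal and are still in the course of being defined."""
    version: ArticleVersion
    may_post_publicly: bool
    embargo_months: int  # 0 means no embargo

EXAMPLE_POLICY = [
    SharingRule(ArticleVersion.AUTHORS_ORIGINAL, True, 0),      # preprints circulate freely
    SharingRule(ArticleVersion.ACCEPTED_MANUSCRIPT, True, 12),  # posted after an embargo
    SharingRule(ArticleVersion.VERSION_OF_RECORD, False, 0),    # stays with the journal
]
```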

Before we travel to the dark side, we need to pass through a grey area, that of scholarly collaboration networks like ResearchGate and Academia.edu. It is important to recognize what these are: privately owned, for-profit, venture-capital-backed business ventures. Don’t be fooled by the .edu name; the domain was purchased from someone who held it before .edu domains were restricted to actual educational institutions.

These are commercial organizations, with business models that range from selling ads to spying on researchers and selling data on what they’re reading and talking about to anyone willing to pay for it. They are both backed by tens of millions of investment dollars, but neither has shown any sort of viable business model as of yet. It’s hard to get by on ad dollars unless you’re the size of Google or Facebook, and there aren’t that many researchers on earth. It’s also unclear if anyone wants or is willing to pay for all that data.

Further, it’s not really clear that a significant proportion of the research community is using either site for any sort of social interaction, other than what has become the mainstay of activity for both sites, the quasi-legal downloading of research papers. There continue to be allegations that these networks are greatly infringing copyright as a means of driving traffic and activity.

The sites remind me of YouTube in its early days, which was filled with illegal content and faced years of costly lawsuits from content companies until all sides realized it was in their mutual interest to work together: to actively filter uploads for copyright violations, to sign licensing agreements, and to share revenue with copyright holders. The questions here are whether these sites have the staying power of YouTube, and whether we can reach that point of mutually beneficial cooperation without going through all the costly litigation.

Now we come to the dark side and flagrant piracy, which seems an unavoidable and unfortunate part of the digital environment. You’re all probably aware of Sci-Hub, the website accused of both copyright infringement and criminal hacking and stealing of passwords. Scholarly publishing has come somewhat late to the world of organized theft in this manner, but given that every other media has had to deal with it, it’s perhaps not surprising.

In many ways, piracy is just part of the digital landscape, a cost of doing business in a digital environment. But there is much that can be done to minimize the damage. Legal cases against Sci-Hub continue; they make it harder for the site to find online hosts and make life increasingly difficult for the people behind it. Still, while legal actions may make the site harder to reach, they are unlikely to make it go away altogether.

One valuable lesson from Sci-Hub is an industry-wide recognition that our security and authentication systems are no longer fit for purpose. Journal access is based on IP range, which is a technology that has long been abandoned by nearly every other company doing commerce on the internet. It is time for journals to catch up with Facebook, Google and the rest of the world and move to more secure systems like multifactor authentication. This will make it harder for pirates to gain illegal access to journals, and make it easier to track and shut down any security breaches.
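For those who haven’t seen it up close, here’s a minimal sketch of the kind of IP-range check a journal platform typically performs (Python standard library only; the address ranges are reserved documentation blocks, not real subscribers). Note that the check establishes where a request comes from, never who is making it:

```python
import ipaddress

# Hypothetical subscriber ranges -- in practice, campus-wide blocks.
SUBSCRIBED_RANGES = [
    ipaddress.ip_network("192.0.2.0/24"),     # Example University library
    ipaddress.ip_network("198.51.100.0/24"),  # Example Institute campus
]

def has_access(client_ip: str) -> bool:
    """Grant access if the request comes from a subscribed IP range.
    This says nothing about *who* is making the request, which is why
    one stolen proxy credential can expose an entire institution."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in SUBSCRIBED_RANGES)

print(has_access("192.0.2.17"))   # True -- on campus (or tunneled in)
print(has_access("203.0.113.5"))  # False -- off campus, even if entitled
```

That anonymity is exactly what multifactor authentication addresses: access gets tied to an individual identity, so a compromised credential can be spotted and revoked without cutting off a whole campus.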

Just as important are approaches that make the use of pirated materials less attractive. A few studies have shown that a good deal of traffic to Sci-Hub comes from researchers who have legal access to the journals, but see Sci-Hub as an easier way to get to the papers. The user experience at many universities is miserable, particularly for trying to gain access when one is away from campus or using a mobile device like a phone or a tablet. We need to do a better job of easing the work needed to get into our journals, to get that pathway down to the point where legal access is easier than piracy.

These are both big undertakings, particularly because so many of the systems in use are not under the control of publishers. Librarians and university IT directors need to be brought on board and need to upgrade their systems as well. They need to be clear on the dangers they currently face, both the legal liability for violating the contracts they’ve signed with publishers, and more importantly the huge security holes currently existing in their systems. Once a criminal gains access to a university’s systems to access journals, they very often have access to many other things including financial records and payment systems, medical records, student records and grading systems.

This is an enormous threat to universities, and more and more of them are recognizing the problem and starting on the path to better technologies. Expect to hear much more about this shift over the next few years.

I could go on for a few more hours but want to stop here because I think that last point, turning a piracy threat into an industry-wide effort to improve the quality of services offered, does a nice job of summing up our current lives as publishers. We’re venturing into unknown territories which come with both dangers and opportunities. You can go the Chicken Little route and see everything in terms of doom and gloom, or you can rise to the challenge of navigating uncharted waters and turn threats into new services and assets.

These are, without a doubt, “interesting times.”

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

20 Thoughts on "A Quick Tour Around the World of Scholarly Journal Publishing"

David, thank you for posting such a thorough review of the issues we face in the publishing industry. It beautifully captures what a complex system we are working within.

An excellent encapsulation of the present state in academic publishing!

I would maintain that the biggest threat to the status quo is the ‘consolidation phase’ that you describe, as this implies reduced competition, price gouging and an increase in anti-trust abuses. I agree that there is no reason to be pessimistic, as technological innovation and the entrepreneurial spirit will continue to challenge the behemoths. I predict that the academic community itself (who are both the content generators and users) will play an ever greater role in determining the future of the industry.

Regarding this statement, David, I see things rather differently: “CHORUS, the system built to provide public access to US federally funded research, is an example here, where compliance is handled automatically, without direct intervention by the author, funder or publisher.”

I track the federal program via my newsletter “Inside Public Access” so this is something I have written about frequently. I am a big fan of CHORUS but in fact providing funder metadata has proven to be labor intensive on the part of its member publishers. Also, last I knew the few agencies that presently use CHORUS also require that the authors directly submit accepted manuscripts to the agency, to populate the agency’s dark archive. So as I see it nothing is automated. But then you are on the inside with CHORUS, while I am on the outside, so maybe you can correct me.

It was meant to be illustrative of the kinds of things that can be done with automation rather than to reflect any individual funder’s policy. Different groups put different requirements on their fundees, but any of these things can be automated if the desire (and business reasons) are there. Providing metadata for CHORUS requires some setup work, but once in place is not an ongoing burden.

I think it is still very much an ongoing burden. For example, the CHORUS publisher implementation guide says this, in part:

“Studies by already implemented member publishers have found significant error rates in author metadata collected via peer review system UI, primarily omissions. Those error rates were reduced by improving the author instructions to emphasize that the funding information should match the acknowledgments text. The other workflow solution is to make this a production operation rather than an author task: have the funding information extracted from the acknowledgments text and mapped to the Open Funder Registry, verified during copy editing, and then the proposed funding metadata submitted to the author for confirmation at the proof stage.”

From http://www.chorusaccess.org/wp-content/uploads/CHORUS-Publisher-Implementation-Guide-2-1a-040516.pdf, page 7

This is all hand work and I think most members have opted for mining the acknowledgements. An artificial intelligence solution would be most helpful and the Optical Society has done some work in this direction but it is still a major challenge.

There are continuing efforts to improve the quality of data derived through the Crossref Open Funder Registry. How involved in these efforts one is varies from publisher to publisher, and most note somewhere in the process that the ultimate responsibility lies with the author of the paper: if they fail to properly identify their funding sources, the publisher is not responsible for any failure of compliance. How far one goes to help out those authors who mess up is up to each publisher. For example, at the page proof stage, we include a statement to the authors that they have stated their funding sources as X, and ask them to confirm that this is correct. This is done automatically on our end, and no further effort is needed unless they’ve messed up, at which point it’s about the same amount of work as any other correction at this stage.

You’re also mixing in here some of the information offered for publishers who don’t have an electronic submission system, or who are unwilling or unable to integrate funder information collection into the process in a standard way. Some, like the Optical Society, have put together their own automated systems for sourcing this from the article text; others pull it by hand. The effort involved varies, but to get back to my original point, if you’re integrating PIDs and using them to their fullest, you can automate a lot of these time-consuming processes.
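As a rough illustration of the kind of automation I mean, here’s a minimal sketch of matching acknowledgments text against funder names (the mini-registry below stands in for the Open Funder Registry, and real matching is considerably more involved):

```python
import difflib

# Tiny stand-in for the Open Funder Registry: canonical name -> funder DOI.
# Treat the whole lookup as illustrative, not a registry excerpt.
FUNDER_REGISTRY = {
    "national institutes of health": "10.13039/100000002",
    "national science foundation": "10.13039/100000001",
    "wellcome trust": "10.13039/100004440",
}

def match_funders(acknowledgments: str, cutoff: float = 0.85):
    """Suggest registry matches for funder names in an acknowledgments
    section, to be confirmed by the author at the proof stage."""
    words = acknowledgments.lower().split()
    suggestions = {}
    for name, funder_doi in FUNDER_REGISTRY.items():
        n = len(name.split())
        # Compare the registry name against every same-length phrase in
        # the acknowledgments, since authors rarely reproduce the
        # canonical funder name exactly.
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if difflib.SequenceMatcher(None, name, phrase).ratio() >= cutoff:
                suggestions[name] = funder_doi
                break
    return suggestions

print(match_funders(
    "This work was supported by the National Institutes of Health "
    "and the Wellcome trust."))
```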

Excellent overview, but surprising in its omission of “predatory” journal publishers as a problem. The discussion of reproducibility brings to mind that this is one criterion that applies to both the natural and social sciences but has no bearing, as far as I can see, on the humanities. What would it even mean to talk about “reproducible results” for work in the humanities?

I mention them, but I’m not convinced that predatory journals are a long term threat. They’re a result, in some ways, of moving to digital platforms and new business models and I’ve yet to see any convincing data on whether they are actually fooling a lot of legitimate researchers, or if they serve more as the “journal of last resort” for cranks.

If anything, as researcher awareness continues to improve, they should play less and less of a role and activity will likely fade (personally, I’m flummoxed by the idea of publishing in a journal that you don’t actively read). The real danger that they pose is more in the court of public opinion, and if journalists fail to understand the difference between real journals and fake journals, then some damage could be done to the public’s faith in science as fake studies could get some popular coverage.

Good question on whether the term “reproducible” is even relevant in the Humanities, which is often more argument-based.

I’ve yet to see any convincing data on whether they are actually fooling a lot of legitimate researchers, or if they serve more as the “journal of last resort” for cranks.

I continue to believe that the most troubling danger these publishers pose is not to authors (who, I agree, are not likely to be fooled) but rather to the institutions that hire authors as faculty members, and that are not in a position to look up every article listed on every applicant’s CV to see what percentage represents legitimate publications and what percentage is predatory padding. It’s pretty easy to recognize the website of a predatory journal; it’s much less easy to recognize a citation to a predatory journal, especially when that citation is one of ten (or even twenty or forty). In other words, I think it’s not so much about predatory publishers fooling authors as it is about predatory publishers helping authors to fool their colleagues.

As I’ve said before, a predatory publisher is like a diploma mill for articles.

I would hope that any hiring at a university would involve having a candidate evaluated by someone knowledgeable in their field (either someone at the university itself or an expert brought in from outside). That person should be able to recognize any weird journals on a CV and hopefully raise some red flags.

All candidates are evaluated by people knowledgeable in the field, of course. The question is what kind and level of evaluation is possible when you have, say, 60 candidates, each with a CV listing tens or scores of publications (or more). In some disciplines, a more typical candidate pool will number in the hundreds.

Of course, the more you narrow your pool the more possible it becomes to scrutinize every publication. At some point you’re going to have only three or four finalists, so at that point you may be dealing with only somewhere between, say, 80 and 300 publications in total. Is the search committee going to research the publisher of every article? (Or even every article that isn’t from a journal that the committee members immediately recognize as legitimate?) Maybe they should. I promise you they won’t, and in that gap, I think, lies the real money-making opportunity for predatory publishing.

At some point, somebody has to read the articles. That point may be after you’ve narrowed down 500 candidates to 5, but if no one has read the actual papers before the hiring takes place, then there’s been some serious dereliction of duty.

In your experience, does each member of an academic search committee generally read, say, 300 articles in the course of selecting a final candidate? That’s not been my experience, but maybe we’re unusually derelict in academic libraries. . .

No, but once you’re down to 30 candidates, looking at their publication record isn’t a huge burden, then when you’re down to 5 final candidates, reading 5 papers from each doesn’t seem too crazy.

Totally agree. But how do you select the 5 articles when they’ve listed 50? Sure, you can rule out the citations to Nature and NEJM, but the citation to an article in the bogus Histology Letters B is going to look an awful lot like the legitimate citations around it.

I’m not saying these guys fool everyone all the time, only that they have a reasonable expectation of being able to fool some of the people some of the time. And when they do get caught and called out, they can simply disappear and reappear two weeks later with a different journal under a different publisher name.

You make the applicant pick their top 5: tell me what you think is most representative of your work.

This is an excellent overview, David. I would like to contextualize this statement: “Scholarly publishing has moved into the digital age more gracefully and more successfully than nearly any other media. Compare where we are with the current state of newspapers or magazines. They’re struggling for their very survival while we’re expanding and experimenting with new business models, new technologies and new ways to better fulfill our mission.” I don’t disagree, but the different sources of funding for these two types of content — newspapers/magazines vis-à-vis scholarly materials — are crucial. News sources always relied on individual subscriptions supplemented by advertisers, so that the reader of the material always paid (at least to some extent) for what they read. There was a direct connection between reader and producer. Once that moved online, we learned that many readers were satisfied to obtain their news for free — both because reputable news providers gave their online content away for free for too long, and because unscrupulous aggregators and click-bait factories rapidly flooded the zone. With scholarly content, of course, academic librarians serve as intermediaries that insulate readers from all costs associated with obtaining scholarly material. This has not changed at all in the online world. And as a service-oriented profession, the librarian’s highest duty is to obtain what their users want. These are tremendous advantages for scholarly publishers, relative to news producers.

I agree with your thoughts on this distinction, but to me, a key strategic advantage for scholarly publishers was going online and immediately charging customers, rather than putting things up for free and assuming ads would pay for it all. The newspapers’ free approach set the tone, and reinforced the notion that everything online should be free. It’s a lot easier to start by charging for something and then lessen that over time than it is to make people who are used to getting something for free start paying for it.
