A picture of a barometer pointing to "change"
Photo from iStockphoto.

(Joe Esposito Notes: Michael Clarke’s piece on disruption in scientific publishing is to my mind the most incisive post yet to appear on the Scholarly Kitchen. The key point Michael makes is that for all the talk of disruption, scientific publishing in fact has not been disrupted. There is the appearance of disruption (e.g., most journals are now electronic), but the business continues to proceed pretty much as it has for a couple of decades or more. Michael goes on to explain why the Great Disruption has not taken place, and locates the reasons in a series of network externalities. It is worth noting that advocates of open access, focusing purely on access and ignoring the externalities that Michael identifies, seem to be unaware that there is much more to scholarly communications than the ability to read a text.)

Looking back on 2009, there was one particular note that seemed to sound repeatedly, resonating through the professional discourse at conferences and in posts throughout the blogosphere: the likelihood of disruptive change afoot in the scientific publishing industry.

Here in the digital pages of the Scholarly Kitchen, for example, we covered John Wilbanks’ presentation at SSP IN and Michael Nielsen’s talk at the 2009 STM Conference. They were both thoughtful presentations and I agree with many of the points raised by both speakers. I think Wilbanks is right when he says that thinking of information in terms of specific containers (e.g. books, journals, etc.) presents an opening to organizations in adjacent spaces who are able to innovate without the constraints of existing formats. I also agree with Nielsen’s point that acquiring expertise in information technology (and especially semantic technology)—as opposed to production technology—is of critical importance to scientific publishers and that those publishers who do not acquire such expertise will fall increasingly behind those organizations that do.

It has occurred to me, however, that I would likely have agreed with arguments that scientific publishing was about to be disrupted a decade ago—or even earlier. That we are speculating on the possibility of the disruption of scientific publishing in 2010 (here we are talking of “disruption” in the sense described by Clay Christensen in his seminal book The Innovator’s Dilemma) is nothing short of remarkable.

Lest we forget (and this is an easy thing to do from the vantage of the second decade of the 21st century), the World Wide Web was not built for the dissemination of pornography, the sale of trade books, the illegal sharing of music files, dating, trading stocks, reading the news, telecommunications, or tracking down your high school girlfriend or boyfriend. As it turns out, the Web is particularly good for all these activities, but these were not its intended uses.

When Tim Berners-Lee created the Web in 1991, it was with the aim of better facilitating scientific communication and the dissemination of scientific research. Put another way, the Web was designed to disrupt scientific publishing. It was not designed to disrupt bookstores, telecommunications, matchmaking services, newspapers, pornography, stock trading, music distribution, or a great many other industries.

And yet it has.

It is breathtaking to look back over the events of the last 18 years since the birth of the Web. It has grown from an unformed infant, to a promising adolescent, to a sometimes-unruly teenager. In that time we have witnessed vast swaths of the global economy reconfigured as new industries emerged and old industries were upended. New modes of communication have transformed the workplace—and the home lives—of hundreds of millions of people. From the vantage of 1991, it would have been impossible to predict all that has happened in the last 18 years. No one would have believed that much could change that quickly.

And yet it has.

The one thing that one could have reasonably predicted in 1991, however, was that scientific communication—and the publishing industry that supports the dissemination of scientific research—would radically change over the next couple decades.

And yet it has not.

To be sure, many things have changed. Nearly all scientific journals (and an increasing number of books) are now available online. Reference lists are interconnected via digital object identifiers (DOIs). Vast databases such as GenBank and SciFinder have aggregated and parsed millions of biological and chemical sequences and structures. Published research is more accessible than ever via search tools such as Google Scholar, PubMed, and Scopus. New business models, such as open access and site licensing, have emerged. And new types of communication vehicles have emerged, such as the preprint server arXiv, video journals such as JoVE and the Video Journal of Orthopaedics, and online networks such as Nature Network, Mendeley, and (most recently) UniPHY—to name just a few innovations. To be sure, scientific publishers have not ignored the Web. They have innovated. They have experimented. They have adapted. But it has been incremental change—not the disruptive change one would have predicted 18 years ago.

Looking back at the publishing landscape in 1991, it does not look dramatically different from today, at least in terms of the major publishers. The industry has been relatively stable. And one would be hard pressed to characterize the number of mergers and acquisitions that have occurred as particularly numerous relative to other industries. Moreover, these mergers and acquisitions are more likely to be explained by the rise of private equity and the availability of cheap capital than by technological innovations related to publishing.

The question then becomes not whether scientific publishing will be disrupted, but rather why it hasn’t been disrupted already.

In examining the reason for this surprising industry stability, I think it is useful to start by looking closely at the functions that journals—still the primary vehicles for the formal communication of research—serve in the scientific community. Why were journals invented in the first place? What accounts for their remarkable longevity? What problems do they solve and how might those same problems be solved more effectively using new technologies?

Initially, journals were developed to solve two problems: dissemination and registration.

Dissemination. Scientific journals were first and foremost the solution to the logistical problem of disseminating the descriptions and findings of scientific inquiry. Prior to 1665, when both the Journal des sçavans and the Philosophical Transactions were first published, scientists communicated largely by passing letters between each other. By 1665, however, there were too many scientists (or, more accurately, there were too many educated gentlemen with an interest, and in some cases even an expertise, in “natural philosophy”) for this method to be practical. The solution was to ask all such scientists to mail their letters to a single person (such as, in the case of the Philosophical Transactions, Henry Oldenburg) who would then typeset, print, and bind the letters into a new thing called a journal, mailing out copies to all the other (subscribing) scientists at once.

While the journal was a brilliant solution to the dissemination problems of the 17th century, I think it is safe to say that dissemination is no longer a problem that requires journals. The Internet and the World Wide Web allow anyone with access (including, increasingly, mobile access) to the Web to view any page designated for public display (we will leave aside the issue of paywalls in this discussion). If dissemination were the only function served by journals, journals would have long since vanished in favor of blogs, preprint servers (e.g. arXiv), or other document aggregation systems (e.g. Scribd).

Registration. Registration of discovery—that is to say, publicly claiming credit for a discovery—was, like dissemination, an early function of journal publishing. Ironically, the Philosophical Transactions was launched just in time to avert the most notorious scientific dispute in history—and failed to do so. The Calculus Wars were largely a result of Newton, who developed his calculus by 1666, failing to avail himself of Oldenburg’s new publication vehicle. By the time the wars ended in 1723, Newton and Leibniz had done more to promote the need for registration than any other individuals before or since. Oldenburg could not have scripted a better marketing campaign for his invention.

As enduring as journals have been as a mechanism for registration of discovery, they are no longer needed for this purpose. A preprint server that records the time and date of manuscript submission can provide a mechanism for registration that is just as effective as journal publication. Moreover, by registering a DOI for each manuscript, an additional record is created that can further validate the date of submission and discourage the possibility of tampering.
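As a minimal sketch of how a DOI record can serve as an independent registration timestamp, one could query the public Crossref REST API, which reports when each DOI record was first created. The DOI in the example below is a placeholder, not a real identifier:

```python
# A minimal sketch, assuming the public Crossref REST API (api.crossref.org).
# The DOI below is a placeholder; substitute any Crossref-registered DOI.
import json
import urllib.request

def doi_created_date(doi: str) -> str:
    """Return the date-time at which Crossref first created the record for a DOI."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    # Crossref records when the deposit was first made, providing an
    # independent witness to the date of registration.
    return record["message"]["created"]["date-time"]

print(doi_created_date("10.xxxx/placeholder"))  # hypothetical DOI
```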

While journals are no longer needed for the initial problems they set out to solve (dissemination and registration), there are three additional functions that journals serve that have developed over time. These later functions—comprising validation (or peer review), filtration, and designation—are more difficult to replicate through other means.

Validation. Peer review, at least in the sense most journals practice it today, was not a common function of early scientific journals. While journal editors reviewed submitted works, the practice of sending manuscripts to experts outside of the journal’s editorial offices for review was not routine until the latter half of the 20th century. Despite the relatively late provenance of peer review, it has become a core function of today’s journal publishing system—indeed some would argue its entire raison d’être.

Schemes have been proposed over the years for decoupling peer review from journal publishing, Harold Varmus’ “E-Biomed” being perhaps the best-known example. There have additionally been several experiments in post-publication peer review—whereby review occurs after publication—though in such cases, journal publication is still attached to peer review, simply at a different point in the publication process. To date, no one has succeeded in developing a literature peer-review system independent of journal publication. One could imagine a simple online dissemination system, like arXiv, coupled with peer review. And indeed one could make the case that this is precisely what PLoS One is, though PLoS considers PLoS One to be a journal. It is perhaps not an important distinction once one factors out printed issues, which I don’t think anyone would argue are central to the definition of a journal today.
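As a thought experiment, here is a minimal sketch of what such a decoupled system might record; all names and fields are invented for illustration, not drawn from any existing platform:

```python
# A minimal sketch of decoupled peer review, in which reviews attach to a
# registered manuscript rather than to a journal. All names and fields are
# invented for illustration.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Preprint:
    doi: str        # registration: a citable, timestamped record
    title: str
    posted: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

@dataclass
class Review:
    preprint_doi: str   # validation attaches to the manuscript...
    reviewer: str       # ...not to any journal's editorial office
    verdict: str        # e.g., "sound", "revise", "flawed"
    comments: str

# Dissemination and registration are handled by the preprint server;
# validation becomes a separate layer that any journal (or none) could
# consume downstream.
paper = Preprint(doi="10.xxxx/example.001", title="On Disruption")
reviews = [
    Review(paper.doi, "Reviewer A", "sound", "Methods are appropriate."),
    Review(paper.doi, "Reviewer B", "revise", "Clarify the control group."),
]
print(f"{paper.title}: {len(reviews)} independent reviews on record")
```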

Filtration. In 1665 it was fairly easy to keep up with one’s scientific reading—it required only two subscriptions. Over the last few centuries, however, the task has become somewhat more complicated. In 2009 the number of peer-reviewed scientific journals likely exceeds 10,000, with a total annual output exceeding 1 million papers (both Michael Mabe and Carol Tenopir have estimated the number of peer-reviewed scholarly journals at between 22,000 and 25,000, with STM titles being a subset of this total). Keeping up with papers in one’s discipline, never mind the whole of science, is a challenge. Journals provide important mechanisms for filtering this vast sea of information.

First, with the exception of a few multi-disciplinary publications like Nature, Science, and PNAS, the vast majority of journals specialize in a particular discipline (microbiology, neuroscience, pediatrics, etc.). New journals tend to develop when there is a branching of a discipline and enough research is being done to justify an even more specialized publication. In this way, journals tend to support a particular community of researchers and help them keep track of what is being published in their field or, of equal importance, in adjacent fields.

Second, the reputations of journals are used as an indicator of the importance to a field of the work published therein. Some specialties have dozens of journals—too many for anyone to possibly read. Over time, however, each field develops a hierarchy of titles. The impact factor is often used as a method for establishing this hierarchy, though other, less quantitative criteria also come into play. This hierarchy allows a researcher to keep track of the journals in her subspecialty, the top few journals in her field, and a very few generalist publications, thereby reasonably keeping up with the research that is relevant to her work. Recommendations from colleagues, conferences, science news, and topic-specific searches using tools such as Google Scholar or PubMed might fill in the rest of a researcher’s reading list.
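For reference, the two-year impact factor that typically anchors this hierarchy is a simple ratio (this is the standard definition used by the journal citation indexes, stated here for clarity):

\[
\mathrm{IF}_Y = \frac{\text{citations received in year } Y \text{ by items published in years } Y-1 \text{ and } Y-2}{\text{number of citable items published in years } Y-1 \text{ and } Y-2}
\]

A journal that published 200 citable articles across 2007–2008, and whose articles drew 500 citations in 2009, would thus have a 2009 impact factor of 2.5.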

Still, filtration via journal leaves scientists with a lot of reading. This has prompted a number of developments over the years, from informal journal clubs to review journals to publications like Journal Watch that summarize key articles from various specialties. Most recently, Faculty of 1000 has attempted to provide an online article rating service to help readers with the growing information overload. These are all welcome developments and provide scientists with additional filtration tools. However, they themselves also rely on the filtration provided by journals.

Journal clubs, Journal Watch, and Faculty of 1000 all rely on editors (formally or informally defined) to scan a discipline that is defined by a set of journals. Moreover, each tool tends to weight its selection towards the top of the journal hierarchy for a given discipline. None of these tools therefore replaces the filtration function of journals—they simply act as a finer screen. While there is the possibility that recent semantic technologies will be able to provide increasingly sophisticated filtering capabilities, these technologies are largely predicated on journal publishers providing semantic context to the content they publish. In other words, as more sophisticated filtering systems are developed, they tend to augment, not disrupt, the existing journal publication system.

Designation. The last function served by scientific journals, and perhaps the hardest to replicate through other means, is that of designation. By this I mean that many academic institutions (and other research organizations) rely, to a not insignificant degree, on a scientist’s publication record in career advancement decisions. Moreover, a scientist’s publication record factors into award decisions by research funding organizations. Career advancement and funding prospects are directly related to the prestige of the journals in which a scientist publishes. As such a large portion of the edifice of scientific advancement is built upon publication records, an alternative would need to be developed and firmly installed before dismantling the current structure. At this point, there are no viable alternatives—or even credible experiments—in development.

There are some experiments that seek to challenge the primacy of the impact factor with the aim of shifting emphasis to article-centric (as opposed to journal-centric) metrics. Were such metrics to become widely accepted, journals would, over time, cease to carry as much weight in advancement and funding decisions. Weighting would shift to criteria associated with an article itself, independent of publication venue. Any such transition, however, would likely be measured not in years but in decades.

The original problems that journals set out to solve—dissemination and registration—can indeed be handled more efficiently with current technology. However, journals have, since the time of Oldenburg, developed additional functions that support the scientific community—namely validation, filtration, and designation. It is these later functions that are not so easily replaced. And it is by looking closely at these functions that an explanation emerges for why scientific publishing has not yet been disrupted by new technology: these are not technology-driven functions.

Peer review is not going to be substantively disrupted by new technology (indeed, nearly every STM publisher employs an online submission and peer-review system already). Filtration may be improved by technology, but such improvements are likely to take the form of augmentative, not disruptive, developments. Designation is firmly rooted in the culture of science and is also not prone to technology-driven disruption. Article-level metrics would first have to become widely adopted, standardized, and accepted before any such transition could be contemplated—and even then, given the amount of time that would be required to transition to a new system, any change would likely be incremental rather than disruptive.

Given these three deeply entrenched cultural functions, I do not think that scientific publishing will be disrupted anytime in the foreseeable future. That being said, I do think that new technologies are opening the door for entirely new products and services built on top of—and adjacent to—the existing scientific publishing system:

  • Semantic technologies are powering new professional applications (e.g. ChemSpider) that more efficiently deliver information to scientists. They are also beginning to power more effective search tools (such as Wolfram Alpha) meaning researchers will spend less time looking for the information they need.
  • Mobile technologies are enabling the ability to access information anywhere. Combined with GPS systems and cameras, Web-enabled mobile devices have the potential to transform our interaction with the world. As I have described recently in the Scholarly Kitchen, layering data on real-world objects is an enormous opportunity for scientists and the disseminators of scientific information. The merger of the Web and the physical world could very well turn out to be the next decade’s most significant contribution to scientific communication.
  • Open data standards being developed now will allow for greater interoperability between data sets, leading to new data-driven scientific tools and applications. Moreover, open data standards will lead to the ability to ask entirely new questions. As Tim Berners-Lee pointed out in his impassioned talk at TED last year, search engines with popularity-weighted algorithms (e.g. Google, Bing) are most helpful when one is asking a question that many other people have already asked. Interoperable, linked data will allow for the interrogation of scientific information in entirely new ways (a toy illustration follows this list).
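To make that last point concrete, here is a toy sketch of a linked-data query, assuming the Python rdflib package; the vocabulary and the two “datasets” are invented for illustration:

```python
# A toy illustration of querying interoperable linked data, assuming the
# rdflib package. The vocabulary and records below are invented.
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/science/")

g = Graph()
# Two hypothetical records that interoperate because they share a vocabulary.
g.add((EX.paper1, EX.reportsGene, Literal("BRCA1")))
g.add((EX.paper1, EX.studiesDisease, Literal("breast cancer")))
g.add((EX.paper2, EX.reportsGene, Literal("BRCA1")))
g.add((EX.paper2, EX.studiesDisease, Literal("ovarian cancer")))

# The kind of question a popularity-weighted search engine answers poorly:
# "which diseases have been linked, by any record, to the same gene?"
query = """
    PREFIX ex: <http://example.org/science/>
    SELECT DISTINCT ?disease WHERE {
        ?paper ex:reportsGene "BRCA1" .
        ?paper ex:studiesDisease ?disease .
    }
"""
for row in g.query(query):
    print(row.disease)
```

The point is not the toy data but the query pattern: once independent data sets share identifiers and vocabularies, questions can span them without anyone having anticipated the question in advance.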

These new technologies, along with others not even yet imagined, will undoubtedly transform the landscape of scientific communication in the decade to come. But I think the core publishing system that undergirds so much of the culture of science will remain largely intact. That being said, these new technologies—and the products and services derived from them—may shift the locus of economic value in scientific publishing.

Scientific journals provide a relatively healthy revenue stream to a number of commercial and not-for-profit organizations. While some may question the prices charged by some publishers, Don King and Carol Tenopir have shown that the cost of journals is small relative to the cost, as measured in the time of researchers, of reading and otherwise searching for information (to say nothing of the time spent conducting research and writing papers). Which is to say that the value to an institution of workflow applications powered by semantic and mobile technologies and interoperable linked data sets may exceed that of scientific journals. If such applications can save researchers (and other professionals that require access to scientific information) significant amounts of time, their institutions will be willing to pay for that time savings and its concomitant increase in productivity.

New products and services that support scientists through the more effective delivery of information may compete for finite institutional funds. And if institutions allocate more funds over time to these new products and services, there may be profound consequences for scientific publishers. While this will likely not result in a market disruption, as scientific journals will remain necessary, it will nonetheless create downward pressure on journal (and book/ebook) pricing. This could, in turn, lead to a future where traditional knowledge products, while still necessary, provide much smaller revenue streams to their publishers. And potentially a future in which the communication products with the highest margins are not created by publishers but rather by new market entrants with expertise in emerging technologies.

The next decade is likely to bring more change to scientific publishing than the decade that just ended. However, it will likely continue to be incremental change that builds on the existing infrastructure rather than destroying it. It will be change that puts pressure on publishers to become even more innovative in the face of declining margins on traditional knowledge products. It will be change that requires new expertise and new approaches to both customers and business models. Despite these challenges, it will be change that improves science, improves publishing, and improves the world we live in.

Michael Clarke

Michael Clarke is the Managing Partner at Clarke & Esposito, a boutique consulting firm focused on strategic issues related to professional and academic publishing and information services.

Discussion

19 Thoughts on "Stick to Your Ribs: Why Hasn't Scientific Publishing Been Disrupted Already?"

I can agree with your process and conclusions, but your reference [“some may question”] to a Wikipedia article, “Serials crisis,” makes me choke. Like the 1989 “ARL Serials Prices Project” report, the crown jewel of which was a set of conclusions based on an unsigned economic consultant’s report, the Wiki-piece is heavily slanted in favor of university management who wish to blame authors and publishers for the input-output disparities of their own science policy spending.

I really enjoyed reading this very well written overview. Yet, I am not convinced that its portrayal of the likely future is accurate. Like the Maginot Line, which looked to the past in order to defend against the future, it errs in anticipating the forces that can and probably will undo it.
The advent of ePublishing technologies is a necessary but not sufficient condition. Thus, focusing on technology alone is confounding.
The more probable engine of disruption is economic and focused on journal consumers, primarily colleges and universities. Not even Harvard can keep up with the costs. The next-to-last paragraph comes close to envisioning this possibility but relegates that scenario to the unlikely.
The direct and indirect costs of scholarly publications are ultimately passed on to students. Textbooks and other academic books are a direct levy and journals are indirect. Both are driven by professor assignment. Both contribute substantially to the price tag of a college degree and that cost is rising much faster than the Consumer Price Index.
This is where disrupting pressure is being introduced. That pressure will circumvent the Maginot Line of cultural barriers to radical change: the filtering and rating of academic work, the promotion and tenure system, etc. Note the diminished number of tenure-track positions.
We already have politicians calling for the $10K degree and various accountability tactics that tie student outcomes to funding. The subsidy is drying up even now.

Thanks Frank. I’m not sure if you are agreeing or disagreeing when you say that “focusing on technology alone is confounding.” I agree completely and the point the post makes is that it is cultural, not technological, functions of journals that have acted as a bulwark against disruption.

I do disagree with your statement that “not even Harvard can keep up with costs.” Journal subscriptions constitute a rounding error in Harvard’s annual budget – less than half of a percent last time I looked. The university likely spends far more on groundskeeping. I agree that the cost of tuition at many universities is out of control, but the subscription fees of journals, at least for research universities, are not a significant contributing factor.

I’m not sure the metaphor of the Maginot Line works in this context as it was a technological barrier which was overcome by advances in technology (improvements in tanks and aircraft). I take your point that looking to the past is not predictive of the future. The point I’m making, however, is that the cultural barriers have prevented journal publishing from being disrupted for over 20 years and cultural barriers are not as susceptible to technological disruption as technological barriers are.

That is not to say that emerging changes in the way research is assessed and validated will not cause substantive change — they very well may. However, the cultural barriers will likely slow the rate of that change to the point that it will be incremental and evolutionary — not revolutionary and disruptive — and will be co-opted by established players. This is how things have played out so far. Mendeley is now owned by a traditional publisher. PLoS publishes journals that are by-and-large traditional except they call their page charges “article processing fees” and don’t levy a subscription. [It should be noted here that the reason commercial publishing flourished immediately following WWII was that societies, which published most journals up to that point in time, levied relatively large page charges while charging very little in the way of subscriptions. As researchers, especially in Europe and Japan, had very little money to spend on publication fees at that point, the commercial houses reduced or eliminated page charges and instead levied higher subscription rates on libraries. PLoS has essentially gone back to a pre-WWII society business model.] arXiv has augmented physics publishing, not disrupted it. And so on.

Some numbers for Harvard can be found here:
http://vpf-web.harvard.edu/annualfinancial/
The 2012 financial year saw operating revenue of $4.0 billion (revenue from students alone was $777 million).
The most recent number I can find for Harvard’s research journal budget is from 2010, and was then some $3.75 million.

Barring an enormous increase between then and 2011-12, that would put it at 0.09375% of total operating revenue. Note that some $19,728,000 was spent that year on “Advertising”.

Thank you Michael for your lucid arguments. If we agree that “journals are no longer needed for the initial problems they set out to solve (dissemination and registration)” we are left with “validation”, “filtration” and “designation”.

As you point out, “validation” can be decoupled from journal publishing if and when it is coupled with public preprint servers like arXiv, PeerJ, bioRxiv, figshare, etc. I think that here we should mention a recent trend in scholarly communication that we could call journal-independent peer review, but that has also been termed “portable” and “crowdsourced.” A series of independent peer review platforms with different flavours already exists, and they all enable the formal evaluation of manuscripts that have not yet been published by an academic journal (we can mention peerevaluation.org, peerageofscience.org, publons.com, science-open-reviewed.com and the forthcoming libreapp.org). Your only observation, rather than argument, here is that “to date, no one has succeeded in developing a literature peer-review system independent of journal publication.” Given the little time that these new initiatives have been around, I think we should wait a little longer before we reach any safe conclusion. And if we also agree that peer review “has become a core function of today’s journal publishing system—indeed some would argue its entire raison d’être,” then maybe we should anticipate that the independent peer review movement can bring an even greater disruption to scientific publishing than the open access movement has brought.

Filtration is important, and indeed journals play a significant role in it. But advancement in that area is not entirely non-technological, contrary to what you argue. Open bibliographies collectively built and sorted by the community (libreapp.org will include this feature, and Mendeley is already doing something similar with great success), advanced search and filter options in large online databases, open annotations, and advanced database interoperability can significantly improve the retrieval of relevant research. However, as I said, journals are important here, and I would expect that “filtration” will largely continue to depend on journals in the future scholarly communication landscape.

Designation, as you say, is the journal function that is the “hardest to replicate through other means.” This is true if we continue to think in terms of closed, anonymous, journal-handled peer review, where all the qualitative information included in the referee reports is reduced to a simple “yes” or “no” decision. Open, journal-independent peer review will bring to light this information, which can also be quantified as ratings under specific categories (e.g. importance, methodology, clarity, overall quality, etc.). I understand why there are doubts on whether we should trust altmetrics, such as downloads and tweets, but I am certain that the research community, and eventually university committees and funding agencies, will pay serious attention to the reports by known (as in non-anonymous) experts who have openly assessed the work in question.

For these reasons, I believe that journals still have an important future in scholarly communication for filtering and further disseminating already published research—“published” as in “made public” through free online preprint servers and institutional repositories. Importantly, when they assume and focus on these two crucial functions, they will inevitably have to reduce the subscription or publication fees that can only be justified by their current power for validation and designation of scientific research.

It’s probably worth noting that this is a re-visiting of Michael’s post from January of 2010, and as you point out, new ventures have certainly come onto the scene since then.

Pandelis – I agree that the recent experiments in alternatives to journal validation are interesting. The question will be whether they act as truly independent review mechanisms for papers that are deposited in open archives (e.g. arXiv, institutional repositories, PeerJ’s preprint server, etc.) or as feeder systems to traditional journals.

The recent DORA initiative is also worth noting in the context of designation. If more tenure and grant award committees do start weighing non-journal metrics more heavily, this could lead to substantive changes. Oddly, the list of signatories at this point seems to include more publishers and societies than universities (and it is far from clear whether those signing from universities are speaking for the whole of their universities as opposed to their own departments and labs).

“Decoupling peer review from the journal” is an interesting idea. It could also be expressed as “disintermediating the journal editor,” because crowdsourcing initiatives require reviewers to self-select, self-motivate, and self-quality-control. The question is: has technology advanced sufficiently to be an effective substitute for the role of the journal editor?

We should not underestimate the depth of knowledge and commitment of scholarly journal editors.

Software should be used to empower editors, not replace them.

If [applications powered by semantic and mobile technologies and interoperable linked data sets] can save researchers (and other professionals that require access to scientific information) significant amounts of time, their institutions will be willing to pay for that time savings and its concomitant increase in productivity.

So one would think. However, given those institutions’ manifest unwillingness to underwrite the rising cost of journals now (cf. all the scores of articles on the “serials crisis”), I’m not sure we can comfortably assume that they will be willing to underwrite the cost of new tools that do the same thing, only better. The serials crisis is real, but it’s only partly a problem of aggressive price hikes; it’s also a problem of stagnant library budgets. If STM journal prices were rising at a rate of 0.5% per year, there would be no serials crisis; if library budgets were rising at 6-10% per year, there would also be no crisis.

Tenopir and King are right that, even at very high prices, journals generally provide good value for money. The problem is that at virtually no institution (other than outliers like Harvard) is there sufficient money in the library budget to let them take advantage of all the value their students and faculty need, and the disparity between budget and price is growing every year. That’s the crisis, and it’s real.

Maybe I am not understanding the idea of third-party reviewing prior to publication. I am the publisher of a journal and have an editorial board and an editor-in-chief. Papers come in and go to the EIC, who then sends them to the appropriate board members, who then send the papers out for review. Why do I need a third party?

Additionally, why would I want to be a reviewer of a paper that has not gone through the submission process? For whom am I reviewing – that matters. Am I to serve as a filter as to whom the author should send his/her paper? What if I am wrong? Does the paper end up in a revolving-door system?

What about cost? The author now pays to review and to publish!

Harvey, in my opinion third-party review should not be regarded primarily as a service to journals, but to authors and science in general. The idea is to find an alternative model to journal peer review, which obviously doesn’t work as well as we—authors—would like it to work. If independent peer review were to be established, however, it would also benefit journals. Imagine that articles submitted to your journal arrive accompanied by several reports from known experts that guarantee the article’s quality. It reduces the risk of accepting methodologically flawed papers.

Importantly, independent review should be open (instead of blind or double blind) so that reviewers receive recognition for their work. This offers clear incentives in contrast to journal-handled anonymous review where most scholars agree to review as a favor to the editor —and usually when we do favors we expect something in return!

Self-publishing, including publication to preprint servers, institutional repositories, or even personal websites and blogs, peer review by free independent platforms, and dissemination through social media and other established communication channels can now be performed at no cost to authors. And it is open access for all!

Also read: New forms of open peer review will allow academics to separate scholarly evaluation from academic journals

Social factors aren’t as susceptible to disruption as technological ones, but when they go, they go all at once. Look at how access to your physical location is now a default activity built into so many smartphone apps, for example. One could easily have made a convincing argument several years ago that no one wants to walk around broadcasting exactly where they are to hundreds of private companies. Now it’s common practice to investigate smartphone records in criminal cases.

To me, this is the key thing about disruption – it can’t be predicted, least of all by the people with vested interests in maintaining the status quo. It’s a Black Swan. This article presents the fact that scholarly communication hasn’t been disrupted yet and gives some convincing post-hoc reasons why that might be, but I don’t take the relatively long period of lack of disruption in our industry compared to other publishing sectors as evidence that things will remain the same, but rather that we’re overdue for change. For those familiar with Black Swan theory, the approach to being ready for the inevitable event is to maintain resilience and flexibility, so that when the time comes, you’re ready for it and can embrace the change and profit from it. Of course, the theory also says that no amount of warnings can make people who are invested in maintaining the status quo ready to embrace change, so…

Thanks for the thoughtful comment William.

By way of a contextual note, this article is a couple of years old now and was primarily written as a response to the arguments of technological determinists, such as Michael Nielsen. My point was that looking at scholarly publishing from a merely technological perspective misses a much more complex and nuanced ecosystem. And that point holds.

I agree with you that established interests are often the last to see Black Swan events. However, the example you use, of people gradually coming to accept their location data being beamed about willy-nilly, is a gradual shift, not a Black Swan (a Black Swan would be when an NSA contractor leaks information about what they are tracking and people suddenly don’t want their data being beamed about willy-nilly). Black Swans are sudden events, not gradual shifts. Gradual shifts might creep up on established organizations, but that is better described by Christensen’s disruption paradigm. Second, complex systems are the most resilient—or “antifragile,” as Taleb would say—to Black Swan events. My article is a description of the resilient features of the scholarly publishing system. It is only “post-hoc” in the sense that it describes a system that already exists (only Vannevar Bush can accurately describe systems that do not yet exist). And that system is extremely resilient. It has developed evolutionarily for over 350 years, with organization and indeed publication lifespans measured frequently in decades and not uncommonly in centuries. And it has withstood numerous Black Swan events thus far. The post-WWII science-industrial complex was one such Black Swan that might have overwhelmed the system. Instead the system grew by orders of magnitude in scale and complexity. The Internet, and later the Web, were other such Black Swans. Again, the system has coped.

I think it is problematic to conflate Christensen’s notion of disruption and Taleb’s notion of Black Swans (and I realize I am guilty of doing the same). They are describing different types of events. Even if we limit ourselves to a discussion of Black Swans (which I do discuss explicitly in a different Scholarly Kitchen article), we must be careful to distinguish organizational resilience from systemic resilience. Taleb also posits that the resilience of an ecosystem is often dependent on the fragility of its constituent parts. His example is restaurants in cities like New York. The city has a thriving restaurant scene despite the fact that any given restaurant’s life is quite fragile, given the vagaries of trends and the demographic shifts in neighborhoods (to say nothing of individual mismanagement). But other restaurants learn quickly from their mistakes and the system as a whole is resilient. I think there are some problems with Taleb’s restaurant example, but the point that organizational and system resilience are different things holds. Indeed, in the above example of the post-WWII science boom, not all organizations were resilient—the German publishing industry collapsed in the wake of the war, and a few large commercial publishers eventually absorbed many smaller houses.

In sum, the aim of my post was to describe how cultural factors have created a systemic resilience to Black Swans and, in many cases, an organizational resilience to a Christensenian disruption model, particularly in the face of technological innovations. This does not mean that the system is resilient to all Black Swans. Nor does it mean that any given organization is resilient to Christensenian disruption. But it does explain why the frequent soundings by technological determinists that the industry is about to be disrupted have rung false.

As a response to technological determinism, I think the post is a fine response which was entirely sensible at the time. My invoking of Taleb was to make the point that lack of disruption so far really isn’t evidence of resilience, but rather more likely that disruption just hasn’t arrived yet and we need to be even more intently on the lookout for the opportunities that the future brings, rather than the other thing some less thoughtful people could have concluded from your post: everything’s fine, we won’t be disrupted, no need for any change.

I’m not sure that I agree that scholarly communication is systemically resilient, though. As the market becomes more dominated by a few large commercial publishers, it becomes ever more fragile, as systemic fragility is almost entirely derived from the organizational fragility of the few large commercial actors. (One way to look at the Mendeley acquisition is that it gives Elsevier a direct relationship with authors, instead of only with librarians, increasing Elsevier’s antifragility.) Google striking a deal with the NIH, NSF, etc to provide public access would be an example of a Black Swan that would be hugely disruptive and could probably happen extremely quickly, at least by the standards of large commercial publishers. I’m sure you’ve seen the same surveys I have on what search engine researchers go to most often. Second place is nowhere close.

The time scale of a gradual change vs. rapid disruption does depend a bit on your perspective and whether it’s Taleb or Christensen who best describes what is happening kinda becomes moot when you suddenly find you’re 2 years too late to the party.

I used to be one of the “barbarians at the gates” types and now find I’m inside the gates, so I won’t claim to see the future any more clearly than you, but I do think the longer we go, the more expectant we should be for opportunities that lie ahead, whether that’s open access, altmetrics, reproducibility, or, most likely, something totally unexpected. If you believe that it’s inevitable, which I do, then the only question is whether you embrace it or fight it, and, to borrow a little of Joe’s cynicism, I guess your stance there depends largely on how close you are to retirement.

Speaking of disruptions, some time ago Derek de Solla Price pointed out that the number of physics papers published during World War II was reduced by two-thirds, but only temporarily (Science Since Babylon, enl. ed., 1975, p. 171ff.). The effects of the war also drove many scientists from Germany, and the primary lingua franca of science, which had emphasized German, changed to English.

Can today’s technical ‘revolutions’ compare?

I cannot figure out what you folks are talking about. Technological revolutions are about new products or services that are so useful that people are willing to change how they work or live in order to use them. They cannot be made to appear, nor can their appearance be delayed, generally speaking. They are never wished into being.

Scholarly publishing has certainly been hugely disrupted, as digital content has replaced print. That much is mostly done, as Joe Esposito has said. That scientific publishing will somehow vanish is the absurdity of revolutionary thinking, the usual absurdity by the way. We are now in the messy in-between, where pontification is pointless. There is much creative work to be done, but I doubt there is much disruption to it. Sorry to be so unromantic.

Let me put it another way. People are confusing reform movements with technological revolutions, but the two phenomena are very different and proceed very differently, although both are disruptive. The confusion arises because the reform movements are piggybacking on the technological revolution. The difference is that in reform movements the goal is that people be made to change. Change is presented as the right thing to do. Technological revolutions do not typically have this feature. One can see the reform element clearly in the discussion above.

Thus the debate over the benefits of the present system of scientific publishing is really about the various reform movements, not the technological revolution. Great new products or services will be adopted without outside pressure or government action.
