Library shelves
Image by Yinghai

(*with apologies to Curtis Mayfield)

Editor’s Note: Richard Fisher worked at Cambridge University Press for over thirty years, most recently as Managing Director of Academic Publishing: as an acquisitions editor, he worked largely in history and politics, publishing several hundred monographs by authors including John Pocock and Quentin Skinner. Richard now works in a non-executive capacity with various public and private organizations, including Edinburgh University Press and Yale University Press, and writes a monthly column on academic publishing for the British Independent Publishers Guild. He is also an Associate Editor of the Oxford Dictionary of National Biography, with special responsibility for sportsmen and sportswomen.

This is the second of a two-part post (part one is here), looking at historic trends, current challenges, and future possibilities for monograph publishing in the humanities and social sciences. In particular, Richard aims to tackle a number of enduring misunderstandings among academic researchers and the failure of publishers to address them effectively. While many of the perspectives reflect the British experience, the posting is intended to be (at least) transatlantic in appeal.

  • Introduction: Monographic Intermediaries

I have to confess that I drafted an initial version of this posting and then tore it up. Some friendly external comments, and some of the interactions I experienced during Academic Book Week here in the UK, convinced me that I needed to think afresh and a lot more clearly about the intermediaries that currently glue academic authors, publishers, and readers together. What is undoubtedly apparent is that whilst the digital transition has occasioned significant consolidation within some parts of this under-articulated sector at the ‘confused middle’ of scholarly communication, it has also produced complexity and fragmentation on a heroic scale in others. The worlds of both academic bookselling and distribution, and library supply and provision, have been transformed – rather more so, I would argue, than the core publisher proposition.

South Carolina can be a very congenial place in the fall, and one of my distinct professional regrets is that I have never been able to attend the annual Charleston Conference on Issues in Book and Serial Acquisition, not least as the existence and spectacular growth of this event (from an initial 20 participants in 1980 to over 1,600 last year) tells us a great deal about the changing publishing ecology of scholarly communication. Charleston describes itself as ‘an informal annual gathering of librarians, publishers, electronic resource managers, consultants, and vendors of library materials … Presidents of companies discuss and debate with library directors, acquisitions librarians, reference librarians, serials librarians, collection development librarians, and many, many others.’ Joe Esposito has already reported back to the Scholarly Kitchen on one specific Charleston session, and a cursory glance at the program for this year suggests an astonishing range of function, job title, institutional orientation and indeed profession amongst the speakers alone.

In the first part of this posting I stated that publisher stasis was actually the dominant aspect of the monographic landscape, and yet much monographic commentary (not least in the open access domain) centered around publisher activity and publisher functions. Far less attention had been paid to the growth (along with considerable internal consolidation) in ancillary intermediary functions, and yet such agencies collectively received anything from 25% to 50% of the sales receipts of each title published. Their spread was a response to and a consequence of ever-increasing scale (of outputs), of the globalization of elite scholarship, and of the possibilities created by digital discovery, but (in much the same way as learned societies had tended historically to be the elephant in the room in discussions of journal finances) the role of these intermediaries in the monographic publication process, and indeed the nature of the value they added, had not been subject to scrutiny in anything like the same way as the initial publisher proposition – not least because their functions, with the exception of those performed by a very small and declining number of traditional bricks-and-mortar booksellers, were often invisible to members of the scholarly community beyond those working in university libraries or in research management.

  • Monographic Intermediaries and the ‘Confused Middle’: Bookshops

To state that Amazon is the most important academic bookseller in the Anglophone world is both to state the bleedin’ obvious and to recognize how the locus of academic book-buying (at an individual, and to some extent institutional, level) has changed utterly over the past twenty years. Throughout the English-speaking world, the great majority of traditional campus bookstores have shifted inexorably over the past two decades from being centers of scholarly, indeed monographic, range to almost entirely student-focused ‘textbook, college supplies and café’ propositions, with extreme patterns of seasonal demand leading (logically) to a temporary pop-up presence in many institutions. The retreat of both Barnes & Noble and Waterstones from any kind of extended academic proposition has been equally inexorable.

The appeal of the ‘university press crossover title’ has been the subject of some interesting public debate in the UK recently (notably in The Guardian newspaper). It’s important to emphasize that for all but a tiny handful of UK shops (notably Foyles in London and the Blackwell’s flagship store on Broad Street, Oxford), it’s these sorts of books, elegantly published by the likes of Princeton, Yale and Chicago Presses, and equivalent titles from Penguin, Profile and elsewhere, that form the bulk of their academic or quasi-academic stock beyond textbooks or classic texts like King Lear or The Prince. The recent, much-publicized listing of ‘the twenty most important academic books of all time’ provides a telling insight into the UK academic bookselling view of this world, and readers of the Scholarly Kitchen will note just how many of the top twenty are Penguin titles, whether by original commission or subsequent incorporation. By current academic standards of specialism there is perhaps one single ‘monograph’ in the list, Edward Thompson’s The Making of the English Working Class, a classic Penguin nowadays but a work initially contracted by Gollancz for their Men and Ideas series (and delivered by its author at about four times the contracted length…).

Esposito and other colleagues have previously presented data on the Scholarly Kitchen from their own researches (for example into the sales patterns of the University of Chicago Press) showing that for some major presses, non-institutional sales of monographic material have been anything up to 50% (sometimes more) of the totality, and that these non-institutional acquisitions are even more print-centric than library monograph purchasing. Amazon has clearly been the determinant entity in monographic consumption of this kind at the individual level, with wholesalers and distribution agencies like Ingram/LSI also making a very powerful contribution, helping (crucially) to advance towards the nirvana of simultaneous title global release in all formats (print or electronic): for those of us working largely within the transatlantic nexus, it’s all too easy to underestimate the considerable obstacles (not least fiduciary and regulatory) that still prevent the smooth global circulation of academic texts, especially in the Southern Hemisphere.

There is still a long way to go, even in relatively mature research environments like (say) the Brazilian, and of course the great majority of humanities and social sciences imprints, if not the very largest, remain utterly dependent in many parts of the world upon a network of local or regional sales agents, wholesalers, and distributors (of which the UPS operation in Japan was one of the best-known), not least to ferret their ways through precisely these internal tangles (at a cost to both publisher and end-user concomitant with the complexity). The fact that Amazon (who also sell books in very substantial quantities to university libraries around the world) have set a very high standard of customer service in terms of speed and accuracy of supply has also, of course, had huge implications for scholarly expectations of customer service within the institutional environment.

  • Monographic Intermediaries and the ‘Confused Middle’: Libraries

In its own way, nonetheless, the transmission of printed monographic material for individual consumption remains relatively straightforward. Infinitely messier, and likely to remain so for some time, seems to me the world of monographic library supply, in its broadest sense, both in print and especially online. Recent consolidation amongst major library suppliers, exemplified by the EBSCO/YBP acquisition, has at one level reduced library choice yet further, following a wave of previous such consolidations in recent years: it suggests also a pretty considerable, although not necessarily insuperable, market-scale challenge for the library suppliers themselves. The terrible complexity of the current situation (from the library perspective) is in how these various wholesale and library supply propositions butt up against and interlock with a whole series of other propositions, involving (inter alia) major large-scale aggregation services like ProQuest, e-book consolidation on the NetLibrary model, single- or multi-publisher e-book platforms, and acquisition, cataloging and data management functions, with a whole series of new market entrants undertaking some tasks traditionally performed within institutional libraries and now outsourced, and some tasks wholly new to the digital information age. In both cases, these tasks are (as in almost every aspect of our current monographic challenge) a consequence of scale, and the sheer expansion in research outputs.

Many years ago the History Faculty in Cambridge organized a Publishing Panel. I was asked to focus in my brief talk upon historical monographs in particular, and I managed to annoy several of the younger, more ambitious members of the audience by stating that (rightly or wrongly) the identity of individual monograph authors was much less important to global library title selection than publisher brand, bibliographic coding and data feeds and, to a lesser extent, author affiliation and territorial background. I said that Cambridge had built up a pretty good reputation in the world of library supply (this was about a dozen years ago) for the quality of its ‘invisibles’, and in particular its metadata proposition, which was one reason, in addition to the robustness of Cambridge acceptance procedures, why libraries worldwide still bought our monographic outputs in reasonable numbers. We at CUP, like a number of other leading presses, actively helped through a number of intermediaries to make the job of the research acquisitions librarian (and hence researchers) more ‘doable’, in ways that, to be frank, several faculty members of the audience found incomprehensible.

This message, which of course links with the oft-cited credentialist and quality-proxy elements of publisher branding, was not, as mentioned above, particularly well received, and the very concept of the ‘approval plan’, and what that said about monographic author identities (or the lack thereof), was clearly anathema to some. This hostile reaction was certainly a sharp lesson in the failure of the academic publishing industry (and not just primary publishers) to convey the complexity of the seemingly simple author–reader relationship, especially in an institutional context, and I don’t detect any evidence that in the intervening period things have improved. Approval plans (always much more a feature of the North American library landscape than anywhere else) have declined in number in the present century, reflecting institutional pressures upon library acquisitions of a different kind, but monographic publication remains a fairly brutal, systematic, data-driven undertaking – a brutality that arguably grows as distance from the privileged world of (say) the elite transatlantic academy increases.

In this systematic context it’s a commonplace, of course, that institutional e-purchasing has made the accumulation of library usage data much easier, and that institutions report much greater access to and usage of e-monographic material than of the print equivalents: they also report, famously, rather greater user frustration with the ‘access experience’ (on which more below). Marketing expenditure by the large commercial players in a library context is increasingly concerned in consequence with post-sales usage, although at the same time concrete evidence globally of the much-discussed ‘monographic e-tipping point’, triggered by generational shift, remains rather hard to come by. Likewise, there is anecdotal evidence of some partial slowing-down in the spread of Patron- or Demand-Driven Acquisition models, with long-term sustainability at the core of publisher unease, although whether the plug can now be put back in that particular bath remains to be seen.

What this reinforces once again (as Esposito commented on the Kitchen earlier this spring) is that volume seems to be becoming as central to the monographic sales proposition as to the serials, certainly among commercial publishers, and it’s surely the quest for scale that explains (for example) the recent expenditure by Taylor and Francis of over $30 million on the 14,000 monographic properties formerly belonging to Ashgate Publishers. Monographic aggregation, and the possibility of the Big E-Monographic Deal, is driving this, as it is driving aspects of the multi-publisher platform propositions now trading, although in fairness the latter are as much about ensuring e-visibility and accessibility for under-capitalized small imprints, and ensuring that they can compete with the Big Guys, as about a quest for bulk for its own sake. The Big E-Monographic Deal will bring with it, from a library perspective, the same major administrative and access advantages, and the same major financial and acquisition-choice disadvantages, as its serials parallel. That certain key library imperatives are in active and often conflictual tension is something that we all need to recognize.

Likewise, these multi-publisher platforms (including JSTOR, Oxford Scholarship Online, Cambridge Books Online) share the vices of their virtues, including sometimes markedly divergent reader-access practices and experiences. Librarians compare and contrast the heterodox proposition that has arisen in this sector with the relative ease of use of Big Science Serials, and sometimes wish that in darkened rooms late at night two decades ago a publisher cartel had agreed upon a single platform standard with a common user-experience that would apply across the monographic spectrum, with appropriate hyper-linking and other facilities. The expectation some have articulated that in time there would be an inevitable consolidation among these competing platforms, for precisely these reasons, does not (as I write) seem to be happening, and indeed if anything the complexity of the situation grows. This also, of course, makes it markedly more difficult for new publisher entrants to gain significant library traction on their own, whether publishing on traditional or Open Access models and whatever their pricing advantages.

In the first part of this posting I referenced the fascinating three-part blog posting by Rupert Gatti of Open Book Publishers. One of Dr Gatti’s major claims is that open access online publication on the Open Book model is sustainable, going forward, in part because the model is anything up to ten times cheaper in sales and distribution terms than current legacy publishing models. Dr Gatti admits, however, that the traction of the Open Book model amongst university acquisition librarians is, as yet, relatively modest, and an otherwise sympathetic and supportive librarian would have to acknowledge that it may be precisely because of the reluctance and/or financial inability of open access monographic start-ups to invest in these sales and library distribution mechanisms for monographs that their library penetration has been limited. This is the other side of the oft-recognized ‘Elsevier Paradox’ within research universities: the Elsevier proposition (and its associated multi-million-dollar investments in research ancillaries) facilitates the work of university research actors more than that of any other publisher, commercial or non-profit, and is why, in the end, universities buy more material from Elsevier than from anybody else, and yet that very same proposition represents at the same time a massive and potentially malign financial distortion within those universities.

It’s worth pausing on this, because the actual reasons why Elsevier, and other major serial publishers (especially in the sciences), are very, or indeed excessively, profitable, and their monographic counterparts, whether commercial or non-profit, very considerably less so, are surprisingly little considered (as opposed to the morality of the situation, endlessly debated on the academic library blogosphere). The determinant variation between the two operations lies not (as many faculty members and external commentators still seem to assume) in the simple costs of production, but in marketing, sales and distribution costs, in a context where one is largely (although not exclusively) an online proposition selling repeat business for a (relatively) select number of titles through a finite number of intermediaries (if any), and the other is largely (although not exclusively) a print proposition, selling new or new-to-the-customer material in huge and ever-expanding diversity of both titles and supply mechanisms.

Scientific serials famously made the transition from print to online global dissemination rapidly and in a manner welcomed by almost all users, with (whether the scholarly community likes to recognize it or not) Science Direct as a massive catalyst for collective change and improvement. Monographs in the arts and social sciences obstinately have not and (currently) are not, and the important materiality of the printed codex in the domain of (especially) humanities research is something that (say) Geoffrey Crossick and a leading open access advocate like Martin Eve have both recognized. A better online reading experience remains the single greatest requirement for any significant extension in online monographic consumption. Without that, it seems to me inevitable that all major monograph publishers will be confronting an extended dual cost base, even as short-run printing spreads and inventories decline, for many, many years (decades?) to come.

  • Text, Data and Policy: the Peculiarities of the English

The other fundamental aspect of so much intermediary activity within the scholarly communication nexus (as reflected at Charleston) is of course that plethora of functions related to research analytics, research management, metrics and research data, and this gets to the heart of a further (and maybe particularly British) monographic challenge, going forward. At a time when so much investment in and expenditure upon research materials is devoted to ancillary tools and modes of data manipulation, and where open access and other supply-side models posit a very different, author-driven approach to the valuation of content, what happens to the scholarly model that presupposes the primary value in the research object itself, and a research object that is constructed not from manipulable or reusable data, but from textual archives over which the individual researcher may have absolutely no control whatsoever – indeed, in some instances, be a supplicant to (the Estate of TS Eliot, to take one famously combative example)? The response of numerous British learned societies in the arts and social sciences to the recent OAPEN/JISC guide to Open Access Monograph Publishing made precisely this point (inter alia). For British scholars, subject to a much more centralized regime of research management and assessment than will ever emerge in the much more variegated public-private university sector of North America, these anxieties are no longer hypothetical, but very real, funder-driven imperatives of concern.

This current posting is not the place to enter the arena of major contest that is copyright licensing, although the anxieties of numerous researchers in the arts and social sciences about pure CC BY itself (not least the much-debated ‘enticement to plagiarism’) have been well chronicled. But it is surely appropriate to make the case for the legitimacy of text-based intellectual enquiry (the core of much of the humanities, after all), which currently runs the risk of being downgraded precisely for its lack of ‘data opportunity’, and of being exposed to inappropriate publication models and protocols designed, fundamentally, for altogether different, scientistic modes of enquiry. In the specific UK context, HEFCE (the Higher Education Funding Council, theoretically at arm’s length from the government), the aforementioned REF (Research Excellence Framework, successor to the Research Assessment Exercise), and RCUK (the umbrella body for British research councils, subject to more direct governmental control) have all been determinant in the creation of a set of science-dominated policy assumptions in which it is hard to argue that the funding, organization and dissemination of research in the arts and social sciences is much more than an afterthought.

It’s also perhaps worth noting for non-British readers of the Scholarly Kitchen the symbolically very significant fact that British universities do not, in governmental terms, sit within the ‘Education’ ministerial portfolio, but are an aspect of the Department for Business, Innovation and Skills. As of 2020, all article-based research submitted for the British REF must be available in an open access form: monographs are not included at this stage, although it seems clear that the overall thrust of policy is that in due course they will be. HEFCE commissioned the Crossick Report in part to help establish the context, and potential ground rules, for such a transition, and the fact that Geoffrey Crossick notably was not asked to provide a set of uniform policy recommendations, and insisted that any transition had to work ‘with the grain’ of established scholarly practice, has clearly been the source of some discomfort to the commissioning body. Incidentally, whether HEFCE itself is long for this world is (as of mid-November 2015) by no means certain either.

  • Monographic Continuities

This extended posting has deliberately stressed some of the continuities in monographic publication modes, as a counter to the perhaps excessive emphasis on ‘disruption’ prevalent in much such commentary. At its most fundamental, the core author proposition around monograph publication remains, for a large number of established researchers in the humanities, broadly defined, remarkably unchanged: I write a book or chunks of a book on Economic and Social Change in the West of England, 1680-1750 (the title of my own, strangely uncompleted doctoral project), I approach a press I know to have published in that general area before, the relevant editor takes a pair of broadly supportive external reports to which I respond, after various bits of internal scrutiny the whole is put before the press publication committee who agree to offer a contract, and away we go. That experience remains, I would suggest, considerably more common than some of the more excitable current commentary would suggest.

As I explored in the first part of this posting, the sheer volume of research now being undertaken has clearly placed this legacy model under very significant strain but it is not, yet, broken. The dangers, as Geoffrey Crossick made clear in his Report, are in what lies ahead (and in a British context it is just worth stating that the likely outcome for research and library funding for the arts and social sciences of the upcoming Public Spending Round later this November will make all previous exposure to ‘fiscal pressure’ seem like an August Bank Holiday lark (to echo the most famous British university librarian of the previous century)). I am also painfully aware that my pragmatic realism is your privileged complacency, and that for colleagues working in the Digital Humanities, or other interdisciplinary sectors, the current situation is not at all satisfactory – but then it famously wasn’t satisfactory for interdisciplinarity in the old, ‘solid’ print context either. Nonetheless, it is important that the mainstream majoritarian experience is presented as that, with all its acknowledged flaws and drawbacks. The Crossick Report asked whether it was substantially more difficult for a high-quality monograph to find a publisher than had been the case in the past, and whether it was substantially more difficult for scholars to get hold of a published monograph that they wanted to read. In the British context, Professor Crossick found it hard to answer ‘yes’ to either proposition, a conclusion from which (as will be clear) I do not dissent.

I have mentioned before that the positive ‘research and publication practice impact’ of the myriad of projects that have investigated monographic futures over the past two decades has been nugatory. Actual concrete publication or distribution experiments (like Knowledge Unlatched, Open Book Publishers, the revived (in purely open access form) UCL Press, or the new California Luminos project) have proved consistently more fruitful. The current British Arts and Humanities Research Council project on the Academic Book of the Future, in which I have a small advisory role, was (important to note) avowedly not set up as an open access project per se, although there are obvious points of crossover. The relatively modest scale of this two-year project happily means that its findings will never pretend to be the last or decisive word, although I suspect that one concrete outcome will be a renewed focus on untangling the complex and expensive mechanisms by which monographic research is distributed from publishers to readers around the world, and which, if collectively reduced in both complexity and expense by (say) 10%, would result in a very considerable increase in the happiness of the scholarly community as a whole. It’s there, in the confused middle, that I would suggest our attention might profitably be focused in the years to come, whether we work as publishers, librarians, or as anybody else claiming to add value to the process of scholarly communication, a process that yet, in a Galilean way, still moves.

Discussion

7 Thoughts on "Guest Post: Richard Fisher on The Monograph: Keep On Keepin’ On*, Part Two"

Richard, your focus on the “confused middle” is very welcome. One clear challenge for open access monograph publishers, that we are encountering at Michigan for the OA portion of our list, is how to work effectively with partners in the information supply chain whose current business models rely on taking a portion of a “retail price.” At the moment the availability of print ancillaries for sale makes OA versions somewhat discoverable (even if the print records don’t mention them). But what happens to complex online projects that are impossible to represent in print and are only distributed OA? The crucial role of infomediaries and how new business models (e.g., “shelf-ready” fees for OA ebooks) can support them in an OA environment certainly deserves further discussion and your essay provides helpful framing.

Interesting article. I am one of those Charleston conference regulars and have attended for over 30 years. I was one of the speakers this year in a session on changes in book distribution. My research shows that academic libraries are losing their dominant role as the primary buyers of university press monographs. In the US, Amazon now represents 40% of sales at major university presses. Another finding is that publishers are in control when selling e-books: most STM publishers sell 70% or more of their e-books direct to the library or consortium. E-book packages are popular with libraries. YBP and ProQuest sell many single e-book titles but between them share only about 30% of the market. Libraries in the US continue to use the majority of their acquisitions funds for e-journals and databases. Book funds continue to erode.

It is not necessarily a good thing that Amazon has come to represent such an important channel for sales given that Amazon has a well-deserved reputation for acting like an 800-lb. gorilla in imposing its will especially on smaller players like university presses. When I was director of Penn State U.P., Amazon threatened to de-list all of our titles unless we agreed to use its POD subsidiary BookSurge (now called CreateSpace) exclusively.

I agree that it is important to focus on the role of intermediaries, so I repeat the concern I raised about the first posting in this series, viz., that it ignores the #1 problem that academic publishers now face in deciding whether to place their monographs in e-book aggregations, based on the fear of losing sales of paperback editions, which constitute 40% of overall revenues at some presses (as they did at Penn State). Because sales of monographs in paperback editions are so very difficult to predict, it is all too easy to make mistakes that can prove to be financially disastrous. I often use the example of a monograph written about a single Latin American country, which was a revised dissertation, selling over 30,000 copies in paperback. Who would have predicted that level of sale for such a narrowly focused work? Yet a decision today to put that kind of book into an e-book aggregation could potentially sacrifice hundreds of thousands of dollars. There is no easy answer to this dilemma.
