[Image: A clock made in Revolutionary France,... Via Wikipedia.]

[Editor’s note: This is the edited version of a presentation that Joe delivered at the NFAIS annual conference in Philadelphia on February 28. Joe wishes to thank Bonnie Lawlor and Jill O’Neill for inviting him to participate in the conference and for the use of the groovy clicker for audience polling. The slides and text of the presentation appear on the NFAIS site.]

I have been asked to talk about some publishing scenarios for the future. We all know the danger in this — a couple years from now, someone will find this presentation and remind me how silly some of these predictions are. On the other hand, you cannot operate a business unless you are thinking about where the world is heading, so this is a risk that comes with the territory, even if it inevitably makes you look foolish. The unfortunate truth is that it is easier to predict what the future will not look like than what it will, which perhaps explains why for every futurist, we have hordes of debunkers.

I will be focusing on four different scenarios. That number could be a lot bigger. We could, for example, get into a discussion of how K-12 textbooks increasingly will be integrated with enterprise learning management systems or how college instructors outside the elite institutions will have a diminishing role in the selection of the texts for their classes; or we could spend time on any number of other things (financial stringency of the mega-journals, anyone?), but I have chosen these four because they represent things that many of the organizations I have been working with have been thinking about. Change the organization and you change the conversation.

So what are those four scenarios?  They are:

  1. Extensionism
  2. The face-down publishing paradigm
  3. Hybridization of online and physical worlds
  4. Library bypass

I will explain what all these terms mean in a minute, but first I want to talk about the act of forecasting and scenario-sketching itself. Let’s start with two of my favorite quotations.

First we have Niels Bohr:

Prediction is very difficult, especially about the future.

(Incidentally, that quotation has been attributed, with slight modifications of phrasing, to many other people, including Yogi Berra, Casey Stengel, and Samuel Goldwyn. If you are confused by this, remember Goldwyn’s dictum: “Anybody who goes to a psychiatrist should have his head examined.”)

The Bohr quotation humbles us about making predictions, which is a good thing. But it also invites us to ask: what else could we predict except the future? Now, since Bohr was a physicist, his notion of the future may be different from yours or mine. Which brings us to another quotation:

The future is already here — it’s just not evenly distributed.

This quotation is by William Gibson, a prominent science fiction writer. Among other things, Gibson coined the term “cyberspace.” If he is right that the future is already here, we should be able to look for it now. That would reduce our risk in making predictions. Instead of a crystal ball we would use a microscope. If this sounds like an outlandish thing to do, consider how it would have worked in the past.

So we go through the wormhole back in time 5 years. That brings us to February 2007. What was going on then?

To begin with, many of the devices we now take for granted did not yet exist. The iPhone was still four months away; the Amazon Kindle wouldn’t be launched until the end of that year; and the first e-reading app, Stanza from Lexcycle, was still 16 months from market. It’s easy to underestimate the importance of Stanza, as it separated the software reading application from the underlying hardware, something we now experience whenever we read a Kindle book on an iPhone, a Nook book on a Samsung tablet, or a Google ebook on an iPad. Stanza is also instructive in that Amazon bought the company and essentially took it off the market, signaling how aggressive some of the players in this market can be.

It’s amazing to think how much has happened in five years. This makes you feel that predictions are hopeless.

On the other hand, let’s consider what we could have seen back then. We were all using BlackBerrys. That means we were getting used to handheld devices, which linked us to email and our desktop computers. We all owned iPods. Or if we didn’t, our kids certainly did. With the iPod you got a primitive content-management system, iTunes, which ran across multiple platforms. Now think about that: multiple platforms. Hmmm. For professional and academic publishers, we were already seeing pressure on library budgets, something that became even more pronounced just a year or two later. And if we were paying attention, we saw that the very arrogant, money-losing Public Library of Science had launched a beta test of PLoS ONE, which changed the nature of peer review, perhaps forever.

If we had looked hard enough, however, we might have been able to come up with some pretty good predictions. For example, if you held the iPod in one hand and a BlackBerry in the other, you might have gotten a picture of what the iPhone came to be. If you used iTunes, you would have had some idea about a new form of personal content management, something that we now see in the app stores that are springing up everywhere and in such STM services as PubGet and Mendeley. We might have reflected that more and more of the far-reaching innovations were taking place in the consumer market and that the companies behind those innovations were very big and aggressive. We should keep that in mind when we consider what Amazon could grow into if it attempts to enter the library market more directly. We would certainly have observed that the institutional market was not growing as it once did, which might have encouraged us to look to diversify our marketing mix. And we might even have had a glimpse of the advent of the megajournals, something that is changing the landscape for many of us today. In other words, the future is already here — it’s just not evenly distributed.

So let’s take a look at a few things that are going on today and see what they might mean in five years.

I use the term “extensionism” to describe a common strategy of established publishers. The idea here is that you already have a business up and running and want to extend some aspect of that business into a new area.

Since we see a lot of extensionism in the world today, it’s highly probable that it is going to be part of the future.  Extensionism has certain properties. For example, you can’t extend a business if you don’t already have a business in the first place. In other words, this strategy is purely the prerogative of legacy publishers; start-ups need not apply.

I regard extensionism as the default strategy: this is what you do if you do nothing else. And it makes perfect sense to be an extensionist. Let’s say you have free cash flow of $1 million a year. Are you going to walk away from that? I don’t think so. Suppose you have free cash flow of $10 million a year. Are you going to ignore it? How about $50 million or $100 million? The fact is, the more cash flow you have, the greater the impulse to hold onto it. For a management team in that position, extensionism is the obvious choice. This is the central thesis of Clayton Christensen’s “The Innovator’s Dilemma,” and it has wide application to the publishing industry today.

Most extensionists begin with a SWOT analysis, which helps them determine what their next move is. They work from strength and try to counter threats. What they don’t do is look for disruptive technology. The reason for this is obvious: with their cash flow, the extensionists are the ones that would be disrupted. And this is the Achilles’ heel of extensionism: the possibility that a start-up could do something disruptive to the marketplace.

Extensionism takes many forms. Let’s say you have a very strong position in the library channel.  You may then decide to pursue a channel domination strategy and bundle journals and other content types into aggregations. We will be seeing more of that in the future. Or you may have content that is viewable on a PC, which you try to make viewable on a mobile phone.  It’s the same content, but now it has been adjusted for the smaller screen. An obvious strategy is to look for fold-in acquisitions. There will continue to be a great deal of this, though it’s very hard to get a bargain.

Some publishers are trying to extend their business by copying PLoS ONE and creating author-pays services. We will be seeing more and more of these. At first glance this seems like an extension of the legacy business. For example, a chemistry publisher may create a version of PLoS ONE for chemistry. But it’s a bit trickier than that because this model, with its reduced level of peer review, challenges the very basis of traditional journals publishing. There may be a fault line in this strategy.

I want to turn now to another scenario, which I call the face-down publishing paradigm.

What is face-down publishing? You probably are familiar with the typology of lean-forward and lean-back media consumption. You lean forward when you use a personal computer. You are engaged with the process; you create content. But at the end of the day, you might go into lean-back mode:  you sit on a sofa and passively watch content. The face-down paradigm is something new. Here you have a mobile phone in your hand and look directly at it, with your face pointing down. This is different from using a phone for a phone call, where you hold the handset at your ear. The face-down paradigm is intimate; you commune with your personal device, even if you are in a public space. All of these paradigms are anchored in human anatomy, which makes them feel natural.

But isn’t this the same thing as simply putting content onto a phone? No, it isn’t. Whereas an extensionist may have moved print content to PCs and then moved it again to mobile phones, with the face-down paradigm, the content has been created from the beginning with the mobile phone in mind.  That little device in your hand is connected to the Internet cloud. That means that you can have regular product updates and other forms of dynamic content.

Let’s push this paradigm a bit further. If the content is dynamic, we know that the business model must be subscription-based. We also know that there is a cost to generate dynamic content, so publishers will seek ways to lower that cost. One way to do this is to have more content be algorithmically generated because computers cost less than authors and editors. For example, as we walk about, the service may generate updates through data-mining and link those updates via GPS. We already have this in the consumer market; how long before we have sophisticated scientific apps that “know” where we are and offer different information as appropriate?
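
To make this concrete, here is a minimal sketch of the location-aware idea. Everything in it is invented for illustration (the update data, the function names, the 25 km radius); it describes no existing service. The point is only that once algorithmically mined updates carry coordinates, surfacing the “right” one for a reader’s GPS position is a small filtering problem.

```python
import math

# Hypothetical sketch: rank pre-mined content updates by proximity to
# the reader's GPS position. All names and data here are invented.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Updates produced upstream by some data-mining pipeline (hard-coded here).
UPDATES = [
    {"title": "Field-station sensor data refreshed", "lat": 39.95, "lon": -75.16},
    {"title": "New geology survey for this region", "lat": 40.44, "lon": -79.99},
]

def nearby_updates(user_lat, user_lon, radius_km=25.0):
    """Return updates within radius_km of the user, nearest first."""
    tagged = [
        (haversine_km(user_lat, user_lon, u["lat"], u["lon"]), u) for u in UPDATES
    ]
    return [u for d, u in sorted(tagged) if d <= radius_km]

if __name__ == "__main__":
    # A reader standing in Philadelphia sees only the local update.
    for u in nearby_updates(39.9526, -75.1652):
        print(u["title"])
```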

As an aside, I should mention that data-mining gives a different meaning to the term “disintermediation.”  We always are hearing that the Internet will disintermediate publishers or that the Internet will disintermediate libraries. But data-mining disintermediates authors, not publishers; the algorithm does the creative work. And as we increasingly move to the “Internet of things,” where machines are connected to the Internet expressly to communicate with other machines, we are disintermediating readers as well. So perhaps this is another scenario: publishing by machines for machines. I wonder who does the peer review and whether the NIH will demand that these new publications be deposited in an open access repository.

Implementing the face-down paradigm is not going to be a piece of cake. You have to work with small screens; you will have to get access to superior IT skills; and you will have to come up with new ways to reach the market.  But perhaps the biggest problem is that you will likely have to deal with a new group of partners, handset manufacturers and wireless phone companies. These are big guys and they control access to the user. So one part of this scenario is an industry increasingly influenced or even dominated by consumer tech companies.

Let’s turn now to a third scenario, the hybridization of virtual and physical venues.

Every day we are seeing more and more linkages between the online world and bricks and mortar.  Most obviously, Apple is now one of the world’s great retailers. Apple is a direct-marketing company that reaches you on the Web, on the phone, and in the stores. All the merchandising plans and all the service are completely integrated. When you consider the kind of sales Apple is racking up, you have to assume that other companies will try to copy this.

But I think that Barnes & Noble is the most potent example of hybridization. B&N has had some very tough times lately, and no one predicted that they would rebound to a 25% market share for e-books. That’s a bigger market share than they have for print books. They did this by leveraging their physical stores. It may have saved them. Now we have rumors that Amazon will be opening a test store in Seattle and that Google, which is now a tech company, a media company, and a hardware company, has something planned for Ireland.

It’s really beside the point whether Amazon and Google open up stores. The point is that someone will.  We have to assume that hybridization is going to emerge over the next few years, and we have to figure out our strategy for dealing with it, even profiting from it.

Although hybridization seems like an inevitable trend, that doesn’t mean it will be easy to monetize. There is no clear path for professional and academic publishers, for example. I don’t expect to see John Wiley or Elsevier open up stores in a suburban mall next to a video game arcade. On the other hand, would we be entirely surprised if Pearson bought Kaplan or Sylvan Learning? Perhaps the way to play this game is to create partnerships with bookstores and libraries. I am particularly intrigued by the conference business. There is a Silicon Valley company that provides coupons for mobile phones that are delivered while someone is making a presentation. If the presentation, for example, makes a reference to a management principle, the coupon everybody receives may be for management training seminars or for books. You can click on the coupon and make a transaction before the presentation is completed.

The fourth scenario I wish to mention is library bypass. By “library bypass” I am referring to a marketing strategy whereby a publisher that has had a legacy business in selling products to libraries begins to seek other avenues to those libraries’ users. This is not the same thing as a pure consumer strategy because the users all have institutional affiliations. The aim of library bypass is not to reach out to the world at large but only to those users whose needs formerly were served by libraries.

The basic question is how dependent you are on sales to academic libraries. The related question is where you think the library market is heading. If you think it is going to grow by 10% a year, you do one thing. If you think it is flat or even declining, you are likely to do something else.

Since the future is already here, we can see that many publishers are placing bets on a declining library market.  I think that’s the prudent thing to do. The evidence for this is that librarians keep telling us that their budgets are shrinking. I sometimes wonder if librarians understand that they are making a strategic mistake:  by talking about their money woes, they reduce their clout with publishers. Librarians tend to argue on moral grounds, publishers on economic grounds. Most of the time, the money wins.

So a library bypass strategy is a natural thing to do. It’s not easy to implement, however. For one thing, libraries are great customers. They buy lots of stuff, they even buy things that no one uses, and they pay their bills. Libraries are also highly efficient purchasing organizations. If you don’t believe this, spend a few months in the consumer market, where you typically sell through channels and have to wrestle with trade receivables. You will come crawling back to the libraries.

The problem with a library bypass strategy is that it almost always involves some form of direct marketing to individuals. That means you need new kinds of information technology and a new group of people to handle end-user interactions. You also need to rethink the products themselves — are they designed with end-user consumption in mind? Another challenge is getting the balance right between the cost of acquiring new customers and the prices you can charge.
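
A toy calculation, with numbers invented purely for illustration, shows how tight that balance can be: if acquiring a customer costs more than the first sale contributes, the customer has to come back several times before the relationship turns profitable.

```python
import math

# Toy illustration of the acquisition-cost/price balance. All numbers
# below are invented; the point is the shape of the arithmetic, not
# the specific values.

def purchases_to_break_even(acquisition_cost, price, margin_rate):
    """How many purchases before a new customer pays back the cost of
    acquiring her? Each sale contributes price * margin_rate."""
    return math.ceil(acquisition_cost / (price * margin_rate))

# Suppose $40 to acquire a customer, $25 per article, 60% margin:
print(purchases_to_break_even(40.0, 25.0, 0.60))  # -> 3 purchases
```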

One of the important implications of a bypass strategy is that you already have an institutional market in place, so you have to balance your consumer offering with offerings that were conceived with other markets in mind. A case in point is the price of single journal articles. The prices publishers charge — $20 or $30 an article in some instances — make perfect sense if you are a publisher, but make no sense at all if you are an individual buying something with your own money. On one hand, you don’t want to cannibalize an established market. On the other hand, you want an offering that will actually appeal to your target customers.

One of the potential benefits of an end-user strategy is that it will provide a level of user data that you have never seen before. How you will use that data is another matter, but it potentially opens up important feedback loops for designing new marketing programs and new products. One of the unfortunate aspects of marketing to libraries is that you mostly are flying in the dark. You get aggregate data, but never data specific to an individual. Of course, to the actual user, that is all that is important.

I said the future is already here, and when it comes to library bypass we see it in such companies as PubGet, now part of CCC; Mendeley; and DeepDyve and its many imitators. These all represent new ways to work directly with end-users and to open up different revenue streams.

One caution about this area: it is highly likely that working here will put you in contact with the major consumer tech companies. What better way to reach end-users than with a Kindle edition? But now you have to work with Amazon’s repellent terms of sale. Or you want to create things for the iPad, only to discover that Apple is an arrogant and inflexible organization. This is something of a sub-scenario, a future for scholarly and professional publishing that is dominated not by publishers but by consumer tech companies, which are not terribly interested in the content you create and market.

I want to sum up here with some overarching trends to watch. The first of these is consumerization, the tendency for all future scenarios to be influenced, if not dominated, by huge consumer technology companies. It’s amazing to consider that a particle physicist, a molecular biologist, and someone lying on a beach with a novel by Stephen King may all be reading from the same device and  purchasing their content from the same venue. Maybe that’s not amazing; maybe that’s just scary.

The second trend is that mobile platforms are becoming primary. Currently that is not the case for most professional and academic publishing, but it is rapidly becoming the case in consumer markets. In fewer than five years, we have gone from a time when people said, “I will never read a book on a screen,” to a situation where 20% of trade books are sold as e-books. That may seem like yesterday’s news to publishers that have been selling digital journals and databases for years, but the new element is access through a mobile device. If you were to start a publishing company today, you would assume that the mobile device is the primary platform, so we are already operating within the face-down publishing paradigm.

So when you check your messages, face-down, on your phone today, ask yourself if the future is already here.

Joseph Esposito

Joe Esposito is a management consultant for the publishing and digital services industries. Joe focuses on organizational strategy and new business development. He is active in both the for-profit and not-for-profit areas.

Discussion

11 Thoughts on "Predicting the Present"

“Let’s push this paradigm a bit further. If the content is dynamic, we know that the business model must be subscription-based.”

Why would that follow? Why wouldn’t I have (for example) dynamically updated content from BMC? For that matter, why is (say) the Google News front page not dynamically updated content without a subscription?

True, but between teaching someone to fish and selling her a fish, I’d rather sell her a subscription to the fish. If you’re charging a fee for content, the subscription model is more compelling than not (cable TV), and dynamism supports the model.

Regarding the “library bypass” scenario:

This scenario already exists: there are few (if any) products available to libraries that aren’t also available to individual library patrons. What keeps patrons dependent on their libraries isn’t a restrictive access model or marketplace structure, but price. As long as an annual subscription to Brain Research costs tens of thousands of dollars, people will depend on libraries for access to it rather than subscribing on their own.

And this kind of goes to your point about how libraries undermine themselves strategically by complaining publicly about journal prices and slim budgets. In this context, marketplace structure matters a lot. One reason that libraries feel free to talk openly about their budget constraints, I think, is that for most of us, our budgets are a matter of public record — there’s no point pretending that we’re in a stronger position than we are because any publisher can very easily find out the actual size of our budgets. But a more important reason is that libraries aren’t in a position to “negotiate” prices in the normal sense; reducing our clout with publishers isn’t something we worry about much because we don’t have the kind of clout that matters when it comes to price negotiation. We can’t threaten to take our business elsewhere (because each journal is offered by a single supplier) nor can we really offer considerations other than cash.

I agree that appealing to morality is not a winning strategy — which is why I usually simply invoke fiscal reality. But the problem with doing so is that, just like the cod fisherman who intellectually grasps the dangers of overfishing but nevertheless is faced with powerful short-term incentives to keep doing it, the publisher isn’t often interested in hearing about the long-term consequences of manifestly unsustainable price increases. This is partly true because of strong short-term incentives to maintain such increases, and partly because publishers disagree that the increases are “manifestly unsustainable.” Just like the cod fisherman, the relentlessly price-hiking publisher considers the past to be the best predictor of the future, and the past has featured an unending stretch of subscription renewals. Price and budget trend lines will inevitably and decisively diverge at some point down the line, but that will be in the future and we’ll worry about it then. This year, another 10% increase will probably work.

Technological revolutions look inevitable in retrospect, suggesting that prediction should be easy. The problem is that there are too many predictions, not too few. The correct prediction is always there among the hoard, but impossible to choose. Going numerical makes this easier to see. What percentage of STM papers will be author-pays in 2022? The possibilities run from zero to 100%, and the predictions probably run from 5% to 80% or so, likely in a bimodal distribution. One prediction is correct and the others are all false (though some are closer than others). There is no way to know which is correct, even though we have it in hand.

I have two questions:
1) If “the algorithm does the creative work,” who owns the copyright? The creator of the algorithm?
2) You note that mobile has caught on in the consumer trade market much more than in the academic market. Why do you think this is so, and do you expect this gap to narrow or widen in the years to come?

The issue of who owns the copyright is a delicious one. You may want to check in on the work of Matthew Sag, who talks about “non-expressive use.” I am not persuaded, but many people are.

As for the growth of mobile, the issue is not cultural, as the pundits would have it, but a hard law of network economics. Standards are set by having the most users, and the numbers are in the consumer market. This is not something to fight, but something to understand. The academy is no longer walled off from the society at large.

The concept of an algorithm is so broad that different cases need different treatment. A spell checker is an algorithm. So is a search engine. Beyond that, much of the speculation about computer-generated content is probably premature, until it arrives in specific form.

An interesting new case is a commercial AI service that generates short newspaper articles summarizing high school baseball games, which are too numerous and too small for a reporter to cover. You feed in the game stats and the algorithm produces articles, no two alike. I have been wondering about adapting this approach to summarize research papers, using semantic analysis rather than hits and runs.
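
For readers curious what such a system looks like in miniature, here is a deliberately crude sketch of template-based story generation from game stats. The real commercial services are far more sophisticated; the templates, field names, and data below are invented for illustration.

```python
import random

# Crude sketch of template-based recap generation from game stats.
# Real services are far more sophisticated; everything here is invented.

TEMPLATES = [
    "{winner} defeated {loser} {w_runs}-{l_runs} on {day}, led by {star}.",
    "Behind {star}, {winner} topped {loser} {w_runs}-{l_runs} on {day}.",
    "{loser} fell {w_runs}-{l_runs} to {winner} on {day}.",
]

def write_recap(stats):
    """Fill a randomly chosen template, so no two recaps read alike."""
    return random.choice(TEMPLATES).format(**stats)

game = {
    "winner": "Central High", "loser": "North Valley",
    "w_runs": 7, "l_runs": 4, "day": "Saturday", "star": "J. Ramirez",
}
print(write_recap(game))
```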

Another case I have played with is an algorithm that analyzes the literature and answers complex questions about research activity. This would be along the lines of IBM’s Jeopardy-winning system, but the answers would be more complex. I think DARPA’s Machine Reading program is, or was, working in this direction.
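
The simplest possible version of that second case is just counting: scan a corpus for a topic and tally papers per year. A real system would need semantic analysis, as noted; the corpus and numbers below are invented.

```python
from collections import Counter

# Toy version of "answering questions about research activity":
# count how often a topic appears in paper titles, per year.
# The corpus here is invented for illustration.

CORPUS = [
    (2009, "Graphene transistors at room temperature"),
    (2010, "Scaling graphene synthesis by CVD"),
    (2011, "Graphene-based biosensors"),
    (2011, "Strain engineering in graphene"),
    (2011, "Perovskite thin films"),
]

def activity_by_year(topic):
    """Papers per year whose title mentions the topic."""
    return Counter(year for year, title in CORPUS
                   if topic.lower() in title.lower())

# "Is graphene research growing?" -> {2009: 1, 2010: 1, 2011: 2}
print(dict(sorted(activity_by_year("graphene").items())))
```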

In these cases it seems like whoever owns the output gets the copyright, but I know nothing about copyright law. And of course this is not predictable.

Joe has brought a nice article here, but in the end I wonder how this helps us as an industry. The trends are very clear: cloud supply, mobile delivery, consumers driving the agenda, and major aggregation and distribution channels determining what content gets in front of users prominently. As publishers of content we can no longer just be masters of sourcing and enrichment; we have to be more involved with our users. Libraries? Well, libraries play a major role in access (over time) to key resources, and that industry is very big indeed. They need to reinvent themselves in a digital world, as we all do. This takes leadership and vision; I have written about this before with Carl Grant. Some are really doing very good stuff; there are just too few of them today.
