Capture of the Pirate, Blackbeard, 1718 depicting the battle between Blackbeard the Pirate and Lieutenant Maynard in Ocracoke Bay (Photo credit: Wikipedia)

When publishers meet, it’s not uncommon for the conversation to get around to piracy and copyright infringement. These are big items for publishers: after all, you invest money in bringing intellectual property to market and then want to be confident that you can reap a return on that investment. How to deal with piracy was the topic of a recent ALPSP conference, in which I participated via webinar. I want to recap my thinking on this topic here, and I have pasted in the slides from that presentation below.

I confess to having been startled when I learned the name of the ALPSP session:  “Fraud and Piracy.” Before I slowed down and read the title a second time, it seemed to be asserting something of a Hobson’s Choice:  if you can only choose between fraud and piracy, which side are you on? This in fact was not a Hobson’s Choice, but there is more than a note of defensiveness in a session so named, and I would like to suggest that we have to get beyond that. The problem with being put on defense is that it can interfere with innovation, and that compromises the future.

Before I dive into this I want to state my own view of copyright, lest anyone believe that my appeal for a different strategy can in any way be equated with an outlook that “information wants to be free.” I am a firm believer in copyright, which I think is one of our society’s great inventions. It makes perfect sense to me that if someone invests in the creation of something, there will be a strong desire to protect and benefit from that investment. If anything, we need stronger copyright laws, not weaker ones: the benefit should always flow back to those who put time and capital at risk, that is, authors and publishers.

But there is a problem with this outlook, which we should not forget, and that is that it is a rearguard strategy. Whatever I may believe about the virtues of copyright is of little value if my organization has to spend more and more time in enforcement. While I don’t want to let the crooks off the hook, I don’t want to spend my time chasing them either. A pragmatist can ask a different set of questions about copyright and emerge as a practicing agnostic.

The problem with piracy begins in the medium itself, which, as McLuhan instructed us many years ago–and it’s still good to be reminded of this today–is the message. When we work with print we design products that, naturally, work in print. Print has certain properties, among which is that of the fixed text. We are so familiar with print that we tend to take for granted all the externalities that grow up around it. For example, because print gives rise to the fixed text, it in turn gives rise to the ideas of the definitive text and the version of record. It is astonishing how much effort has gone into establishing what are authoritative texts–astonishing because we take for granted that this property is a desirable one. Things could have gone in a different direction; we could, for example, have developed admiration for the idea of the provisional text. But, no, we want that definitive thing, we want to get it right. Now, no one is ever going to argue for getting it wrong, but the urge to get it right grows out of a medium that insists upon one way of representing something. When the fixity of the text disappears, it becomes more tenable to entertain ideas that are themselves not fixed.

Unlike print, the Internet is a digital network. Because it is digital, it permits easy copying. Because it is a network, it enables easy sharing. Unlike print, it lends itself to texts that are dynamic in nature, texts that are more like conversations than a version of record. It has other properties as well. For example, it makes collaboration much easier than in the era of mailing drafts around. A colleague and I draft our reports using Google Docs, which saves us an enormous amount of time; something like this was not even available a few years ago. I don’t mean to ignore the negative aspects of the Internet (the misinformation, the disinformation, the bullying); I am not arguing that the properties of the Internet are better than those of print, only that they are different (is radio better than a movie?). In any event, the Internet continues to evolve: the iPhone and Kindle were both launched as recently as 2007, but they have ushered in the era of mobile computing and digital reading. Who wants to venture what the landscape will look like seven years from now?

When you think about piracy in terms of the properties of media, what becomes clear is that Internet piracy is the inevitable outcome when you take content created for one medium, that of the fixed text, and insert it into another medium, that of the dynamic Internet.  There is no meaningful solution to this problem that does not change the nature of the content itself. The way to prevent piracy, in other words, is to innovate around it:  create different products and services that are inherently not copyable and hence not subject to piracy.

Broadly speaking, any publisher has two options. The first of these is the rearguard defense. This is the appropriate (or, I should say, the inevitable) strategy for any publisher that has economically meaningful assets to protect: sales, profits, cash flow. No one ever walks away from free cash flow; never. It’s amusing to think what the founders of PLOS would do if they suddenly found themselves at the head of an Elsevier or a John Wiley, with all the responsibilities for those assets (not to mention shareholders, employees, authors, customers, etc.). They might be horrified by the responsibility, but they could end up doing exactly what the managements of those firms are doing now, assuming they had the talent.

A rearguard strategy looks at the Internet and determines that it is necessary to suppress some of its properties. For example, it tries to take some of the conversation-like quality out of Internet publishing. It may do this through the use of digital rights management (DRM) or by insisting on restrictive licenses (e.g., a prohibition on interlibrary loan).  The sleepy legal department gets a higher profile as copyright enforcement begins to play a larger role, and the public affairs department supports trade associations that pursue anti-piracy agendas. And this, all of it, makes perfect sense. You have assets to protect and you protect them. The problem, though, is that it is a rearguard strategy and by definition looks backward.

While the benefits of a rearguard strategy (the protection of cash flow) are clear, at least when it works, there are costs as well, which may be hidden. One cost is the constant vigilance that copyright protection requires. A staff member or several get assigned to police the Internet; vendors are lined up who promise to find every instance of unauthorized use; resources that might have gone into creating compelling products now go into setting up a line of defense around products, making them less compelling. The biggest cost, however, is in the management time that anti-piracy activity demands. No company enters into litigation lightly; every decision of this kind percolates up to the organization’s leader. So now, instead of having its CEO negotiating an arrangement to distribute products in Asia, working on a new category of products, or opening up a sales channel, an organization takes its top people and has them look backward. I repeat: no one ever walked away from free cash flow, but there is a cost to this in management time.

Meanwhile, we continue to create and publish the very same things that were being infringed in the first place.

A vanguard strategy, on the other hand, starts from a different place.  Rather than focus on creating and defending the fixed text, a vanguard publisher seeks to create content that is like the Internet itself, content that is dynamic and conversation-like. A vanguard publisher seeks to learn from and adapt consumer media without sinking to the level of mere chatter. The goal is dynamic texts, real-time data feeds, and a network dependent on multiple nodes, whether of people or data sources. We know how to pirate a fixed text, but how do you pirate a network?

Part of the reason to pursue a vanguard strategy is that it speaks to the reasons people get into the publishing business in the first place. You would be hard-pressed to find someone who got into this business because they like spending their time with lawyers; no one wants to put a higher priority on stopping unauthorized use than on stimulating authorized use; no one believes that publishing has been the same since the invention of the written word. A vanguard strategy speaks to the personal ambitions of the staff, whereas a rearguard strategy evolves into a culture of beleaguerment.

Easier said than done, you might say, and I think that is true: a vanguard strategy is very difficult to implement. Everything has to be invented. The focus shifts from business and marketing to editorial. All that work that has been done in, say, reengineering workflow is beside the point:  workflow optimization speaks to the need to lower the costs of an organization, but a vanguard strategy is about the top line; it’s about growth. Of course, not all players will be able to participate in this game. Some very good people and organizations that have flourished with the rearguard strategy will be flummoxed by vanguard publishing, which is often hard to understand in its early stages and may defy attempts to produce a meaningful analysis of return on investment.

So our vanguard publisher, always seeking to be new and unprecedented, looks for dynamic data and evolving platforms. Instead of print we have mobile strategies, bypassing the PC-centric programs of the leading STM publishers today. A vanguard publisher may seek to place sensors, whether hardware or software, on mobile phones to drive data collection. In such a model the person who owns the phone is really playing the role of a host. The collaboration on data-gathering can lead to aggregating and analyzing data, which is then distributed in real-time feeds to subscribers.

When I first drafted this presentation, I realized that I was describing an existing service: Waze. Waze is a mapping service; you use it to get driving directions. But unlike the mapping services that preceded it, Waze continuously reviews its recommended routes in light of data that is being constantly updated. Users install an app on their phones and then report on such things as traffic, potholes, and accidents. How can you pirate this? But Waze is a publication. It’s a new kind of publication, a real-time, piracy-resistant publication. So we should not be surprised to learn that Waze was acquired by Google. Whatever happened to Rand McNally and its once-ubiquitous print road atlas?

When I have presented the idea of dynamic services to publishers, many of them have said that doing something like Waze is just too hard. They don’t have the software talent, they don’t know how to market something that won’t stay put. Putting aside the stubborn fact that the marketplace owes no publisher a living, I am not so sure that publishers do not have some of the necessary capabilities and assets within their reach. Publishers know how to build reference works, but how about stimulating the creation of a database that could be used by all members of a clearly defined scientific community? Why not include real-time adjustments to this database? Publishers work with a huge number of biologists and chemists around the world, but they traditionally engage them as individuals. Would it not be possible to work with them collectively, creating subscription-based memberships to essential (dynamic) information resources?

Different publishers will address the matter of piracy in different ways. An established publisher, with assets in hand and revenue coming in through the door, will inevitably adopt a rearguard strategy. A start-up, on the other hand, is likely to think in terms of the vanguard, if for no other reason than that the established publishers have already staked out their territory. But an established publisher cannot stop with a rearguard strategy; over time that strategy has diminishing value. So a vanguard strategy is essential for old and new companies alike.

What we in publishing have to come to terms with is that piracy is not only about bad guys and weak tools for enforcement. It is in part about those things (and let’s be quick to deride anyone who wants to romanticize people who infringe intellectual property), but it is also about a failure of editorial imagination. While we wait for our society to turn out copyright-compliant, law-abiding citizens, let’s focus on what we have under our own control, the editorial nature of our own publications.

Joseph Esposito


Joe Esposito is a management consultant for the publishing and digital services industries. Joe focuses on organizational strategy and new business development. He is active in both the for-profit and not-for-profit areas.


21 Thoughts on "Rearguard and Vanguard"

“Part of the reason to pursue a vanguard strategy is that it speaks to the reasons people get into the publishing business in the first place. You would be hard-pressed to find someone who got into this business because they like spending their time with lawyers; no one wants to put a higher priority on stopping unauthorized use than on stimulating authorized use.”

*clap clap clap*

I could not agree more.

But by the same reasoning, stopping unauthorized use is not a strong reason for pursuing a vanguard strategy. The vanguard strategy is an end in itself, requiring a huge, dedicated effort. Also, we may expect dynamic piracy to grow with it, so if the goal is to defeat piracy, it does not work; it only defeats fixed-text piracy. In fact, providing information that constantly changes probably raises a host of new legal issues. Revolutions are like that.

“It’s amusing to think what the founders of PLOS would do if they suddenly found themselves at the head of an Elsevier or John Wiley, with all the responsibilities for those assets (not to mention shareholders, employees, authors, customers, etc.). They might be horrified by the responsibility, but they could end up doing exactly what the managements of those firms are doing now, assuming they had the talent.”

Joe, can’t you see it? The founders of PLOS have already solved the copyright problem, and it is working out pretty well for them financially.

APC-funded OA might create its own set of problems, but it solves the copyright problem.

I approved this comment, but it is clear that you did not read the post.

David S, APC funded articles are still under copyright, which can or must be protected, so I do not understand your claim. Even CC-BY requires attribution so is subject to piracy.

It’s true in the sense that copyright no longer needs to be relied upon for purposes of protecting a market and a business strategy. In an OA world copyright remains valuable mostly for authors (attribution, etc.), not for publishers.

David, perhaps I did not word what I said as clearly as possible.

Publishing an article OA essentially removes the economic concerns about piracy for the copyright holder. The cost of publishing and any profit are provided by other means so it is just not an issue. As a copyright holder you are not going to lose revenue because someone is pirating your intellectual property.

I know it has been discussed on this blog ad nauseam, but in my view the consequences of disreputable journals or others stealing attribution or violating CC licenses are far overblown, as is the ability of a publisher to protect an author from these consequences. You can disagree, but it is not worth rehashing this issue again.

I see the major purpose of a CC license as letting honest people know what you are allowing them to do with your property, without their having to get specific permission from you.

No idea why you reference CC in this context. You still have not read my post, which is your prerogative. I find it odd that so many people add comments when they don’t read the original post.

Because I was responding to David W.’s response to my post.

“David S, APC funded articles are still under copyright, which can or must be protected, so I do not understand your claim. Even CC-BY requires attribution so is subject to piracy.”

Sorry Joe. This looks like a case of the comments wandering away from the post, which is not uncommon. I did make what I think is a relevant comment at the beginning but no one responded to it.

Yes, this is what happened. It’s also a bit of a glitch in WordPress. When you go to approve a comment, you can’t easily see which comment it is linked to. So I was accusing David Solomon of not having read what I wrote, but in fact I was not properly reading what he wrote!

Joe, one such vanguard product might be the ACS’s ChemWorx. It won this year’s PROSE award for both app and eproduct. In essence it’s a work-management tool, but its brilliance is in how it creates tools for both authoring and sharing work within the chemistry community, using elements of social networks, cloud storage, and authoring tools that have been designed specifically for chemists and their kind of publishing. In a way it’s both a rearguard and a vanguard effort, in that the creation and initial dissemination of ACS content is being done within a walled garden, providing a level of protection for that content that wouldn’t happen if the work were being shared and collaborated upon on the open Internet. Of course, this does leave one wondering if sheltering this content from wider scrutiny by keeping it in-house is an overall good thing for the discipline.

From a business standpoint, the for-profit scholarly journal subscription market is what can reasonably be called a mature market. It has been doing what it does for years, and the market is essentially saturated and unlikely to grow; given the ongoing rounds of journal cuts and the (very appropriate) pressure on libraries to ensure that content purchases reflect demonstrated current demand, the market may even be shrinking. Standard business practice is to ride a mature market until it dries up. A transportation system based on liquid fuel is going to be reluctant to invest in another distribution system until or unless that system has a high probability of being the technology that eventually “wins.” Our meat industry is based on the portability of dried corn even though cows aren’t really well equipped to digest it; the farm industry likes it that way because of its high investment in specialized equipment to plant, grow, harvest, and transport corn, but that too is a mature market.

In scholarly publishing, publishers pay for some costs and risk, absolutely, but they do not pay for the content and pay very little, if anything, for the labor of the editors and editorial staff. Technology allows manuscripts to be submitted, routed, edited, and fed into production with very little waste; the editors and reviewers use their own computers, printers, and supplies and volunteer their time. What continues to be lacking in the Scholarly Kitchen (though I love this blog and read it daily) is the realization that this is an increasingly outdated business model, and publishers who want to protect their revenue, stockholders, etc. had better start developing those new business models before the next generation of scholars renders them obsolete.

What you say here (about editors not being paid and the like) is true only for scholarly journal publishing, not scholarly book publishing.

It is at least partly true for at least some scholarly books. For example, I was recently invited to act as an unpaid reviewer for some chapters of a scholarly book on palaeontology.

I have written book chapters and introductions for scholarly books and have never been offered any payment, so while I understand that textbooks that become standards and required reading are cash cows for publishers and authors, it certainly isn’t always the case.

Good article. In business — and publishing is a business — you move ahead or you fall behind. However, I fear you’re forgetting one important part of the equation: authors.

I would love for my journal to move ahead with dynamic and interactive content. Why any biologist doing field sampling, for example, doesn’t strap on a GoPro camera is beyond me. Modelers can best explain their esoteric works with interactive demos. But I don’t exactly have authors beating down my door with cool applications. The old system of print-and-forget has served them well for decades. How do I move my authors into this brave new world?

The problem of piracy did not begin with the Internet, of course. The major battle that led up to the 1976 Copyright Act had to do with photocopying of printed text. The CCC was started in response to that threat. It has been magnified hugely by the properties of the Internet identified here. But one way to reduce the problem lies in the OA approach itself, which disconnects publication from dependence on the market for generating revenue. It creates new problems at the input side, of course.

As an example of a dynamic and interactive publication, what about CogNet, Joe, which MIT Press initiated a long while ago, as you well know, having served on its board?

-Your statement on PLOS’ founders is insulting.
-When you address what the publishing business is about, the first thing is “shareholders”, the last ones being “authors” and “customers”, which is pretty much the order of priorities of Elsevier, Wiley, Springer, etc. They don’t care about scientific publishing, they care about money.
-Editors like Elsevier, Wiley, etc. do not produce ANYTHING. The works they publish are written by volunteering academics, and reviewed by unpaid panels.

One could argue that print-based publishing of scholarly work, especially textbooks, enabled overreaching copyright claims and now, with the internet, the jig is up. Look at any textbook. It’s primarily a collection of facts and ideas, neither of which is copyrightable.
That said, I like the idea of journals as a dynamic (living?) database where the value of the whole is far greater than the sum of its parts, the articles.
Now, how do we create a saner system for participating in the life of this database? A system where even the appearance of exploitation is nowhere to be seen?

Comments are closed.