[Image: the logo of Microsoft Bob, via Wikipedia]

My professional time is pretty much evenly divided between representatives of established organizations in the publishing world and upstarts who are looking to create something new, often at the expense of the established companies.  (And within these two broad categories, there are both for-profit and not-for-profit entities.)  This post is addressed to the legacy publishers, for-profit and not-for-profit alike, who perhaps take too much pleasure when a new company or a new (perhaps outlandish) initiative of an established company fails.  I have thought about addressing a different post to the upstarts, but they never listen, ever.

Many years ago, I was working for a corporate conglomerate that just happened to own a big portfolio of publishing companies. Now, you might ask, why would a pillar of capitalism invest scarce dollars in an industry that is backward, cramped, and not likely to grow? The answer was that the conglomerate, which had originally been built by buying up manufacturing concerns, presciently understood that the US was going to lose its manufacturing base, so it diversified rapidly into intellectual property businesses. There was a CEO of the combined publishing businesses, whom I will simply characterize as “colorful,” though some people would undoubtedly have offered adjectives that were stronger. He offered this parable at a meeting that I was invited to attend and at which some of my colleagues were hung by their heels for missing their numbers that quarter.

Let’s imagine (the colorful CEO said) an industry in which there are 10 competitors.  Although they differ somewhat in size and strategy, all of them have meaningful pieces of the market they serve. One day, a new technology or business process comes on the horizon. All of the competitors study this carefully.  It’s a big deal, the kind of thing that could transform an industry. But there are questions about it. It might not work. It will be very expensive to implement. The implementation process itself will distract the management from the core business.  And the core business ain’t bad:  it continues to grow steadily in the single digits every year, is profitable, and has good characteristics for cash flow.

Why fix what isn’t broken?

The arguments against using the new technique are strong enough that five of the 10 competitors decide to sit it out. They will stay focused on their businesses, attempt to get closer to their customers, and continue to work hard on execution, improving their products and services year by year. And they make this decision because they are very good at what they do. Their customers like them; their brands are highly regarded.

The other five companies, however, choose to pursue the new opportunity. Their reasons for doing so are not necessarily identical. Some of these companies are a bit smaller than the others in the overall category and are thus motivated to take on more risk. Perhaps one or two have begun to believe that the market will undergo consolidation in the coming years and that the 10 competitors could be reduced to three or four; they don’t want to be among the acquired. Or perhaps one of these companies has a new CEO who simply has something to prove — to the industry, to the Board, to himself or herself. Whatever the reasons, five companies decline to pursue the opportunity and five companies decide to run after it.

Several years pass. The new technique has now been tested, and the market has spoken. It comes as no surprise that many of the companies that chose to explore the new technology failed miserably at it. One suffered from cost overruns; another found that its technical capability was not up to the task of implementing the new procedures; another got so distracted with the problems of launching the new process that it took its eye off the ball and began to lose existing customers to its rivals. In the end, four of the five competitors failed in their attempts and either went out of business or had to be sold to a competitor.

The lesson for these four was clear:  innovation is a difficult task and few who take it on are successful.

As the unsuccessful management teams dusted off their resumes and began to look for new jobs, they were met with predictable homilies:

  • If it ain’t broke, don’t fix it.
  • A bird in the hand is worth two in the bush.

The wisdom of the ages never felt more tangible to them than in the moment of their failure.

But how about company #5? This company pursued the new opportunity and got it to work. It wasn’t easy, and they made many mistakes along the way, causing them to doubt their decision many times; but in the end they prevailed, perhaps as much by luck as by talent. Now they were the one company in the market with a new way of doing business. They acquired some of their competitors and saw their market share grow. Most importantly, the innovation enabled the overall market to grow, so company #5 had a bigger slice of a bigger pie. They too were met with homilies:

  • To the victor belong the spoils.
  • Nothing ventured, nothing gained.

There appears to be a wise old saying for every possible occurrence, making wisdom a contradictory and unreliable pilot.

But what of the other five companies? They were prudent and passed on the new technology. For a time, they continued to grow, in part at the expense of their rivals who were struggling unsuccessfully to implement the new technique. But after a while, the one company that had succeeded with the new technology began to take away the customers of the more conservative firms. The new technology drove down costs, bringing new customers into the industry, which only the successful innovator could serve. And the new technology improved service, which helped to pry away customers who had worked for years with the innovator’s rivals.

In the end, the five prudent companies began to see their growth stall. Customer defections put pressure on revenue, which triggered rounds of cost-cutting. Cost-cutting made it harder to invest in new technology, making it increasingly difficult for these competitors to compete with the industry’s one successful innovator. In time, these five companies found themselves marginalized in the marketplace. Some were sold to the successful innovator; others retreated to a more limited strategy, serving a small niche.

The moral of the story (the colorful CEO said) is that the outcome is the same for companies that take risks and fail as it is for companies that don’t take risks. There is no place for prudence in business, no place for holding a pat hand.

I offer this parable to those who criticized Microsoft for the unsuccessful launch of the “Bob” user interface; to those who ridicule Google for the once much-ballyhooed Wave project, now aborted; to the critics of the many unsuccessful attempts (and still counting) at harnessing social media to STM publishing; and to people everywhere who confuse skepticism with foresight.

Great companies have the ability to fail well. If we want to be critics, let’s focus our attention on companies that have not made a mistake in the past 10 years.

A few weeks after the CEO delivered his parable, he fired my boss. Soon after, I joined my colleagues in being hung by the heels at an operating review.

Joseph Esposito

Joe Esposito is a management consultant for the publishing and digital services industries. Joe focuses on organizational strategy and new business development. He is active in both the for-profit and not-for-profit areas.

Discussion

11 Thoughts on "A Parable of Innovation in Publishing — A Mostly True Story"

Amen! Those STM companies that dipped a toe in the waters of this thing we call social didn’t fail; they experimented and learned from what they were doing (e.g., spam: it’s a seriously massive problem and hard to solve). The other day I read that James Dyson, inventor of the Dyson cyclonic vacuum cleaner, failed over 5,000 times before he got it working.

If only life were this simple, we could all live by parables. Unfortunately, there is another case, probably far more common, where all the firms that try the new gimmick lose. In that case, skepticism is the winning strategy.

More realistically, however, every firm is trying new things, but risk capital is a scarce resource, so they can’t all try the same new thing. In this case, skepticism toward most innovations is unavoidable, simply because there are so many possibilities. That is certainly the case in publishing today. The trick is not to drown in possibilities.

Not sure I buy the parable either.

If you can, do something – but do something that only you can do (subject area, people, expertise, belief) where you can see what it might offer you. What you learn may not be what you expected, but learn all the same for what you do next. If you’re not convinced by the latest fad, let it go, but build flexibility and monitor in case you’re wrong.

Small is OK; you don’t need to bet the shop on every innovation. It’s not necessarily the app that will benefit your business, but what the app teaches you might.

The parable may not have relevance in many cases, particularly where there’s no “first-mover advantage” and no dominant hold on a particular market accrues to the risk taker.

In the case of something like eBay, the first company to crack the online auction market gained a dominant hold on the entire market, because there was a need for centralization: more sales and more customers in one location make for a better marketplace. It’s unclear to me, though, how many of the risks currently being taken would result in that same centralization.

Looking at how people are approaching the use of semantic technologies as one example: several large publishing houses are spending upwards of seven figures on experimenting with these technologies and implementing them on their journal platforms. So far, the results have been underwhelming, but should they hit on some useful functionality for readers, what’s to stop every other publisher from implementing that same feature?

For presses without an enormous war chest of money to spend on experimentation, is the role of a “fast follower” more appropriate than that of a “first mover”?

What do these semantic technologies do or look like, David? (Assuming it is not a secret.) I might want to fast follow them. If it is RDF triples, don’t hold your breath.

Besides, I have a bit of a content monopoly with my DOE report portal: http://www.osti.gov/bridge/, so I am not looking to compete with anyone. We recently implemented semantic “more like this” technology and it works very well, but it is not new; it came with the new software.

Your basic point is well taken. I just developed a new semantic algorithm that finds the core articles for a given problem or topic. (I call it the X-Portal, where X is the topic.) But if one publisher implements it, it will be relatively easy for others to reverse engineer it. This is part of the “information wants to be free” problem. Knock-offs are trivial.
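To make the “knock-offs are trivial” point concrete, here is a minimal sketch of the kind of similarity-based ranking under discussion. It is emphatically not the commenter’s actual algorithm, which is not described here; it only shows how little code a plain TF-IDF approach takes, treating “more like this” as nearest neighbors and “core articles” as the documents closest to the collection centroid. All names and the toy corpus are invented.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Turn tokenized documents into TF-IDF weighted term vectors."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        counts = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(1 + n / df[t])
                        for t, c in counts.items()})
    return vectors

def cosine(a, b):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def more_like_this(vectors, i, k=2):
    """'More like this': the k documents most similar to document i."""
    ranked = sorted(((j, cosine(vectors[i], v))
                     for j, v in enumerate(vectors) if j != i),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

def core_articles(vectors, k=2):
    """'Core' documents: those closest to the collection centroid."""
    centroid = Counter()
    for v in vectors:
        for t, w in v.items():
            centroid[t] += w / len(vectors)
    ranked = sorted(((i, cosine(v, centroid)) for i, v in enumerate(vectors)),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:k]

# Toy corpus: pre-tokenized documents.
docs = [
    "open access mandates change journal economics".split(),
    "journal economics under open access pressure".split(),
    "semantic tagging improves article discovery".split(),
]
vectors = tfidf_vectors(docs)
print(more_like_this(vectors, 0))   # document 1 ranks far above document 2
print(core_articles(vectors, k=1))
```

A production version would add stemming, a real index, and field-specific weighting, but the barrier to a knock-off is plainly low; the defensible part is the corpus and the editorial judgment around it, not the ranking code.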

ScienceDirect did a big project doing semantic analysis of their journals and introduced ways of adding additional information to the text within journal articles. Here’s an unflattering review of the results:
https://biochembelle.wordpress.com/2010/09/25/e-publishing-sucks-or-why-im-still-killing-trees/

And I know that there are other publishers working with similar technologies (not sure on how public these are yet, but others reading this who want to chime in, please do so).

The Scholarly Kitchen’s own Michael Clarke is the Executive Vice President of SilverChair, a leading supplier of these technologies. More info on what they do and some case studies here:
http://silverchair.com/semantics.aspx

Don’t forget the role that funding bodies like Mellon can play in encouraging experimentation while reducing or eliminating risk to the publisher(s) trying out a new approach, as in the Gutenberg-e project. This can be critically important in paving the way to the future, especially in non-profit publishing where capital resources are scarce.

Thanks for the shout-out, David. I think semantic technology is one case where experimentation and dabbling get you not much of anywhere. That is because what semantic technology allows one to do can, in most cases, be done other ways – the problem is that those other ways are a lot harder, a lot more expensive, and take a lot more time. And when one is in a market that is changing at the rate the information industry is changing, speed can become more than a small competitive advantage.

But here is the thing – you don’t get there by half measures. You can’t bolt semantic technology onto existing systems and processes and expect to see any substantive benefit. You have to go all in and rebuild your entire architecture – the entire way you think about information – from the ground up. Otherwise, like the example David C. cites above, you end up with, at worst, half-baked “experiments” that really don’t tell you much of anything or, at best, a few well-executed features that are nice but are not going to move the needle in terms of market share, because none of them is, in and of itself, a “killer app.” The killer app is dozens of subtle improvements that in the aggregate add up to something demonstrably better that can be built faster.

That can’t be copied unless one is willing to first do the hard work.

Thanks, Michael. My question is: which semantic technologies are publishers actually investing in, there being so many? XML (including mashups, etc.), RDF (including triples), OWL, taxonomies, ontologies, tagging, natural language processing (including IBM Watson), more-like-this, clustering, networks, emergence, diffusion, visualization (Boyack graphs, etc.), and so on?

The term “semantic technologies” has become so broad that it is almost meaningless, like “artificial intelligence” in the late 1980s. On the other hand, NLM has poured hundreds of millions into semantic technologies in the medical area, so I am sure there is something there. We poor cousins in the physical sciences would like to know what is actually working. Hence my question.

Plus, don’t rule out small, game-changing stuff. I have some small stuff in development that is pretty good: for example, an algorithm that estimates the learning level of scientific writing, based on a semantic model of science education. (People have no idea how cognitively complex science education really is.) History suggests that little things (like HTML) can make a big difference if they are basic enough.
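The learning-level algorithm itself is not described, so what follows is only a hypothetical guess at the general shape such a measure might take: assume a lexicon that maps technical terms to the education level at which they are typically taught, and score a passage as the average level of the terms it uses. The lexicon, the grade numbers, and the function name are all invented for illustration.

```python
# Hypothetical sketch only: a lexicon-based learning-level estimate.
# The toy lexicon maps terms to the approximate grade level at which
# they are typically taught; a real semantic model of science
# education would be far larger and more structured.
TERM_LEVELS = {
    "energy": 4,
    "molecule": 7,
    "entropy": 11,
    "eigenvalue": 14,
    "renormalization": 17,
}

def learning_level(text, lexicon=TERM_LEVELS):
    """Average the grade levels of the recognized technical terms."""
    levels = [lexicon[word] for word in text.lower().split()
              if word in lexicon]
    return sum(levels) / len(levels) if levels else None

print(learning_level("entropy constrains how energy disperses"))  # 7.5
```

Even this crude averaging illustrates the point that small, basic measures can be useful out of proportion to their complexity.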

So which semantic technologies do you see working, and where?

I completely agree that “semantic technology” has become a very broad bucket and you are right – people use it when they mean different things.

And different technologies work in different fields and in different scenarios. In the medical sciences, we see normalized taxonomies and (automated) tagging as two technologies that can do a lot of heavy lifting. If you can figure out what your content is about, normalize that to field standards, and then tag the content so that it can be quickly and easily found, that is a huge accomplishment. Entity extraction (of, say, drug names or gene sequences) can also be deployed in interesting ways. With these tools you can quickly build new products, or new channels through existing products.
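As a concrete illustration of that classify, normalize, and tag pipeline, here is a minimal dictionary-based sketch. It is not any particular vendor’s system; real platforms use curated field-standard vocabularies (MeSH, for instance) and proper tokenization rather than the naive substring matching used here for brevity. All names and the toy vocabulary are hypothetical.

```python
# Toy controlled vocabulary: surface form -> canonical taxonomy term.
# A production system would load a field-standard vocabulary instead.
SYNONYMS = {
    "acetylsalicylic acid": "Aspirin",
    "asa": "Aspirin",
    "aspirin": "Aspirin",
    "myocardial infarction": "Heart Attack",
    "heart attack": "Heart Attack",
}

def extract_tags(text):
    """Return canonical tags for every surface form found in the text.

    Naive substring matching keeps the sketch short; real entity
    extraction tokenizes and disambiguates (e.g., "ASA" the drug vs.
    "ASA" the standards body).
    """
    lowered = text.lower()
    return {canonical for surface, canonical in SYNONYMS.items()
            if surface in lowered}

doc = "Low-dose ASA reduces mortality after myocardial infarction."
print(extract_tags(doc))  # {'Aspirin', 'Heart Attack'}
```

Once documents carry normalized tags, “quickly and easily found” follows almost for free: search, filtering, and new channels can all key off the canonical terms rather than the messy surface text.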

Ontologies are very interesting but hard to assemble; everything I’ve seen there is still pretty theoretical, though the area holds great promise.

Will not count out the small stuff! Fair enough. Google is really just an algorithm after all. Would love to see what you are doing with yours.

Commenting a bit late on the story, but the parable is right along the lines of the argument in “The Innovator’s Dilemma” by Clayton Christensen, which also provides a number of case studies from various industries and a lot of supporting data.
