One of the axioms of the Internet is that bits are free. That is, once you pay for the cost of putting something online, the cost of distributing each copy of the original material is zero. People who actually work with online material know that this is balderdash (there is the cost of customer service, maintenance of the hosting service, potentially a cost for royalties, etc.), but certainly there is a difference in the economics of print and digital publishing. Relatively speaking, print has moderately high fixed costs and very high variable costs; digital material has even higher fixed costs, but tiny variable costs. This basic economic formulation has given rise to the world of the Internet as we know it today with a plethora of free services, some of astonishing value, of which Google is simply the most prominent. But it wasn’t always this way and it may not be that way forever. There are powerful parties who have an interest in placing a price on bits, and if and when that happens the Internet will evolve into something different.
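A toy calculation makes the contrast concrete (all figures invented for illustration): print's per-copy cost stays pinned near its variable cost, while digital's higher fixed cost is amortized toward zero as volume grows.

```python
# Toy cost model with invented numbers: print has moderate fixed costs and
# high variable costs; digital has higher fixed costs but tiny variable costs.

def average_cost_per_copy(fixed: float, variable: float, copies: int) -> float:
    """Total cost divided by the number of copies distributed."""
    return (fixed + variable * copies) / copies

for copies in (1_000, 100_000, 10_000_000):
    print_avg = average_cost_per_copy(fixed=50_000, variable=4.00, copies=copies)
    digital_avg = average_cost_per_copy(fixed=150_000, variable=0.001, copies=copies)
    print(f"{copies:>10,} copies: print ${print_avg:,.2f}/copy, digital ${digital_avg:,.4f}/copy")
```

At small volumes print wins; at Internet scale, the digital copy costs a fraction of a cent.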
A bit of history. For those who have been involved in the online game for a long time, such names as CompuServe, The Source, Dialog, BRS, and America Online (in its original form) will be familiar. All these companies started out in the pre-Internet era, and some survive to this day, albeit in reduced form (Dialog, AOL). These organizations managed their own networks and charged a fee for use, a fee that varied with the kind of content that was offered and the amount of time a user remained connected to the network. When the Internet came along, some of these organizations attempted to replicate this model, but soon it became obvious that this new vast network of networks would change everything.
As I recall, the game-changing business model was invented by an Internet service provider (ISP) called Netcom, which came up with the radical idea of providing residential dial-up access to the Internet for a fixed monthly charge of $19.95. Early adopters jumped on this, and not long afterward the AT&T WorldNet service matched the offer, bringing its brand name and marketing authority to a mass market. AOL responded with a fixed-rate service of its own (which, amazingly, still exists) and rapidly became the world’s largest ISP. Allowing for technical advances and a new cast of players, this is the world that most of us live in today. We pay for residential Internet access with a fixed fee, a fee that varies by ISP and speed. Like a subscription to Netflix, once you pay that fee, there are no incremental costs. This financial model gave rise to a great number of free content services, which has put enormous pressure on traditional content providers, who struggle to monetize their investments. We should bear in mind that the open access movement is based on the notion of free access to the Internet, a notion that reflects a particular period of history, which may be coming to an end.
If Netcom, AT&T, and AOL were responsible for creating one paradigm (abetted by Tim Berners-Lee, Marc Andreessen, and many others), the principal creator of the new paradigm was Steve Jobs, who transformed the world of computing with the iPhone. Mobile access to the Internet now exceeds access from PCs. A new ecosystem is growing up around smartphones (app stores, Twitter, Instagram). Although this ecosystem is primarily a consumer phenomenon at this time, scholarly publishers are getting into the act, too. I recently saw a group of proposals from scholarly content-management platform companies, and every one included a section on how a scholarly client would be able to make its content (journals) available on mobile devices. Now you can flip through the abstracts from a journal of biostatistics on your Droid phone or watch an animation of a medical procedure on your iPad. It truly is a new world, is it not?
There is no Internet without Internet access, however, and the ISPs that provide mobile access are not content to work with the business models of their brethren in the landline business. Mobile telecommunications companies are now placing data caps on subscribers, something you will experience soon, if you have not already, when you traipse over to an AT&T or Verizon Wireless dealer. The data caps are set fairly high (my daughter, who streams Pandora when she drives, has yet to hit the ceiling), but they are not really intended to tax the user. The real target is Net Neutrality and the opportunity to collect rents from two sides–that is, the real target is the Internet companies (Google, eBay, Yahoo, etc., etc.) that have enjoyed unfettered access to their many users.
Net Neutrality is a complicated topic, which is based in politics, not technology or economics. Its core principle is that everyone should be able to use the Internet on an equal basis. Thus a blogger can speak to the world with as much force as, say, the journalists at Fox News or CNN. Without Net Neutrality, large providers of content-based services would be able to purchase preferential access to users, most likely in the form of more bandwidth. It should be apparent that ISPs don’t like Net Neutrality, as they would like to charge more to content providers that use up a larger portion of the capacity of their network.
It’s never good to underestimate the marketing ingenuity of the telecommunications companies. I imagine a marketer driving to his local shopping mall, which charges a fee for parking. He then steps into the supermarket, buys several items, and gets his parking ticket validated before leaving the store. The shopping mall customer is supposed to be charged for parking, but in fact the fee is paid by the supermarket, which benefits from having the customer visit the mall. That customer then goes into work at Huge Wireless Telco, Inc. and declares that he has found a way to subvert Net Neutrality.
Switch to the world of mobile Internet access and you will see how this works. You pay, say, $30 each month for a wireless data plan, which comes with a cap on how much data you can consume (perhaps 2 gigabytes). But the producers of Internet content don’t want you to stop using their service when you get close to the cap any more than the supermarket wants you to stop shopping because you have to pay a parking fee. So the wireless telecommunications company approaches the biggest producers of Internet content and asks them to pay for the equivalent of a parking fee validation. Thus Google, eBay, Yahoo!, and their ilk would have to pay a price so that their services would not count for the individual’s data cap. We should not be surprised that these companies will fight the wireless ISPs very hard to prevent this from happening.
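To make the parking-validation analogy concrete, here is a minimal sketch of how zero-rated ("sponsored data") billing could work on the carrier's side. The provider names, cap, and numbers are hypothetical; the point is simply that a sponsor who has paid the carrier has its traffic exempted from the subscriber's meter, just as the supermarket pays the parking fee.

```python
# Hypothetical sketch of zero-rated billing: traffic from providers that have
# paid the carrier ("validated the parking ticket") does not count against
# the subscriber's data cap; everyone else's traffic is metered.

DATA_CAP_BYTES = 2 * 1024**3                           # e.g., a 2 GB monthly cap
SPONSORED = {"bigsearch.example", "bigvideo.example"}  # hypothetical sponsors

def metered_bytes(sessions: list[tuple[str, int]]) -> int:
    """Sum the bytes that count against the cap; sponsored hosts are exempt."""
    return sum(n for host, n in sessions if host not in SPONSORED)

sessions = [
    ("bigvideo.example", 1_500_000_000),  # zero-rated: the sponsor pays the carrier
    ("smallblog.example", 300_000_000),   # metered: counts against the user's cap
]
used = metered_bytes(sessions)
print(f"Metered usage: {used / 1024**2:,.0f} MB of a {DATA_CAP_BYTES / 1024**2:,.0f} MB cap")
```

Note the structural consequence: the sponsored giants become cheaper for the user to consume than the unsponsored small provider, which is precisely why the latter fear this arrangement.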
Let’s recap before moving to the response to the wireless ISPs:
- Internet access has been effectively unlimited for some time, but that era may be passing.
- More and more Internet access comes from mobile devices.
- The wireless ISPs have found a way to put an end to unlimited access to the Internet.
- Hence the model of “free information” is going to come under challenge, as content costs become entangled with bandwidth charges.
The Holy Grail is to break the chokehold that the ISPs have on Internet access. It is little remarked upon that the single largest share of revenue from the Internet has been delivered to the pockets of the ISPs–that is, the revenue from Internet access collected by Verizon, AT&T, Comcast, and their ilk vastly exceeds the revenue of Google, Yahoo!, and the other Internet media companies. (Ecommerce, as in Amazon and eBay, is a separate matter.) Thus the Internet companies seek to break the hold that the ISPs have on their users. No company has put more energy into this than Google, which has investigated a number of ways to cut into the ISPs’ markets, including bidding on spectrum and developing its own “phone” service, Google Voice.
Even more ambitious is Google’s invasion of the landline ISP market with Google Fiber. (You can tell Google is a youthful company by the choice of name. When I first heard about Google Fiber, I was puzzled why Google was getting involved with a dietary supplement.) While the low-cost and high-speed connections of Google Fiber are not directed at the wireless market, once the infrastructure is in place, it would not be a major engineering feat to hang WiFi access on top of it, thereby providing a direct challenge to the wireless ISPs. When you combine Google Voice with WiFi, there is no need to have a phone subscription of any kind.
More romantic is Google Loon, an experimental WiFi network being set up in New Zealand. Loon consists of interconnected high-altitude balloons that would provide WiFi network services to specific areas on the ground. It is being touted as a way to provide Internet access to the developing world and rural areas, but it takes little imagination to think of Loon as someday coming to a neighborhood near you. Look! Up in the air! It’s a bird! It’s a plane! No, it’s Google!
This is the battle that is currently being waged: the chokehold ISPs vs. the advocates of Net Neutrality. The cost of content on the Internet will vary depending on the outcome. If I may editorialize, with all the current talk about “new” business models, it’s easy to forget that we all live in the world that Theodore Vail built. The AT&T monopoly has formally been put to death, but it lives on in the practices of the Internet access business. We don’t have “new” business models; we have “current” business models. Strategic planning in the content industries must take into account the much larger game being played by economic giants.
Discussion
13 Thoughts on "What Happens When the Marginal Cost of Content is No Longer Zero?"
There is also the potential for government intervention in various forms. Internet commerce is vibrant and vital to continuing economic recovery, future prosperity, etc. Therefore, the U.S. Congress and its kin in other countries will become more involved. Some may even nationalize their segment of the internet. BTW, even usage of the term “Internet” is changing (to “internet”), despite the style guides that insist on capitalizing.
A great essay, as usual, with a lovely joke about Google Fiber to boot!
I think your point about whether business models can be “new” or are just dealt from a pretty finite deck so that there’s a “current” hand being played is well-taken. Scarcity (buzzword) is a major driver of which cards have value at the table. The wireless carriers are creating scarcity, and others are seeking to remove it.
It’s always useful to reflect on analogous situations (your parking validation analogy) because then we can see which hand is being formed across the table.
Thanks for a great and thoughtful piece.
Excellent essay. Started me thinking about the long-term costs and pricing model for open access sites: a one-time fee, perpetual online access. The cost of maintaining the sites presumably will increase; however, there is only one fee payment by the author, the profitability of which is eroded over time. Could publishers start charging renewal fees?
I’m half-retired and not involved with any of the companies mentioned, but worked @ Bell Labs 1973-1983, including 5 years in the Loop Maintenance Operations Systems lab, among other things managing software for analyzing Outside Plant operations, i.e., from local switch to subscriber. Another lab in the same building did much early work on cellular radio.
Indeed, this is messy and complex and political … but telecom has long been that way, and much of the public simply does not understand the technical infrastructure of the telephone system or the Internet, the financial realities, or the government policy interactions. The only constant is the wish to get someone else to pay 🙂
1) Infrastructure, especially the “last mile,” costs $, although cellular radio has surely helped, especially in the developing world.
2) Long ago, the US government (and state PUCs) decided (certainly with Bell System concurrence) that universal telephone service was desirable. It costs far more to run a phone line to a rural customer than to handle an urban/suburban customer. The rules caused urban/suburban subscribers to subsidize the rural ones (and I’d guess the US Postal Service works this way, as well.) Whether or not this is a good policy is a subject of legitimate debate.
A new version of the same problem is DSL bandwidth.
We live in a suburban/semi-rural community just up the hill from Stanford. Unsurprisingly, higher-bandwidth DSL lines are installed earlier in Palo Alto, Mountain View, etc., and some people at the top of Skyline Drive (really rural) still have low bandwidth or none at all. Should everyone be guaranteed high bandwidth at a moderate price? (Reality: it is much easier to provide this in a dense area like Seoul, Korea.)
3) Telecom provides what feels like dedicated access, but it is an illusion. There are many shared resources, and resources cost $ to build, install, manage, and repair:
a) The number of cellphone connections and the bandwidth handled at a cellphone tower, or the equivalent when using WiFi in a coffee shop (or anywhere else).
An amusing but common occurrence is the juxtaposition of people complaining about poor service while blocking installation of new cellphone towers and antennas, even when disguised as trees.
b) Backhaul cable bandwidth from a tower to the local switch.
c) People might think they have a dedicated cable from their home to a switch, and sometimes they do, but sometimes multiple lines are multiplexed via a Subscriber Loop Carrier system. Service gets blocked if all resources are already used.
d) Switches (whether telephone or Internet) are shared resources. In the old Bell System days, it was a little easier to model, because the networks were engineered to handle (similar, low-bandwidth) telephone calls, with much study of busy-hour/busy-day (like Mother’s Day) statistics. If one over-provisions, it is expensive, and the costs have to be charged back; if one under-provisions, service degrades (see the sketch after this list). A big surge of use can cause localized havoc, at the least:
a) Phone calls in disaster times, like earthquakes.
b) The “Boston astrologer syndrome”, in which an unanticipated blurted phone number crashed the New England telephone system in 30 seconds (many decades ago).
c) As I recall, early Victoria’s Secret and Intel Super Bowl web ads caused Internet congestion.
At least, when you made a (circuit-switched) phone call, once the call got through, you had the circuit. When network switches/routers get overloaded, they start dropping packets.
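The classic tool for the provisioning trade-off described in (d) is the Erlang B formula, which gives the probability that a call arriving during the busy hour finds all circuits occupied. A minimal sketch, with made-up traffic figures, using the standard numerically stable recurrence:

```python
# Erlang B: blocking probability for an offered load E (in erlangs) on m
# circuits, via the recurrence B(E, k) = E*B(E, k-1) / (k + E*B(E, k-1)).

def erlang_b(offered_load: float, circuits: int) -> float:
    """Probability that a busy-hour call finds all `circuits` trunks occupied."""
    b = 1.0
    for k in range(1, circuits + 1):
        b = offered_load * b / (k + offered_load * b)
    return b

# Illustrative only: 100 erlangs of busy-hour traffic on various trunk counts.
for m in (90, 110, 130):
    print(f"{m} circuits -> {erlang_b(100.0, m):.1%} of calls blocked")
```

The steepness of the curve is the engineer's dilemma: a few trunks too few and blocking soars; a few too many and expensive equipment sits idle.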
4) This is no defense of ISPs or anyone else, just observations about realities of telecom infrastructure, which generally has worked so well that people take it for granted.
Bandwidth may seem “free” sometimes, but it really isn’t, and any time there is a shared resource, there will be tussles over the allocation of costs. It’s not a new problem at all, but it is trickier now because we’ve gone from:
- simple, consistent, low-bandwidth phone calls
- to much more complex mixes of phone calls, web browsing, and video, in which inexpensive user devices can place wildly varying loads on shared resources
- and with much trickier combinations of users, telecom infrastructure, and content providers, the latter really not being very important in the telephone-only days.
John,
I recently read Jon Gertner’s history of Bell Labs. The breadth of the work you guys did, and the number of inventions that originated at the Labs that we today take for granted, boggles the mind. The complexity of the telecommunications systems is a wonder of the age. We are all in your, and your colleagues’, debt.
I’m interested in the story of the Boston Astrologer syndrome and the crashing of the New England phone system (is this a particular incident?) – I can find no record of it on the web.
Best and thanks,
Mike
Thanks for the kind words! (And most of that was other people; I was only there 10 years, barely long enough to acquire a “Bell-Shaped Head.”)
I forgot that the Boston astrologer story was long ago and not well known enough to appear on the Internet, i.e., it is non-information 🙂
I heard it anecdotally only because I visited Boston monthly for years to plan and then execute a field trial for a new data mining system.
BACKGROUND
It illustrates the surprises that can happen to networks with shared resources.
#1 ESS electronic switching only started ~1965, as did switches from other vendors, and it took decades to deploy widely. Before that, switches were electromechanical, so there had to be a direct electrical connection the whole way.
So, a person picks up their phone and dials, and their local exchange (Class 5) attempts to route the call up the hierarchy, over-simplified as a tree, with leaves at Class 5, branches at Class 4, bigger branches at Class 3, etc. Modern networks are flatter, given smarter switches.
It’s been a long time, but I’d guess there were Class 5s, some Class 4s, and then maybe Class 3s in Boston, which was definitely the hub for New England. In an over-simplified model, a call went up the hierarchy as far as it had to, and then down the hierarchy on the other end, and since it was circuit-switched, it tied up equipment at each step that no one else could use.
If, at the end, the Class 5 serving the callee found the line busy, the circuit would be unwound and the equipment freed, but this took real time.
Reality was more complex, but that’s the idea.
If a radio or TV show was going to give out a phone number for a call-in, they were supposed to arrange it in advance with the phone company.
THE ASTROLOGER
An astrologer on a Boston radio or TV show blurted out, “If you’d like a horoscope, call #.”
A raft of people picked up the phone to call. The calls went up the hierarchy, swamping not only the top of the local hierarchy in Boston but other exchanges as well, as each call allocated equipment the whole way. After calling, waiting, and then getting a busy signal … people tried again.
Wham: in 30 seconds, the New England phone system was down.
Telcos started creating special “choke” exchanges. I don’t know whether Boston didn’t have these yet or they simply had not come into use, but in any case, the astrologer’s # wasn’t on one.
These special-cased what has since become common in electronic switches, which communicate status to each other: surges of traffic to something that is already busy generate “reorder” (“fast busy”) from the local exchange, thus avoiding the congestion.
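A toy simulation of the failure and the fix described above (all trunk counts hypothetical, not anyone's actual network): without choking, every attempt to reach the swamped number holds trunks at each level of the shared hierarchy until it is unwound, starving unrelated traffic; with rejection at the edge, the surge never touches the upper levels.

```python
import random

# Ten local (Class 5) exchanges feed shared Class 4 and Class 3 switches.
# Each call attempt seizes a trunk at every level of its path; if it is
# blocked mid-path, it unwinds what it grabbed. Completed circuits stay held
# while callers wait on the busy signal, which is what starves the hierarchy.

def run_surge(attempts: int, choke: bool) -> dict[str, int]:
    free = {f"class5_{i}": 40 for i in range(10)}  # edge trunks per exchange
    free.update(class4=120, class3=150)            # shared upper levels
    for _ in range(attempts):                      # the audience redials furiously
        if choke:
            # Choke / reorder: the edge already knows the callee is swamped,
            # so it returns fast-busy without touching upstream trunks.
            continue
        path = [f"class5_{random.randrange(10)}", "class4", "class3"]
        held = []
        for level in path:
            if free[level] == 0:        # blocked mid-path:
                for h in held:          # unwind whatever was seized
                    free[h] += 1
                break
            free[level] -= 1
            held.append(level)
    return {k: v for k, v in free.items() if not k.startswith("class5")}

print("shared trunks left, no choke:", run_surge(10_000, choke=False))
print("shared trunks left, choke:   ", run_surge(10_000, choke=True))
```

Without the choke, the shared Class 4 trunks go to zero and everyone routed through them is cut off; with it, the shared levels remain untouched.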
Victoria’s Secret had Internet and server problems in 1999, but did better in 2000, which Akamai bragged about.
Anyway, as usual, with infrastructure shared at any level, it is always too expensive to provision to handle everything everybody might want to do, so people:
a) Design for normal load, with reasonable headroom.
b) Do special planning and resource allocation for known, expected surges.
c) Design networks so that unanticipated surprises choke off traffic where it causes the least congestion, as above, or, as in the 1989 Bay Area earthquake, shut off incoming phone calls so that people could call out.
d) And there is always huge complexity behind the scenes to make something seem simple.
Thanks for the interesting blog post; however, I take issue with the following statement.
“We should bear in mind that the open access movement is based on the notion of free access to the Internet, a notion that reflects a particular period of history, which may be coming to an end.”
The open access movement was never based on free access to the Internet. It is based on the fact that the incremental cost of distributing a digital copy of an article is so small as to be inconsequential, as opposed to paper distribution, where the incremental printing and distribution costs of a single article are significant. Digital distribution simply allows the possibility of funding publishing by means other than charging subscription fees.
Internet access is a separate service that has never been free. The fact that ISPs (not publishers or other content providers) are starting to put caps on the number of gigabytes a user can download is not the same issue. We’re also talking about caps measured in gigabytes, whereas the typical scholarly article might be a few megabytes.
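The back-of-the-envelope arithmetic, with assumed but plausible sizes:

```python
# Illustrative sizes only: even a modest mobile data cap dwarfs the size of
# a typical scholarly article PDF.
cap_gb = 2          # a common mobile data cap
article_mb = 3      # assumed size of a typical article PDF
articles_per_cap = cap_gb * 1024 / article_mb
print(f"A {cap_gb} GB cap covers roughly {articles_per_cap:,.0f} such articles")
```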
David, thanks for making this excellent point. I’m afraid that the bias here is in favor of maintaining scholarly publishing as the “mini-me” of commercial academic publishing. Revenue is needed to continue funding editorial and staff positions in the current mix of for-profit and non-profit organizations. The idea that these functions could be performed in other, more efficient ways garners only polemical attention.
I’m a bit surprised to hear that Open Access publishing requires no revenue, editors, or staff. Many who work at and reap the profits from OA publishing may be equally surprised.
I don’t think that anyone is saying that the internet is free or that curation in scholarly and other kinds of publishing isn’t good and necessary. However, digital publishing now allows us, for the first time in a long time, to rethink the question of how best to acquire and pay for these affordances. Same-o, same-o may not be our best option in the digital era.
I’m afraid that David Solomon did not understand my post or perhaps he never read it. In a wireless world Internet access may come with a variable cost. This will affect all producers of content–big and small, fat files or thin, subscription-based or free to the user. OA is simply collateral damage of this long-term trend. Nothing wrong with OA. Enjoy it while the party lasts.