It’s always perplexed me — why search engines aren’t paying publishers and other content sites access fees in order to gain permission to crawl them. Instead, content providers kowtow to search engines and pay third-party firms to perform search engine optimization (SEO) work. Maybe we all drank the Flavor Aid back in the heyday of Web hysteria. Maybe we bought the “we’ll bring you traffic” story — a story that has proven true but insufficiently compelling. Whatever the reason, publishers let a potential licensing opportunity get bulldozed right past them.
Now, the German government is planning on a more level-headed approach, according to a recent New York Times article. Chancellor Angela Merkel’s Christian Democratic Union is planning to introduce legislation soon that would force search engines to pay fees. German publishers have been agitating for this for years. According to the story:
Under the proposal, Internet aggregators and search engines would have to pay the publishers if they wanted to display all or parts of their articles — even small snippets like those that are shown in search links.
The proposal postulates a central fee-collection organization, akin to what ASCAP does for music, which would collect on behalf of publishers and disburse the fees proportionately. Interestingly, the proposal will likely include a one-year embargo, after which certain pieces can be displayed at no charge.
On its surface, this proposal makes a lot of sense. In STM publishing in particular, aggregators are common, and contracts with them are the norm. Aggregators build businesses on top of publishers and across publishers in ways publishers cannot quite manage themselves, often adding custom search services on top. The licensing arrangements that facilitate aggregation are on the upswing in many areas.
It also creates the opportunity to reassess how valuable various content pieces are. In the realm of printed books and CD-ROM software, including titles and abstracts at no cost to the likes of MEDLINE made a lot of sense. After all, you wanted a researcher to have a good idea of what they might find in the stacks before they walked down the hall or to the library. Now, with everything a click away, does the value of the title, the abstract, or the summary change? Should NCBI be paying publishers for their most condensed, most fussed over content (titles and abstracts)?
This will be an interesting piece of legislation to follow once it’s introduced. Will it change the game outside of Germany? Will US publishers begin to rouse themselves in a similar manner?
9 Thoughts on "Do Search Engines Owe Publishers? A German Proposal Raises the Question"
“It’s always perplexed me — why search engines aren’t paying publishers and other content sites access fees in order to gain permission to crawl them.”
Really? It’s always perplexed me that publishers and other content sites aren’t paying search engines for the coverage they receive. It seems pretty clear to me that a business that sells access to content needs people to be able to discover that content.
“Chancellor Angela Merkel’s Christian Democratic Union is planning to introduce legislation soon that would force search engines to pay fees.”
And it doesn’t make a lot of sense to me that this would be a matter for legislation. Surely it’s for individual businesses to decide whether they want the free traffic, or whether they prefer to try to make money by charging for indexing? There’s already a perfectly good mechanism (robots.txt) for telling search-engine crawlers to stay out; all that’s required is for Springer (say) to set theirs up to exclude all crawlers, then start approaching Google, Yahoo and the rest to try to cut deals.
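As a sketch of the mechanism the commenter describes (the Robots Exclusion Protocol, now standardized in RFC 9309): a publisher wanting to withhold its content from all compliant crawlers would serve a file like this at the root of its site, e.g. `https://example-publisher.com/robots.txt` (the domain is illustrative):

```
# robots.txt at the site root: ask all compliant crawlers to stay out
User-agent: *
Disallow: /
```

If a deal were later struck with a particular search engine, a more specific group could be added, e.g. `User-agent: Googlebot` followed by an empty `Disallow:` line; under the protocol, a crawler follows the most specific group that matches its user-agent token, so that bot would be re-admitted while the blanket rule continued to exclude the rest. Note that compliance is voluntary — robots.txt is a request, not an access control.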
… but of course they won’t be able to cut deals if I’m right that the publishers need search engines more than vice versa. The way I imagine that going down is that Google will just shrug and omit Springer from their index; and Springer’s traffic, and perceived importance, will drop away. The old saying that “the Internet perceives censorship as damage and routes around it” (John Gilmore, 1993) remains true, and applies to other kinds of barriers than censorship. The bigger picture here is that publishers, like everyone else, exist within an ecosystem that they can’t dictate to. They can either play on that ecosystem’s terms, or opt out. But they have no leverage to change the rules.
On the surface this sounds logical, but is this a bit close to asking indexing & abstracting services to pay for the privilege of covering academic journals? If not, how does it differ?
Why have search engines passed up the opportunity to charge publishers for crawling and indexing their sites?
Do you have a reason to believe that a free market is not operating here? If it is a free competitive market, then one should conclude that the market has established a price of zero for both being crawled and being indexed.
Except for the SEO scam (and it is a scam: SEO scammers charge a very high price for a very easy operation). The willingness of publishers to pay that premium suggests that it is the search engine that is not capturing the value it provides.
If you had a choice of paying $9.99/year to Google to be in the index or paying $9.99/year to GoDaddy to be in the DNS system, which would you choose? (By the way, Google could do that. Google has made domain names an anachronism, and the domain-name sellers a scam.)
So the free market is working with Google but not SEO companies? One is a scam, the other not?
I didn’t say Google is free market! It’s anything but. That’s why Google is going to be awful when it goes evil. Google could start charging for lots of things – like placing in search results – and could charge for being in their index, and an enormous proportion of the web would pay up.
SEO is not a free market because it depends on the ignorance of the purchaser. Watch a couple of (free) Google videos and you know everything there is to know. Free markets require informed participants.
I am much less worried about Google than you are — not because I particularly trust them not to go evil, but because the Web is open and there are other search engines, and if Google reduces its utility then it won’t be the best one any more. Remember how dependent we all felt on AltaVista before Google came along and started doing it better? The non-proprietary nature of the underlying Internet protocols means it’s a level playing field for competitors.
This proposal is crazy! I want as many search engines as possible to index my journal in full. The search results should, however, point to the abstract of the article, which is open for the world to see.
It will be interesting to see where the chips fall and what sustainable models emerge. Taken to a logical conclusion, if content truly does become a commodity in academic and scholarly publishing, we could conceivably have a search engine searching Wikipedia. Yea.
On another note, how in the world can a govt legislate this? I don’t think it could happen in the US …
There is a slight but very significant misunderstanding here. The new law would *not* be applicable to STM publishers and scholarly articles. It would apply only to newspaper and magazine articles, i.e., the products of trade press publishers. That is to say, the journals and articles of Springer would be unaffected, while those of “Axel Springer” (one of the major press companies in Germany but, despite the name, unrelated to the STM Springer) would be. In English-speaking terms: Murdoch’s products would be affected; Elsevier’s would not.