Having edited both Ernest Hemingway and F. Scott Fitzgerald, Maxwell Perkins became an iconic figure, the foremost representative of the art of editorial judgment. Indeed, A. Scott Berg’s biography of Perkins bore the subtitle Editor of Genius. In some circles the notion of editorial genius, or even editorial judgment, is dismissed as hogwash, or perhaps as an atavism of the print era. In others it is revered, whether the editor in question serves as a gatekeeper to novelists at HarperCollins or Penguin Random House or sits atop a leading STM journal. What is clear is that we still have gatekeepers, regardless of the form of publishing. Increasingly these gatekeepers have a decidedly post-human aspect, as though Perkins had been downloaded into the shell of Arnold Schwarzenegger in his title role in The Terminator.

Wax bust of Arnold Schwarzenegger as The Terminator (image via Wikipedia)

Of course, no one called it gatekeeping when Google announced recently that they were giving higher search-engine ranking to sites that use encryption. This is a somewhat technical matter with very large marketing implications. The editors–yes, the editors–of the Wall Street Journal did a nice job explaining this to us carbon-based life forms. If you are responsible for a Web site and all that entails–increasing traffic, identifying the optimal demographic group, having site visitors take desired actions–then what Google thinks is important is important to you. What Web manager can afford a downgrade from Google for whatever reason? Thus Google is using its leverage, its extraordinary ability to bring traffic to a site, to influence the engineering of Web sites everywhere.
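In ranking terms, the change is easy to sketch. Here is a toy scoring model (every signal name and weight below is invented for illustration; Google’s actual algorithm and weights are proprietary) showing how HTTPS can act as a small tie-breaking boost among much heavier signals:

```python
def rank_score(page):
    """Toy search-ranking score: a weighted mix of signals, one of
    which is whether the site serves HTTPS. Weights are illustrative
    only; the real algorithm is secret and far more complex."""
    score = 0.0
    score += 10.0 * page.get("relevance", 0.0)   # dominant signal
    score += 2.0 * page.get("authority", 0.0)    # link-based signal
    score += 0.5 if page.get("https") else 0.0   # small encryption boost
    return score

# Two hypothetical pages, identical except for encryption.
pages = [
    {"url": "http://a.example",  "relevance": 0.9, "authority": 0.5, "https": False},
    {"url": "https://b.example", "relevance": 0.9, "authority": 0.5, "https": True},
]
ranked = sorted(pages, key=rank_score, reverse=True)
# All else being equal, the encrypted site wins the tie-break.
```

The point of the sketch is that a signal like this need not be large to matter: where two sites are otherwise comparable, the boost decides the order, and Web managers will engineer for it.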

Now, as The New Yorker would say, how’s that again?  One would have thought, one would claim to have been told, that Google’s rankings concern relevance. That is a human measure; Perkins was in the business of making that kind of judgment. But it’s not human measurement that Google is after. Google is expressing a bias (though it’s a stretch to call it an editorial bias) for a machine preference. This is a hard one for a humanist, or even a human, to swallow, but deal with it: we are well along the path from organic to cyborg to the purely cybernetic. Our role as humans is to serve as the hosts for the devices we use and carry with us.

The big tech companies are always being accused of bias, of course, though usually critics are thinking of bias of the old kind, as when William Randolph Hearst deployed the mass medium of his time to impose his political and moral vision on the world. (Hmmm. Didn’t Jeff Bezos recently purchase The Washington Post?) Some observers claim that Google’s search results are biased in favor of other Google services; see, for example, the attacks by the CEO of Yelp on Google’s “evil” practices. In the book world Amazon does not hide the fact that preferred results from searching its site can be purchased through cooperative advertising. Looking for a thriller based in St. Petersburg? Amazon will find one for you: it’s the one for which the publisher paid for placement.

It’s tempting to take sides in these matters, though I am myself resisting this (one) particular temptation. My objection to what Google does is not that they have biases but that they pretend that they don’t. Higher ranking for encryption? Why, yes: encryption is a good thing. Higher ranking for faster-loading sites? But of course: fast downloads are a good thing. What advocates of these practices don’t quite see is that this is a technological version of Soviet realism. We should have happy workers; we should rate all movies in which anyone smokes a cigarette as NC-17 (I actually heard someone propose this); and if we have an article that is wholly unoriginal, it goes to the top of the list–because it is encrypted and loads fast. It’s bad enough that we have communications that are required to be politically correct, but now they also have to be technologically correct. But don’t forget:  we are doing this because it is good for you.

At the top of the list, though–ranked far higher than load time or encryption or just about anything else–is the greatest bias of all, that in favor of free and open content. Users of Google Book Search and Google Scholar may be forgiven for not being aware that the core service, Google Web Search (that is, the service that people mean when they say “Google”), only finds things where access is unimpeded. Relevance? Quality? Importance?  These “human” values are nothing in comparison to the importance of the open field for Web spiders. The spiders then tell us what is important, inducing more and more creators of content, however reluctantly, to forego the business model that they prefer and that supports their editorial efforts.

What is disturbing about all this is that it is generally not appreciated that Google is making cultural decisions in the name of technological elegance. I doubt Google understands it either; or if they do understand it, they don’t see why it is important. How can anything worthwhile be more important than the speed of the Internet, the handshake between two machines on opposite sides of the planet? Alas, we only have John Connor to protect us. Spoiler alert: we know how this movie ends.


Joseph Esposito


Joe Esposito is a management consultant for the publishing and digital services industries. Joe focuses on organizational strategy and new business development. He is active in both the for-profit and not-for-profit areas.


Discussion

44 Thoughts on "And the New Maxwell Perkins is . . . Google!"

One would have thought, one would claim to have been told, that Google’s rankings concern relevance.

No; Google have always been quite open about the fact that their rankings are a proprietary mix of numerous different metrics (I seem to recall hearing the number 20 bandied about recently), some of which it does not even identify, let alone quantify. Adding encryptedness to that list is wholly in keeping with their previous behaviour.

A generous interpretation would be that Google interprets its mission as guiding users towards the sites that they will find most advantageous — which certainly means, other things being equal, encrypted sites ahead of those where user behaviour can easily be snooped on.

Nevertheless, I do share your more general uneasiness that a corporation like Google can wield such leverage over businesses that it has no direct relationship with. I’m just glad that, on this occasion at least, it’s using that power for good.

On the other hand, one could argue that by directing traffic at sites at all, Google is doing businesses a favour for no direct recompense. That being so, it’s surely up to Google how they choose to do that. Companies that don’t like the free service Google is giving them are not really in a great position to complain.

What is disturbing about all this is that it is generally not appreciated that Google is making cultural decisions in the name of technological elegance. I doubt Google understands it either; or if they do understand it, they don’t see why it is important.

I’m sure they understand the issue a great deal better than you or I; and better understand the implications, too. The problem here isn’t that they don’t know what they’re doing; it’s that they do, but we have our reservations about it.

How can anything worthwhile be more important than the speed of the Internet, the handshake between two machines on opposite sides of the planet?

For many if not most people, security is more important than speed. Wouldn’t you prefer a postal service that delivers sealed mail in two days over one that delivers in one day but opens all the envelopes and reads the contents?

Last I knew, the ranking algorithm had over 160 component algorithms, and that was years ago. The original core algorithm was that a high-ranked site was the link target of a lot of sites which were themselves the link targets of a lot of sites. This was called authority, not relevance. Ranking documents is similar but different. Keep in mind that Google’s R&D budget is several billion dollars a year. (I also have developed search algorithms.)
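The "authority" idea described above is essentially PageRank: a page is important if important pages link to it. A minimal power-iteration sketch (the tiny graph, damping factor, and iteration count are illustrative defaults, not Google’s actual parameters):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration. `links` maps each page
    to the list of pages it links to. Returns a rank per page that
    sums to 1.0. Purely illustrative of the 'authority' idea."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small base rank (the damping term)...
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            share = damping * rank[p]
            if not outs:
                # ...and a dangling page spreads its rank evenly.
                for q in pages:
                    new[q] += share / n
            else:
                # A page passes its rank to the pages it links to.
                for q in outs:
                    new[q] += share / len(outs)
        rank = new
    return rank

# Tiny illustrative graph: "hub" is the link target of both other pages.
links = {"a": ["hub"], "b": ["hub"], "hub": ["a"]}
ranks = pagerank(links)
# "hub" ends up ranked highest, exactly as the authority notion predicts.
```

This is authority, not relevance: nothing in the iteration looks at the content of a page, only at who links to whom.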

The great success of Google’s search algorithm tells us something important about the underlying structure of human thought and language. What that is, is not yet clear.

A great and very important essay, Joe. It brings to mind Google’s role in Creative Commons, which is outlined in the book “Free Ride: How Digital Parasites are Destroying the Culture Business, and How the Culture Business Can Fight Back” by Robert Levine, which I reviewed a couple of years ago here: http://scholarlykitchen.sspnet.org/2012/07/26/review-and-discussion-free-ride-how-digital-parasites-are-destroying-the-culture-business-and-how-the-culture-business-can-fight-back/

As Levine writes:

“Google has as much interest in free online media as General Motors does in cheap gasoline. That’s why the company spends millions of dollars lobbying to weaken copyright.”

In addition to imposing their preferences on the Web and on content providers, Google is also funding initiatives that ease their path to domination. Creative Commons is one of these. It’s worth registering that again in light of this essay. Creative Commons is largely a Google-backed initiative, it seems. It is a clever way for them to weaken copyright — and when you think about it, CC may not make sense without Google: http://scholarlykitchen.sspnet.org/2014/04/02/does-creative-commons-make-sense/

Do you at least agree that Google search is a marvel that we are happy to have? Along the lines of the mass produced car?

I do enjoy their product. However, whether we need to subsidize it or strike a better bargain is another matter. Google’s profits are remarkable, especially in contrast to the content providers’ fates, which are going in the other direction. The divergences are quite possibly related.

Are you suggesting (A) that we do subsidise it and ought not to, or (B) that we don’t but should? For myself, I can’t see that we do or should, but I’d like to understand what your position is.

By the way, we do all tend rather to fall into the trap of assuming Google is the only web-search engine in existence, which is of course nowhere close to true. Even the BBC pervasively does this, for example reporting the recent European “right to be forgotten” ruling as something that means Google have to remove entries from their index — as though Google search is the only way of finding things.

Back in the early days of my web-programming career, Yahoo and AltaVista dominated the web-search market, and it was hard to imagine them being supplanted. Which of course they were — by Google. With that experience in mind, it’s not so hard now to imagine someone new coming along and supplanting Google.

I’m a big fan of DuckDuckGo (http://www.duckduckgo.com). They don’t track you, and unlike Google, they don’t try to tailor your search results based on what you’ve searched for and clicked on in the past. Rather, they try to give you the most relevant results each time. This is important to me as I want objective and new information each time I search, rather than an attempt to reinforce what I already know and believe.

I just entered “Michael Brown” into DuckDuckGo and Google. DuckDuckGo had NOT ONE reference to the Michael Brown shot and killed by the police over the weekend. On Google, four of the first ten results referenced the murdered man, and a list of alternative searches just below the first ten provided links to a host of others.

Timeliness and access to breaking news are indeed a weakness of DuckDuckGo. For some purposes their search results are just as good as Google’s; for others they lag behind. One must remember that the company is fairly new and, unlike Google, doesn’t have billions in revenue to spend on development. One has to weigh Google’s advantages against its negatives in choosing what best fits one’s needs.

Content sites do subsidize Google by paying for services that make it easier for Google to crawl our content and paying to make modifications with every engineering change Google imposes or develops. They don’t pay us to modify our sites. We make it essentially free for Google to do what they want, paying the freight to modify the infrastructure they use as they see fit. That’s a de facto subsidy. I don’t think it’s a sustainable situation.

As for competition, Google may have crossed into that hallowed zone of “too big to fail.”

Content sites do subsidize Google by paying for services that make it easier for Google to crawl our content.

That sounds more like you’re taking advantage of the free service that Google provides you (and which also benefits them). This is not subsidy: it’s symbiosis.

You’re perfectly free to exclude the Googlebot from crawling your site if this “subsidy” is so unfair.
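Concretely, the standard opt-out is a robots.txt file at the site root. The following two lines use the conventional Robots Exclusion Protocol directives that Google’s crawler honors (this is the generic convention, not a claim about any particular site’s configuration), and would ask Googlebot to stay out entirely:

```
User-agent: Googlebot
Disallow: /
```

That so few content sites ever deploy those two lines is itself evidence of how much the "subsidy" is also a benefit.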

I have yet to see any evidence that CC weakens copyright, since it is 100% in accordance with copyright law around the world. It simply gives those who don’t wish to exercise all the rights they get under copyright law an easy means to achieve that aim.

That’s true to the degree that authors’ adoption of CC licensing remains entirely voluntary. But in the scholarly world, authors are under increasingly coercive pressure to adopt it. The functional abdication of copyright is increasingly being promoted as an essential component of OA — according to the DOAJ’s new inclusion criteria, for example, no journal will be listed in the directory if it allows authors to use anything other than CC-BY or CC-BY-NC licenses. Elsewhere, the pressure is less structural and more rhetorical — but still significant and growing.

I can never understand why people find this surprising or objectionable. Researchers are paid by governments and charities to do their work. Why would those governments and charities not dictate that publication is done in the way that maximises the return on their investment? Why would anyone think that the piper doesn’t have the right to call the tune?

First, it should probably be pointed out that your comment reflects a certain myopia common to researchers in well-funded fields. Not every researcher is grant funded, or funded at all. Some fields see little to no funding from charities and governments. The number of researchers working for private industry and governments vastly exceeds those working in academia, and certainly is much greater than the number in academia working under lucrative grant funding. So demands made upon researchers should be put in context.

What is likely confusing for most thinking about this, though, is how liberal those charities and governments are when it comes to the intellectual property rights on the discoveries coming from the research they fund. Nearly every such group leaves IP rights with the researcher and their institution. So the principle that the IP surrounding the story written about the research must be taken away while the research discovery itself can be locked behind a patent paywall seems contradictory, if not altogether absurd. The costs to society of paying for patented prescription drugs alone vastly dwarf the amount of revenue brought in by scholarly journal articles.

I’ve written about this extensively in the past (here, here, and here), but to summarize, there is great confusion between reusing the ideas in a paper and reusing the specific words and images of the paper itself. You can do the former regardless of copyright. Reusing a specific set of words is of limited value (at least in relative terms), so it remains confusing to see so much Sturm und Drang put into this while the research itself remains seemingly untouchable. So many are fighting to allow you to freely read about and copy the words used to describe a new cure for cancer, but seem perfectly happy to leave that cure to be locked up by a pharma company charging exorbitant fees.

Eliminating copyright may serve some positive purposes, but it increases the financial burden put directly on the research community. By employing business models that bring in funds from companies and individuals outside of the research community, we can reduce prices for subscriptions or article processing charges. If we are forced to eliminate these alternative revenue streams, then the researcher is left to carry a heavier load.

You are often quick to accuse people of myopia or confusion when they don’t agree with you.

Not every researcher is grant funded, or funded at all. Some fields see little to no funding from charities and governments. The number of researchers working for private industry and governments vastly exceeds those working in academia.

Your “researchers working for … governments” are different from my “researchers … paid by governments” how?

The issue is not “lucrative grants”. It’s salaries. A researcher, academic or not, funded by a government or a charity — both of which exist to benefit society — is working to benefit society. (Researchers working for private companies are a different matter, of course — we’re not talking about them, and indeed they hardly come into the conversation since they often don’t publish at all.)

On patents, I imagine that you and I have rather similar stances; but that’s not what we’ve been talking about here (which is not particularly surprising given that we’re on a blog that’s all about publishing). I do think patent reform is an important issue, but it’s not one that I feel sufficiently informed about to hold a strong opinion. I would certainly welcome it if you were to push as strongly on that issue as I do on reforming publishing!

I accuse people of myopia when they assume that all fields of research work just like their own field. When someone says, “Researchers are paid by governments and charities to do their work,” they are clearly unaware of the enormous number of researchers not paid by governments or charities to do their work.

Many fields receive no government or charity funding. Many researchers are employees of private universities, and are paid a salary and have their research costs paid by those private institutions. As one example, those who do research in the humanities and social sciences require the retention of copyright to their work because very often they intend to turn that work into popular books. No copyright, no book. And personally, I think the world is a better place with books based on academic research available to the general public.

I exclude government employed researchers because, at least here in the US, their research papers are not subject to copyright and have long been in the public domain, so they are not relevant to increasing demands for CC licenses.

I do think patent reform is an important issue, but it’s not one that I feel sufficiently informed about to hold a strong opinion. I would certainly welcome it if you were to push as strongly on that issue as I do on reforming publishing!

I’ve been harping on it for years (see the articles linked above). Really the way to do things would be to revamp the very notion of the research grant and turn it into a contract between the researcher and the funding body. The researcher would have to sign on as a work-for-hire employee for the funder, and then any and all IP generated while under that contract would belong to the funder.

I can’t see researchers or research institutions ever agreeing to this. It would mean reduced revenues for institutions and researchers that successfully do technology transfer (over $100M per year to the cash-strapped UC system as one example). It would mean an end to researchers starting their own spin-off companies to develop their research into products. It would also potentially limit the development of new drugs from funded research as most pharma companies won’t bother with the expensive development process unless they can secure exclusive rights to the resulting product.

There’s something of a NIMBY mindset one encounters around this subject (“Not In My Back Yard”), where the philosophy of access to research is really important up to the point where it harms one’s own potential earnings, and there a line is drawn in the sand.

I’m still not clear on how “researchers working for … governments” are different from “researchers … paid by governments”, but let it pass.

Really, the issue isn’t who funds research, it’s that someone funds it — if not with grants and cyclotrons, then with salaries. Someone pays researchers to create new knowledge, yet one still sometimes hears researchers talking as though it’s a terrible shock to find that they are supposed to benefit the wider world, and behaving as though the world owes them not just a living, but the exact living that they happen to prefer, for the benefit of them and a few of their close friends.

Someone pays researchers to research, whether government, charity, private university or what have you. That someone has a legitimate moral right to tell those researchers how to do the job they pay them to do. That’s all.

Keep up the good work regarding patents.

But clearly the universities, both public and private, have spoken. They want to profit from locking up the IP they discover with patents, rather than freely sharing it with the world. They are exercising their moral right as you suggest and clearly they do not agree with you that the end goal is to benefit the wider world (or at least that giving everything away is not the best route to doing so).

There may be nothing we can do about private universities. Public universities, to the extent that they are publicly funded, exist to benefit the public. They can and should be required to deploy their outputs in such a way as to maximally benefit that public. I hope that this much, at least, is uncontroversial.

(The question of whether giving everything away is the best route to benefit the public is interesting, but a side-question deserving much more attention than I can give it right now. Maybe you’d find it interesting to blog separately about this some day. I suspect the conclusions might be different for copyrights and patents.)

But the public universities and their researchers have spoken, and continue to file patents and sell the rights to them. Similarly, the US government has spoken, and passed the Bayh-Dole Act to encourage this behavior. The RCUK, the Wellcome Trust, and HHMI have all spoken and granted IP rights to their funded researchers.

(The question of whether giving everything away is the best route to benefit the public is interesting, but a side-question deserving much more attention than I can give it right now. Maybe you’d find it interesting to blog separately about this some day. I suspect the conclusions might be different for copyrights and patents.)

I wrote about this here:
http://scholarlykitchen.sspnet.org/2013/08/06/is-access-to-the-research-paper-the-same-thing-as-access-to-the-research-results/

It may be true that some authors are under pressure to adopt OA, but this misses my point entirely. It is legal to buy a motor car, but say that, increasingly, people choose not to. That is not in any way weakening the legality of buying cars; it simply reduces the demand for something that is legal. To say that OA, where authors choose not to exercise rights they could have, is weakening copyright is nonsense in the same way. OA potentially weakens the publishing companies that have built their business model on copyright, just as if people stopped buying cars it would weaken the car industry. So please drop this nonsense about OA weakening copyright. It potentially weakens copyright-based publishing industries. If those publishers cannot adapt their business model to the new reality, they will fail – but that’s the market at work!

I don’t share the view of many that CC or OA is weakening copyright. But your comment about “the market at work” is wrong. Both CC and OA exist outside the market: CC with tax-free donations (and no need to make its way in the marketplace) and OA, in many instances, with mandates and funding from governments. I am far from a free-market purist, but it is truly a stretch to call these things market forces.

Tax-deductible donation is one way the government encourages people to invest in certain things. It’s all part of the current marketplace. Government does this in all sorts of ways, e.g., imposing car speed limits, which in turn affects sales of high-performance cars to some extent. The market adjusts accordingly. What publishers have to do is adjust their offerings and business models to the new reality, and stop complaining.

You say “Google Web Search (that is, the service that people mean when they say “Google”), only finds things where access is unimpeded”.

You’re complaining that Google only indexes what it has access to. Is that correct?

In fact, Google search does understand licensing. When using image search, you can essentially choose to restrict the images to only those published under a certain license (be that CC-BY-SA or CC-BY-NC or whatever).

I assumed Google’s upranking of https sites was basically a logical endpoint to the whole NSA surveillance business. There’s a bunch of engineers saying: unencrypted traffic can be intercepted and used, so end-to-end encryption solves that problem. There’s a much bigger point here about the protection of the trail we leave as we transit the Internet. It touches all sorts of points of law, and it seems to me that our laws are increasingly inadequate to the task of providing a behavioural framework for society to operate within, to the benefit of all. Google is acting as a nation state…

I agree, up to the last sentence. Google does not create the condition you complain of. That is the Internet, which is somewhat analogous to a global nation state.

Possibly, but the concept is still meaningful because nation states have certain central properties which the Internet might be seen to have.

It is worth noting that, given the complexity of the search algorithm, the preference for encrypted sites may be quite small in the ranking of sites, just as with Google Scholar’s use of citations in ranking journal articles.

I used to be a little crabby about an algorithm based on the number of other sites that linked to the ranked site, because I teach, and that criterion always put Wikipedia at the top of the list; my students didn’t seem to know that being at the top did not reflect writing quality or level of expertise. But then I read Eli Pariser’s “The Filter Bubble” and really got nervous.

This post, which was great, just adds to my anxiety that what we learn from the Web is increasingly mediated by Google, which may or may not have the public’s interest at heart, no matter what the company’s rhetoric (although, admittedly, in this case encryption seems an important factor to include as a ranking criterion).

And given Google’s dominance–last figure I saw gave it 67% of all searches conducted–I can’t imagine how to make the process of searching for information less skewed in favor of Google’s interests, since I do believe that’s the primary criterion for ranking factors, not the goal of making high quality information more accessible to bigger and bigger numbers of people while maintaining their privacy on the Web.

A big thank you to Kent Anderson for access to his excellent review of “Free Ride,” which I immediately ordered, god help me, from another scary giant, Amazon. And may I recommend in return Astra Taylor’s “The People’s Platform,” which also gives the lie to the notion of the Internet allowing artists more access to the public and thereby helping them finance their work. The access is there, the money is not, because apparently art, like information, should be free.

Just for yucks, I quote here Taylor’s citation of Larry Page and Sergey Brin on the subject of what a really great search engine should do, when they were first starting out: “We expect the advertising-funded search engines will be inherently biased toward the advertisers and away from the needs of the consumers…The better the search engine is, the fewer advertisements will be needed for the consumer to find what they want…We believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.” Amen to that.

Given the structure of the search algorithm I have to doubt that the primary criterion for ranking factors is Google’s interests. Unless you mean the paid sites they show, which are clearly labeled as such. Advertising per se is not a wrong.

But advertising is indicative of where Google’s true interests lie. They are an advertising company, period. Advertising revenue makes up nearly the entirety of what they bring in each year, and their decisions, as a publicly held company, are increasingly moving toward favoring their real customers (advertisers) over their users. As the famous adage goes, if you’re not paying for the service, you’re not the customer, you’re the product being sold.

Changes in Google’s design over the years show this clearly; one example here:
http://arstechnica.com/business/2013/10/new-banner-ads-push-actual-google-results-to-bottom-12-of-the-screen/
Would you argue that it is in user interests to have actual search results increasingly de-emphasized and pushed off of the results page in favor of paid advertisements?

I’m not fond of the progressive advert-saturation of Google result pages. But not to fear: search engines operate in a market. When Google reduce their value by enough, some competitor will benefit.

It’s a nice thought, but as history has shown us, dislodging a monopoly is often a very difficult and slow process.

I have no opinion as to what is in the user’s interest in this case, not having studied it. The ads may well be more helpful than the search results, or maybe not. My point was directed specifically to the search algorithm, not the page design, which is a different issue. The claim was that somehow the search algorithm per se favored Google’s interests and I doubt that, but I am quite prepared to listen to evidence to the contrary. It is all about the math.

Since none of us are privy to the algorithm, it’s hard to know exactly what it does. But we do know that it favors Google’s own products (Google+, and its travel services and the like) over competitors, despite those competitors often offering better information. Would it surprise anyone if we found out that those companies that spend lots of money advertising with Google see their results given a boost in ranking?

Anything is possible, but possibility is not evidence, and I have seen no evidence of this. What kind of boost are we talking about? Putting them on the first page of hits would be relatively easy to detect, perhaps even obvious. It would also show up in their promotional literature, or else why do it? If the advertisers do not know about it, then it is degrading the algorithm for nothing.

Even the FTC found that Google doesn’t play fair; they just chose not to file a complaint against them:
http://www.huffingtonpost.com/2013/01/03/google-antitrust-settlement-ftc_n_2404721.html

The matter continues in Europe though, and many continue to present evidence of Google skewing search results toward their own properties (and favored partner companies):
http://techcrunch.com/2014/07/09/yelp-google-anti-trust/
http://www.precursorblog.com/?q=content/new-evidence-google-search-bias-its-relevant-doj-investigation-google-yahoo-ad-deal
http://www.infowars.com/google-allegedly-manipulating-search-results-favoring-big-brands-over-small-businesses/

Google rose to the top of the search-engine business because its ranking algorithm produced the most useful results. Tamper with the rankings in a way that produces less useful results, and some users will try a search engine from the other third of the market. Wanna bet Google watches for this effect?

What a company must do to rise to a monopoly level dominator, and what that company has the freedom to do once it completely dominates a market, are quite different things.

Great writing. I’m ready for Joe’s opinion on how this all relates to the age old “why Johnny can’t write” meme. From one cyborg to another, huzzah!

Comments are closed.