Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should. – Jeff Goldblum, Jurassic Park
We’re at the end of an era where we’ve all drunk the Silicon Valley Kool-Aid about how everything digital is a revolution that is good for society, and we’re finally starting to take a serious look at where this particular flavor of connectedness has led us. Facebook, Google, and the many other digital platforms do indeed offer us much good, but their business models, mostly built around surveillance and ad selling, are having serious consequences. Between Altmetrics and Scholarly Collaboration Networks (SCNs), academia has started down the same path as the rest of society. Is this really where we want to go?
A recent article in the London Review of Books does a superb job of delineating exactly what Facebook is, and I highly recommend reading the whole (long) article. Its key points focus on Facebook’s business model, and how that model drives the way the site works:
What this means is that even more than it is in the advertising business, Facebook is in the surveillance business. Facebook, in fact, is the biggest surveillance-based enterprise in the history of mankind. It knows far, far more about you than the most intrusive government has ever known about its citizens. It’s amazing that people haven’t really understood this about the company. I’ve spent time thinking about Facebook, and the thing I keep coming back to is that its users don’t realise what it is the company does. What Facebook does is watch you, and then use what it knows about you and your behaviour to sell ads. I’m not sure there has ever been a more complete disconnect between what a company says it does – ‘connect’, ‘build communities’ – and the commercial reality.
The need to serve their true, paying customers — the advertisers — means that what you see on this sort of social network is not driven by an algorithm looking for the things that are important to you, but by the content most likely to serve the needs of those advertisers:
Note that the company’s knowledge about its users isn’t used merely to target ads but to shape the flow of news to them. Since there is so much content posted on the site, the algorithms used to filter and direct that content are the thing that determines what you see: people think their news feed is largely to do with their friends and interests, and it sort of is, with the crucial proviso that it is their friends and interests as mediated by the commercial interests of Facebook. Your eyes are directed towards the place where they are most valuable for Facebook.
This means that fraudulent content isn’t really a problem for Facebook, as long as it drives ad sales. The company is only starting to take the issue seriously because of growing public recognition of it, and because a loss of trust could reduce the amount of time people spend on the site viewing advertisements.
Google similarly offers results that favor its own business purposes over the best possible answers to the queries it receives. Microsoft researcher danah boyd has been tracking how this creates incentives that actively change content:
They [hackers] also learned how to game social media, manipulate its algorithms, and mess with the incentive structure of both old and new media enterprises. They weren’t alone. I watched teenagers throw brand names and Buzzfeed links into their Facebook posts to increase the likelihood that their friends would see their posts in their News Feed.
As artificial intelligence (AI) continues to develop, boyd believes things are going to get much worse:
If you don’t have a structure in place for strategically grappling with how those with an agenda might try to route around your best laid plans, you’re vulnerable. This isn’t about accidental or natural content. It’s not even about culturally biased data. This is about strategically gamified content injected into systems by people who are trying to guess what you’ll do.
With that in mind, perhaps we need to rethink the growing reliance on these advertising-based ventures in scholarly communication. Although I’m not aware of any institution or funder using Altmetric scores as a serious driver of hiring, promotion, or funding decisions, their profile continues to rise. Recent articles on The Scholarly Kitchen have taken a hard look at the use of citations as the basis for metrics (here and here). While both authors ask important questions about just what a citation means, it is at least clear that a citation means something: some measure of effect felt within the scholarly author community. Can the same be said for a Facebook “like” if its core purpose is driving advertising revenue? If a retweet is not an endorsement, then what is it?
The growing use of SCNs, copyright issues aside, is equally troubling. The current business models available to networks that hope to survive as more than a feature of some other company’s product are all based around surveillance and advertising. ResearchGate and Academia.edu want to watch what users do and use that data to drive ad sales (or to sell that surveillance data to anyone interested, if such a market exists). As is the case with Facebook, this creates incentives that are at odds with the best interests of their users, who, once again, should not be confused with either site’s real customers.
Do you want your scholarly reading material chosen to serve advertisers’ needs? We know Twitter and Facebook have been used to target particular populations and sway their opinions. Will we end up gamifying scholarly articles, working mentions of particular products or ideas into our papers in order to increase our likelihood of visibility and impact?
None of this sounds like a very good plan for the future of scholarly communication.
How else, then, can we get the communication and other benefits of digital networks and tools without falling prey to serving someone else’s agenda? Jonathan Margolis, writing in the Financial Times, suggests that the solution could be a move toward paid search or social network products, making the user the actual customer whose needs are to be met by the service. Academia.edu (always worth pointing out: not a true .edu but a for-profit business that bought the URL from someone else) has tried several times to create a “freemium” service, where users can pay to upgrade what they get, even including content apparently licensed from Encyclopedia Britannica. Researchers, at least so far, don’t seem all that impressed with the offering. This fits with Margolis’ conclusion that we’re simply not there yet:
…even if private searching and social media would not have enough momentum to succeed now, if the big beasts of the internet continue to dial up the creepiness, the idea’s day may yet come.
I suspect that the allure of “free”, essentially something-for-nothing, remains difficult to overcome, particularly because we’ve been so conditioned to expect it from everything online. Since it is increasingly clear that this is illusory, and that there are costs for all these “free” services, the question is whether bending to the will of advertisers is a price the scholarly world is willing to pay. The subscription model may not be the ultimate answer (though it has clearly become the choice du jour for the music and film/TV industries), but we need to be aware of the consequences of relying on the surveillance/advertising model, and the impact it can have on our perception of the truth.