As we move further into a socially networked world, we learn more about both the value and the shortcomings of social technology approaches. From the crowdsourced investigation into the Boston Marathon bombing to restaurant reviews on Yelp, behavioral norms are solidifying, and a growing body of evidence is presented for our analysis. Framed through the lens of scholarly publishing, a picture emerges of the inherent conflicts between egalitarianism and expertise.
Despite the often overheated rhetoric of his argument, anyone working in scholarly publishing for the last decade will recognize a grain of truth in Evgeny Morozov’s diatribes against what he calls “solutionism”:
. . . an intellectual pathology that recognizes problems as problems based on just one criterion: whether they are “solvable” with a nice and clean technological solution at our disposal. . . . Given Silicon Valley’s digital hammers, all problems start looking like nails, and all solutions like apps.
I’m not sure how much of this approach reflects an ingrained belief system so much as a business strategy. But ask any journal publisher, and they’ll likely tell you about the weekly, if not daily, approaches from would-be technology entrepreneurs: here’s my new technology that’s going to radically remake your business, please either buy it, or give me free access to all of your assets and customers so I can make my fortune.
Indeed, there are exciting new technologies on the rise, but it’s often difficult to separate the wheat from the chaff. The notion of “change for the sake of change” can be fraught with peril for the not-for-profit press. Many simply don’t have the funds to make the sorts of constant mistakes a company like Google can afford (or more importantly, they’re part of research institutions and societies that can instead put those funds to more worthy causes, such as funding research and curing disease). As such, it is vital to carefully pick and choose the most promising technological revolutions.
Part of the technology sales pitch for the last several years has been a seemingly unlimited faith in social approaches to nearly every aspect of academic research and publishing. But as Jaron Lanier recently said, “being an absolutist is a certain way to become a failed technologist.” A better understanding of crowdsourcing and other social approaches can help us better target their implementation.
The two common functions offered by crowdsourcing are the distribution of large sets of work, and the democratization of opinion gathering.
Cognitive Surplus In Action
Done right, it’s a fantastic use of what Clay Shirky calls “cognitive surplus” – a souped-up phrase for “time when we don’t have something better to do”.
Cognitive surplus has given us Wikipedia, written by thousands of people who might have done nothing more than correct a grammatical error or insert a fact, yet have created a towering resource. It’s given us GalaxyZoo, in which non-expert users have classified hundreds of galaxies so that professional astronomers don’t have to, freeing them to figure out how galaxies evolve.
Cognitive surplus, or even surplus computing cycles, can offer value in scientific research, as projects like GalaxyZoo, Folding@home and SETI@home readily demonstrate. But for the most part, this is a brute force approach, churning through the drudgery of large data sets, the sort of work that Sydney Brenner famously quipped should be parceled out to prisoners, with the most work given to the worst offenders.
That’s an important limitation to recognize. These approaches aren’t about creating new insights or making intellectual leaps — they’re about sifting through large amounts of busywork so the real work of scientists can begin.
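The brute force approach described above can be reduced to a simple pattern: many volunteers each label a subset of objects, and the project aggregates their answers, most simply by majority vote. The data and function names below are hypothetical, purely for illustration of that aggregation step.

```python
from collections import Counter

def majority_vote(labels):
    """Return the most common label among a set of volunteer classifications."""
    return Counter(labels).most_common(1)[0][0]

# Hypothetical volunteer classifications for two sky objects
classifications = {
    "galaxy_001": ["spiral", "spiral", "elliptical", "spiral"],
    "galaxy_002": ["elliptical", "elliptical", "spiral"],
}

# Collapse many non-expert votes into one consensus label per object
consensus = {gid: majority_vote(votes) for gid, votes in classifications.items()}
print(consensus)  # {'galaxy_001': 'spiral', 'galaxy_002': 'elliptical'}
```

Note that nothing in this sketch evaluates whether a label is *correct*; it only measures agreement, which is exactly the limitation at issue.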
We received a painful lesson in this limitation in Boston, where well-intentioned but ultimately fruitless crime-solving efforts by web communities like Reddit and 4Chan tried to substitute crowdsourcing for expertise. The groups produced noise rather than signal, and ended up going down dark avenues of mob mentality. The case was, perhaps unsurprisingly, solved by experts using specialized tools and approaches rather than the brute force approach of the online public.
That’s an apt metaphor for scholarly research. Science doesn’t work in an egalitarian fashion. The questions in Boston, much like most scientific questions, required specialized training, experience, and expertise. You can’t crowdsource a Richard Feynman or a Barbara McClintock. We rely on these sorts of brilliant individuals to evaluate and understand what’s truly important and meaningful.
The Voice of the Crowd
The other great benefit of crowdsourcing comes from letting anyone and everyone have a voice. We have quickly come to rely on user reviews for nearly everything, from purchasing a washing machine to finding a good restaurant in a new town (though hopefully you haven’t had much use for this sort of review).
The same questions of expertise versus democratization occur here as well. We rely on experts for things like peer review. Is this paper accurate? Is it meaningful and useful? Study sections made up of experts often determine distribution of research grant funds. Are these questions that instead should be crowdsourced and put to a popular vote? Should a bottom-up approach replace the top-down system in place?
Many altmetrics approaches are based on measuring the popularity of articles rather than their quality — how many Facebook likes the article gathered, rather than whether it drove further science. Kickstarter is being bandied about as a model for science funding. Peer review has become something of a whipping boy, and we regularly hear of plans to scrap it altogether in favor of letting the crowd have its say post-publication.
Are these sorts of approaches really appropriate for science? Science is not about what’s popular or well-liked. It’s about what’s accurate and true. Being nice or fair doesn’t enter into it, and truth is not subject to a vote.
There are many people out there who believe that vaccines are dangerous — what happens to my tenure bid if this group votes my pro-vaccine paper down? Would anyone dare work on a controversial topic in a system where following and conformity are the favored results?
Aside from the obvious questions of gaming the system, and the clear inefficiencies and time sinks created, the behavioral norms that have emerged from social rating systems offer fascinating glimpses into the psychology of participation, and into the potential pitfalls of their use in science:
It is precisely in this vast range of online activity where the value and interest lie for researchers investigating what is not actually known as “criticism” but, rather, “electronic word of mouth.” The trove of data generated from online reviews, the thinking goes, may offer quantitative insight into a perpetually elusive dynamic: the formation of judgments, the expression of preferences, the mechanics of taste. The results, filled with subtle biases and conformity effects, are not always pretty.
Should taste and opinion enter into the official context of how a researcher’s work is judged? I’m not sure they can be entirely avoided, but our current system (in the hands of a good editor) opts for a small number of informed opinions over a potentially large number of less-informed opinions. Finding the one right peer reviewer for a paper is more revelatory than finding lots and lots of reviewers without the same expertise.
Rating and review systems show common phenomena, things like “authority signaling” and “petty despotism.” Cults of personality form, where reviews are based more on community than on quality. The system itself begins to hold a major influence, as early reviews often become the default authority on a subject, and over time, people respond more to the reviews than to the object being reviewed.
Think it doesn’t happen in science? Here’s a recent example from PLoS ONE, where a discussion of the vocabulary and tone of the first comment runs twice the length of the actual comments on the paper itself.
All of these behaviors are fascinating and provide ample fodder for the next generation of social scientists. But they can also distort the very goals these systems are trying to achieve.
The key question, then, is whether egalitarianism is really the right approach for areas where we are striving for authoritative expertise. Can crowdsourcing drive excellence?
We must always separate the technology from the sales pitch. Finding the right tool for the right job is essential to success, and a key part of the publisher’s role. Social approaches can offer enormous value, but they must occur in the right context for the results offered. It may not seem fair to everyone, but not everything in this world should be put to a vote — even if we have the technology to make it so.