Are journal editors an anachronism? A throwback to an age of print publishing that no longer exists? An institution, like the British monarchy, that continues to exist more for symbolism than for functionality? An institution whose purpose is to perpetuate an unfair power relationship with authors and readers?
If you’ve been reading the Guardian recently, you’ll have noted a recurring revolutionary theme: publishing must be taken back from editors and the institutions that help to maintain their social standing — journals and publishers — and returned rightfully to the people.
Luckily, this kind of revolution doesn’t require a call to arms, but the simple use of the disruptive tools of the Internet, which are available to us all. As David Colquhoun proposed recently:
Publish your paper yourself on the web and open the comments. This sort of post-publication review would reduce costs enormously, and the results would be open for anyone to read without paying. It would also destroy the hegemony of half a dozen high-status journals.
Would self-publishing, along with the defenestration of a few prized editors, really do the trick to revolutionize science? Or does Colquhoun fail to see the value of editors and their journals? In this blog post, I'm going to argue why we still need them both — perhaps now more than ever.
Information overload isn’t just a receiver problem — We all understand the attention economy pretty well: too much to read, too little time, too many distractions. Many call this situation “information overload,” and the proposed solution has, almost entirely, focused on the receiving end of the equation — the reader.
Unfortunately, this fixation on the receiver, promoted by people like Clay Shirky in catchy phrases like, “It’s not information overload. It’s filter failure,” only reinforces this one-sided view.
Indeed, the very phrase "information overload" implies that both the problem and the solution rest with the receiver. For this reason, I will refer instead to "hyperinformation": the overabundance of information itself.
By focusing on the receiver as the locus of the problem and its solution, we run into some publishing paradoxes that cannot be explained, such as:
- Why do authors still use the journal system when they can reach readers directly and do so much faster and cheaper themselves?
- At a time when we can disaggregate all of the functions of journals into service components, why do journals still persist in a mostly unaltered state?
- Why have repository publishing and overlay journals failed to get beyond the conceptual stage?
- Why has post-publication review also failed to gain serious uptake?
I will argue that hyperinformation is a problem of quality signaling between senders (authors) and receivers (readers), and contend that the role of editors in mediating these signals becomes enhanced — not diminished — in an expanding world of information.
The Role of Journals in a Quality Signaling Market — First, let’s think of scholarly communication not as a linear transfer of information from sender to receiver, but as a two-sided market with authors at one end, readers on the other, and the publisher forming the intermediary between the two.
Second, let’s understand that authors are also readers, and that many readers are also authors.
Now let’s detail the assumptions about what we already know of this system:
- There is too much to read and too little time.
- Quality is heterogeneous across papers. That is, there are a few excellent papers, lots of good papers, and a huge number of mediocre and lousy papers.
- Authors have more knowledge about the quality of their paper than potential readers. This is known as “asymmetric information.”
- Readers are active agents that routinely seek out relevant, high-quality sources of information. They are not passive receivers.
- Authors want to be read and acknowledged for their contribution to the scholarly record.
- Building a reputation for publishing quality research takes a long time, and yet
- Most authors appear only once in the scholarly record.
If we agree with all of the stated assumptions, then we can derive the following reader behaviors:
- In a large market of academic papers, where quality is heterogeneous, information about quality is asymmetric, and the vast majority of authors appear only once, active readers will seek out quality signals unrelated to authorship to help identify what is worth their time.
- A reader is not always able to identify quality and often turns to experts, peers, or “institutions of trust” to guide them on what is worth reading.
- Readers will develop heuristics based on previous experiences with an information brand. These brands may be a journal, an author (if known), or an author affiliation (e.g., membership in a particular lab or university).
- Quality signals may be amplified by experts like editors.
- And last, as the market gets larger, quality signals become more important.
Now let’s take our assumptions and derive the following author behaviors:
- Authors understand what quality signals and heuristics readers use to identify content. Remember that authors are readers, too.
- In attempting to increase the attention given to their papers, authors will seek out certification for their articles in order to send out quality signals to potential readers. This decision is made in spite of the fact that the certification process is slow, costly, and diverts the author from publishing more articles.
- Most authors have a good understanding of the quality of their article and will send it to journals of commensurate quality (with some exceptions noted). This saves them the time of multiple review and rejection.
- High-status authors — those who have established a quality brand over time — can bypass the certification process and communicate directly with readers. They do not require the imprimatur of the journal or publisher to convey their quality signal.
Here is a graphic representation of the signaling market:
From this two-sided signaling market, it's not difficult to understand why the journal is still at the center of the scholarly communication process even though digital publishing has allowed its functions to become — in theory — disaggregated. The journal remains the organizing structure within the quality signaling market.
Let me state this more declaratively:
The principal function of the journal is to organize and mediate quality signaling within the author-reader market. The role of the editor is simply to make this happen.
From this signaling perspective, the essential function of peer review is not to improve an article, an argument maintained by journal publishers, but to stratify the literature into tiers of journals that strengthen quality signals for potential readers and therefore make the information seeking process more efficient. The greater the stratification, the more efficient the system.
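The efficiency claim can be illustrated with a toy simulation. This is an assumption-laden sketch, not data: paper quality, the reader's quality threshold, the noise in peer review's sorting, and all numeric parameters are invented for illustration. It compares how many papers a reader must sample to find a top-decile paper when browsing the whole literature versus starting with a top-tier journal produced by (imperfect) review.

```python
import random

random.seed(0)

N_PAPERS = 10_000
TOP_QUANTILE = 0.9  # the reader wants a paper in the top 10% by quality

# Hypothetical quality scores for the literature as a whole.
papers = [random.random() for _ in range(N_PAPERS)]
threshold = sorted(papers)[int(TOP_QUANTILE * N_PAPERS)]

def samples_until_hit(pool, trials=2_000):
    """Average number of papers a reader samples from `pool`
    before finding one at or above the quality threshold."""
    total = 0
    for _ in range(trials):
        count = 0
        while True:
            count += 1
            if random.choice(pool) >= threshold:
                break
        total += count
    return total / trials

# Unstratified: the reader samples from the entire literature.
flat_cost = samples_until_hit(papers)

# Stratified: peer review sorts papers into tiers by quality,
# observed with noise (review is imperfect); the reader starts
# with the top-tier journals.
noisy_rank = sorted(papers, key=lambda q: q + random.gauss(0, 0.2),
                    reverse=True)
top_tier = noisy_rank[: N_PAPERS // 10]
tiered_cost = samples_until_hit(top_tier)

print(f"papers sampled without stratification: {flat_cost:.1f}")
print(f"papers sampled starting in the top tier: {tiered_cost:.1f}")
```

Even with noisy review, concentrating high-quality papers into a small tier cuts the reader's expected search cost several-fold, which is the sense in which stratification makes information seeking more efficient.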
The quality signaling model explains why repository publishing has failed to get beyond the conceptual stage: repositories are unable to generate clear quality signals for potential readers.
The signaling model can explain particular paradoxes, such as why high-status individuals are able to bypass journal publishing and reach readers directly, by, say, posting a manuscript on a preprint server. This is not evidence that such a system of publishing can work for all scholars — as some proponents of preprint services believe — only that it can work for a narrow group of scholars who have already built a reputation for quality.
And last, as community-generated metrics take time to aggregate (citations, comments, and to a lesser extent downloads), they are of little help to guide readers to what has been recently published. They also tend to provide little additional information beyond the quality brand of the journal.
In conclusion, the first step in addressing hyperinformation is to stop thinking of it as a receiver problem and start thinking of it as a market problem in which authors compete for the limited attention of readers.
If we view journals as mediators of quality signals in a crowded information space — a space that is getting a little more crowded each year — the future of the journal presents many more opportunities than when it is seen as a mechanism to control the distribution of scientific research.
(Adapted from a talk given at the OUP Journals Day, 15 September 2011)