The Internet was supposed to revolutionize publishing, and yet we still have journals, still have publishers (only fewer of them), and peer-review appears to be alive and well. There have been major developments as content moves online, but these incremental changes look more like the horseless carriage and less like the French Revolution. There has been no Storming of the Bastille in scholarly publishing.
The question I’d like to tackle in this blog post is why academics continue to spend their time, energy, and frustration getting their work published in reputable journals when they could distribute it more quickly and widely in a networked environment.
There is no easy answer to this question, and before I attempt one, we have to go back — way back — to 1970.
In 1970, economist George Akerlof published an article entitled “The Market for Lemons,” a paper that would later help earn him a Nobel Prize in Economics. The article was about the used car market and what happens when buyers and sellers have different information. In other words, it was about markets with asymmetric information.
Economics doesn’t always look like the real world. Classical economics is based upon perfect information, which means that everyone has a complete and current picture of what everyone else is doing. It’s a nice idea for creating simple models, but not very realistic. Some authors have tried to extend this classical notion to the scholarly publishing market, considering articles to be commodities that can be traded on the free market. As Kent Anderson recently wrote, this analogy is about as helpful as a train wreck.
The market idea, however, was on the right track, and we can consider the market for scholarly articles as having the following five properties:
- The market is huge (published articles number in the tens of millions)
- There is immense variability in the quality of articles
- Most authors (as sellers) are not in the market long enough to build a reputation for quality
- Readers (as consumers) have a limited amount of attention, and
- The true quality of an article cannot be known until a paper has been read
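The lemons dynamic these properties create can be sketched in a few lines of Python. Everything here is invented for illustration: the uniform quality distribution, the acceptance bar, and the assumption that a journal filters perfectly are all simplifications, not a claim about how real peer review behaves.

```python
import random

random.seed(42)

# Each article has a true quality in [0, 1] that the reader
# cannot observe until after reading it (asymmetric information).
articles = [random.random() for _ in range(10_000)]

# A "journal" acts as a certifying institution: it accepts only
# articles above some quality bar, broadcasting a signal the
# reader can use without reading everything.
ACCEPT_BAR = 0.8
accepted = [q for q in articles if q >= ACCEPT_BAR]

avg_all = sum(articles) / len(articles)
avg_accepted = sum(accepted) / len(accepted)

print(f"Expected quality, reading at random:       {avg_all:.2f}")
print(f"Expected quality, reading accepted only:   {avg_accepted:.2f}")
```

With a limited attention budget, the reader who follows the journal signal reads from a pool whose expected quality is far higher than the market average — which is exactly why the signal becomes a heuristic worth trusting.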
Still with me? Let’s now look at what this means for readers and authors. As a reader, I’m interested in reading relevant, high-quality articles. Because I often do not know the reputation of the author, I seek out quality signals that inform me whether an article is likely worth my while reading. When these quality signals prove to be correct time and again, they become heuristics, or simple rules-of-thumb. One of the strongest signals of quality for the reader is the journal in which the article is published.
As an author, I’m conscious that potential readers may not know me from Adam, and since there are so many trashy articles out there, I want to signal that mine is better. I’m willing therefore to invest time and resources — time and resources that could be put into doing more writing — into certifying that my work is high-quality. The strongest signal of quality for an author is the journal that accepts and publishes the article.
As I wrote in a previous post, there is evidence that some top authors — those who have already built a reputation for quality — are bypassing the lengthy peer-review process and seeking other forms of distribution. These are individuals who broadcast their own quality signal, and therefore do not require another institution to do it for them.
Returning to the question posed earlier, why academics continue to seek traditional publication outlets for their work, we can start thinking of publishers not as vehicles for the distribution and dissemination of scholarly articles but as institutions that certify quality and broadcast those quality signals to a market of potential readers. This system is not intended to be fair and democratic, but it saves the time of the reader and helps build consensus in science.
For those who feel that this perpetuates hegemony, let them eat cake.