One of the most despised characters in movies is Jar Jar Binks, who appeared in the so-called prequels, Episodes I, II, and III, delivered as the fourth, fifth, and sixth films in the Star Wars franchise. Binks dominated the disappointing first prequel, “The Phantom Menace.”
That movie had other problems — poor pacing, a boring premise, weird new mythology (midi-chlorians), a wooden child actor, and a director who was clearly indulging himself and infantilizing his audience.
All of these problems might have been eliminated if there had been a strong editor in charge, as there was during the first three Star Wars movies. That editor was George Lucas’ wife at the time. She won an Oscar for her editing of the first (fourth, for you revisionists out there) film. The couple divorced after the completion of “Return of the Jedi.”
This speculation is underscored by viewing a version of “The Phantom Menace” known as “The Phantom Edit,” where an unknown Hollywood filmmaker made a new cut of the movie, seamlessly eliminating long sections of fluff, ultimately issuing it through back-channels. “The Phantom Edit” is a tight, exciting, engaging movie, and a strong contribution to the Star Wars universe. If “The Phantom Menace” had been released theatrically in this edited form, audiences would not have been disenchanted and disappointed. In fact, they would have been thrilled.
Lucas’ lack of constraints showed up years before, as he issued versions of the original movies containing new digital creatures and enhanced explosions, particularly of the Death Star. Gone was the explosion in the original, with a unique violence and impact generated purely by physical effects. Instead, a less visceral and more elegant digital explosion took its place, cheapening the impact by upgrading the aesthetic, complete with a lovely ring of debris jettisoning out.
We are currently experiencing the same problem with “The Hobbit,” which is one book stretched to three movies, as opposed to its predecessor, “The Lord of the Rings,” which condensed three large books into three large films, and even then left a great deal of good material (“Mouth of Sauron” anyone?) on the cutting room floor. Without editorial controls, artists with power can drift into the self-indulgent.
We now face a lack of constraints in publishing. Once upon a time, printing provided a natural constraint, as editors had to decide what merited the limited space available. Now, with no obvious boundaries, we have new possibilities emerging in response — mega-journals, cascading titles, data supplements, and more papers in existing journals, many of them published online-only.
But is this response good? Are we eliminating constraints thoughtfully?
Some claim that selectivity through scarcity became a cultural norm. Others believe the cultural norm exists, and scarcity merely became mistaken for it. In either case, the cultural norm was intermingled with physical constraints. With those constraints vanishing or gone, we’re forced to consider our cultural norms anew.
Are we willing to think again about replacing physical limitations with new constraints, like policies, page budgets, article budgets, or other limits that have nothing to do with the technologies of production but rather are informed by our desire for selectivity and our readers’ needs for relevance and quality?
Some artists in other media are embracing constraints because constraints create better experiences and higher-quality outputs. Television is one area where this is occurring, as new shorter seasons (13 episodes instead of the traditional 20-26) are emerging. These short seasons tend to be more interesting, more exciting, and more unified than the sprawling longer seasons of yore.
A recent article in Forbes celebrates how books evolved to become highly condensed sources of knowledge. The author notes that:
Books are the most radically condensed form of knowledge on the planet. Every hour you spend with [my book] is actually about three years of my life. You just can’t beat numbers like that.
In addition to distilling work into a highly efficient form of expression, books demand concision — from chapters to indices to volumes, books impose constraints, order, and limits on expression, forcing authors to be brief and organized. Many e-books now suffer from bloat, especially if a professional editor was not involved — chapters that ramble, stories that sag, or scenes that lack structure.
Twitter is another source of useful constraints, as the limited space for expression drives creativity. In addition, newer features (hashtags and embedded links that don’t count toward your character limit) invite structured collaboration, so much so that “hashtag” has entered common parlance, as spoofed in a Jimmy Fallon skit.
Concise communication is a natural goal in time-compressed environments. Teenagers and others invented emoticons and other abbreviations to make their time online more efficient. These constraints have driven creativity and saved time.
Many editors are imposing lower word-count limits on authors, cutting the lengths of papers by half in some instances. This is a popular move with readers. In our flagship journal, we are publishing more single-page versions of articles in print with full articles online. These full articles are also shorter than they’ve been in the past, and the editors moved the word limit down by another few hundred words at a meeting last year. Concision doesn’t have to be imposed purely by physical barriers. It can be imposed by fiat and policy.
The Journal of Experimental Medicine imposed constraints on data supplements in 2011, stating simply, “Enough Is Enough” in an editorial on July 4 of that year, a date that was perhaps a coincidence, perhaps a message. This was a year after the editor of the Journal of Neuroscience made a similar decision. While constraints along these lines are not making headlines like they once did, they are more common.
We are currently awash in papers, and some people are celebrating the lack of boundaries and lower capacity constraints in the current scholarly publishing ecosystem. This is especially striking as the market tilts more toward author preferences, and as publishers seem to become more servants of authors than servants of readers. In theory, this could be a good thing — everything gets published.
In practice, it’s less clear that the benefits of unlimited publishing can be realized given our current filtering environment. PLOS ONE published more than 3,000 articles in January 2014 — more than 95 articles per day. Assuming each took 20 minutes to read, a single day’s output would consume about 32 hours of reading, most of an entire work week. Clearly, keeping up is not possible for full-time scientists. Even skimming the contents of PLOS ONE’s January output would take hours. Content flowing at that rate is not filtered for use. It is an invitation to filter.
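For the curious, the arithmetic behind those figures is simple enough to sketch in a few lines of Python (using the same assumptions as above: roughly 3,000 articles in the month and 20 minutes per article):

```python
# Back-of-the-envelope reading burden for one month of PLOS ONE output.
# Assumptions match the text above: ~3,000 articles in January 2014,
# 20 minutes of reading per article.
ARTICLES_PER_MONTH = 3000
DAYS_IN_MONTH = 31
MINUTES_PER_ARTICLE = 20

articles_per_day = ARTICLES_PER_MONTH / DAYS_IN_MONTH        # ~96.8 articles
hours_per_day = articles_per_day * MINUTES_PER_ARTICLE / 60  # ~32.3 hours

print(f"Articles per day: {articles_per_day:.1f}")
print(f"Hours of reading per day's output: {hours_per_day:.1f}")
```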
Our filters aren’t necessarily up to snuff yet. In this movie analogy, altmetrics serve as a proxy for an article’s box office. While box office is a type of filter, it’s purely quantitative, and only qualitative via some questionable inferences (if you believe popularity = quality). We have yet to create our own version of a Rotten Tomatoes-like experience — a place that scales up a critical consensus and helps readers understand, before using an article, what that experience might be like. As you may have read, I’m supporting efforts to bring something like this into being, but other solutions are also possible and are sorely needed.
Matching services have been developed to help authors find outlets, but there are fewer services for readers to filter content, despite all sorts of efforts to create them (semantics, heuristics, curation, etc.). Journal brands and titles only go so far, especially when there are so many new brands and titles emerging. Google is perhaps the most useful filter currently, and that’s become a default starting point for many scientists. However, it’s also a bit of a sad statement when a great general search engine has to do for us what we cannot seem to do for ourselves.
Are we facing filter failure? Or just failure to filter? Shouldn’t we have our own “Phantom Edit” of the expanding and endless scholarly communications universe?
Lack of physical boundaries allows a new level of intellectual sprawl, and some see this as a virtue. I am not convinced. Boundaries and constraints signal relevance and drive innovation. Creating new, useful, functional, scalable boundaries requires work — which itself requires invention, investment, and risk. Publishers are made to take risks. That’s what we do. New filters are possible, and new editorial models might be right in front of us. Where are the editors of the future? Where are the filters of tomorrow?
And, yes, this post could have been much shorter. Sorry about that. These blogs know no limits.
Discussion
17 Thoughts on "Intellectual Sprawl — The Importance of Constraints on Authors and Other Creators"
At Paperpile we’ve kept a close eye on companies in the area of literature alerts and reference management, and I’m constantly blown away by the number of services available to scientists and doctors for filtering content. To name a few off the top of my head (a sketch of scripting one of these follows the list):
– PubMed saved searches (http://www.ncbi.nlm.nih.gov/guide/howto/receive-search-results/)
– Google Scholar alerts (http://googlescholar.blogspot.com/2010/06/google-scholar-alerts.html)
– Mendeley Suggest (http://mystudiouslife.wordpress.com/2012/01/10/mendeley-v1-3-mendeley-suggest/)
– ScienceScape (a new entry to the field backed by some really interesting technology, http://sciencescape.org/about)
– Nowomics (http://nowomics.com/)
– PubChase (https://www.pubchase.com/)
– PubCrawler (http://pubcrawler.gen.tcd.ie/)
– Read by QxMD (aimed at doctors, https://www.readbyqxmd.com/)
– JournalTOCS (more about aggregating than filtering, but along similar lines and heavily used http://www.journaltocs.ac.uk/)
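To show how little code some of these take to use, here is a minimal sketch of polling PubMed’s public E-utilities endpoint for recent matches to a saved search. The query term and time window below are placeholders, not recommendations:

```python
# Minimal sketch of a "saved search" alert against NCBI's public E-utilities API.
# The query string and time window are placeholders; adjust to your own interests.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def recent_pubmed_ids(query, days_back=7, max_results=50):
    """Return PubMed IDs for articles matching `query` added in the last `days_back` days."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": query,
        "reldate": days_back,   # restrict to the last N days
        "datetype": "edat",     # Entrez date (when the record was added)
        "retmax": max_results,
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        result = json.load(resp)["esearchresult"]
    return result.get("idlist", [])

if __name__ == "__main__":
    for pmid in recent_pubmed_ids("molecular evolution"):  # placeholder query
        print(f"https://www.ncbi.nlm.nih.gov/pubmed/{pmid}")
```

Wrap that in a cron job that emails you the links and you have a bare-bones alerting service.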
Even PLOS One itself is an interesting example. Yes, the journal publishes far too many articles for one scientist to keep up with. But the very first link—in huge text right next to the PLOS One logo—is “Subject Areas,” which provides an iTunes-like genre chooser to drill down to specific subject areas. Once there (for example, I went to Molecular Evolution http://www.plosone.org/browse/molecular_evolution) you can subscribe to automatic email updates and an RSS feed.
That wasn’t too hard at all: 3 or 4 clicks and I’ve narrowed down from 88,000 articles to a “mere” 2,000, and I’ll get an email every time someone publishes in my field of interest. That’s a handful of articles a day — a perfectly manageable amount.
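For readers who prefer scripts to inboxes, consuming such a feed takes only a few lines. Here is a minimal sketch using the third-party feedparser package; the feed URL below is a placeholder for whichever subject-area feed you subscribe to:

```python
# Minimal sketch: turn a subject-area RSS/Atom feed into a short reading list.
# Requires the third-party feedparser package (pip install feedparser).
# FEED_URL is a placeholder; substitute the feed offered on the subject-area page.
import feedparser

FEED_URL = "https://example.org/molecular_evolution/feed"  # placeholder

def latest_entries(feed_url, limit=10):
    """Yield (title, link) pairs for the most recent items in the feed."""
    feed = feedparser.parse(feed_url)
    for entry in feed.entries[:limit]:
        yield entry.get("title", "(untitled)"), entry.get("link", "")

if __name__ == "__main__":
    for title, link in latest_entries(FEED_URL):
        print(f"{title}\n  {link}")
```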
So all in all, I would tend to disagree with the suggestion that we’re facing a complete filter failure! PLOS’ subject areas and alerting system didn’t just come out of thin air, so I suspect they’re putting significant effort into targeting PLOS One articles toward the most relevant readers.
Now, maybe the argument is more along the lines of “there are many pretenders to the throne, but nobody has hit the sweet spot yet” — with which I would wholeheartedly agree. But the filtering problem is one that all scientists and science technology companies are keenly aware of, and many bright people are actively building tools to help the situation.
The one thing that I think academic publishers could do to encourage progress in this area would be to improve the amount of cross-publisher data sharing. Take CrossRef for example: it’s collected an amazing centralized database of citation data from many hundreds of publishers, but it’s missing vital abstract and cited-by information. This makes CrossRef virtually useless as a basis for creating interesting recommendation or filtering services. As a result, it’s no wonder that every single service mentioned above uses either PubMed or a proprietary database (e.g. Google Scholar, or in Mendeley’s case, increasingly Scopus). This unfortunately reduces the number of filtering / discovery options available to researchers in non-biomedical fields, and reduces the opportunity for non-biomedical journals to receive useful, targeted referrals via cross-publisher filtering or recommendation systems.
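To make the abstract gap concrete, here is a minimal sketch against the public CrossRef REST API that samples records for a query and reports how many expose an abstract at all. The field names assume the API’s current JSON schema, and the query is a placeholder:

```python
# Minimal sketch: sample CrossRef records for a query and report abstract coverage.
# Assumes the public CrossRef REST API (api.crossref.org); field names reflect its
# JSON schema, in which 'abstract' is present only when a publisher deposited one.
import json
import urllib.parse
import urllib.request

API = "https://api.crossref.org/works"

def abstract_coverage(query, rows=100):
    """Return the fraction of sampled records that include an 'abstract' field."""
    params = urllib.parse.urlencode({"query": query, "rows": rows})
    with urllib.request.urlopen(f"{API}?{params}") as resp:
        items = json.load(resp)["message"]["items"]
    if not items:
        return 0.0
    return sum(1 for item in items if "abstract" in item) / len(items)

if __name__ == "__main__":
    print(f"Abstract coverage: {abstract_coverage('molecular evolution'):.0%}")  # placeholder query
```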
PLOS ONE has made their filtering system more sophisticated over the past couple of years, which underscores that they knew this was a weakness of their approach. You can get to far more granular information than in years past. Then we get to the other part of their filter, which is not designed to tell me whether what’s in there is important or novel. In fact, they eschew that part of the filter. So, if I’m a user and I’m not guaranteed to find something that’s novel or useful even if it’s filtered to be more like the information I want, I’m not as incentivized to filter using their tools.
It’s interesting to watch PLOS ONE reinvent the wheel. They still have some distance to close before they have the kind of filtration that has been shown to be historically useful to readers — filtering for relevance, novelty, accuracy, and importance.
But, then again, we all do. To me, we’ve taken our eye off the filtration problem to some degree while enjoying the lack of volume constraints. As you note, coming up with new ways to filter that don’t have unintended limitations is a challenge.
I would also add that the length of the list of filters you’ve provided might itself indicate that even filtering the filters has become a challenge.
Indeed Gregory, how readers filter is a very complex business, which involves a lot more than just journal branding. STM has an interesting graph in Figure 25 here: http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf. It shows fourteen different starting points that readers use to find what they want to read. Journals figure into several but in different ways. Discovery is the ultimate filter.
The report you linked to didn’t have 25 figures, but I’d very much like to see the graph you describe. Could you check the link?
Sorry it is figure 15, not 25, on page 40 of http://www.stm-assoc.org/2012_12_11_STM_Report_2012.pdf.
Here is the original study they are citing: http://www.renewtraining.com/How-Readers-Discover-Content-in-Scholarly-Journals-summary-edition.pdf. Interesting stuff.
While it’s true that no sane person would try to read (or probably even skim) all 3,000 papers published by PLoS One in January, I’m not sure that’s a real problem. PLoS One isn’t a publication that you read — it’s a brand that is applied to articles meeting certain criteria. As long as the articles have good metadata, then their usefulness scales very well: the fact that all of them have the PLoS One brand doesn’t lower their usefulness. If you’re only looking for a limited number of articles on a particular topic, then you should be able to keep your reading experience manageable without much effort, and the fact that the articles you find were originally published as part of a 3,000-unit batch in January of 2014 doesn’t make much difference.
There’s another possible problem, though: maybe so much good science is now being done that it’s not possible to keep up with it, even within narrow disciplines. But if that’s the case, then the problem isn’t a lack of publishing boundaries — it’s humankind’s ability to produce more high-quality information than its individual members can effectively absorb. I’m not sure that filtering out good science is the right solution to that problem. Then again, I don’t know what the right solution would be.
Filtering, to me, doesn’t mean just shutting content out. It means separating it meaningfully. PLOS ONE has acknowledged that it has a problem doing this. The sheer volume of output suggests this problem, as well. Even rising to “a certain standard” — and a standard that’s debatable — isn’t really separating it meaningfully. Even with metadata, it’s really just the beginning of filtering meaningfully.
It also gets to the question of what is a research report versus an addition to data in a field. If we could create clearer reasons for publication (other than just academic credit) and perhaps reward adding data to the field as a substitute for publication, we may get fewer papers and more data. As you note, this is a complex, profound problem, and I think we’re not overcoming inertia in dealing with it.
The points in this article are well taken, but it is true nevertheless that the constraints imposed on book publishing by the print environment did not match up well with the needs of scholarship, not only in limiting length but also in making it economically infeasible to publish short books. It got to a point where some university presses would simply not consider a manuscript longer than 400 pages. At the same time, it was difficult to publish books under 100 pages even though, intellectually, there was a case to be made for writing at that length. (The wonderful long essays of scholars like Albert O. Hirschman come to mind as preeminent examples of this genre.) Now, with ebooks, there has been a renaissance of writing short books, and no longer must a scholar whose large tome would have scared off publishers in earlier times despair that publishing such a work is impossible. Editorial control still needs to be exercised at both ends of this spectrum to ensure quality, but there is something to be said for loosening the constraints that were artificially imposed by print.
For another great example of the value of constraints, compare any BBC Radio4 30-minute documentary (e.g. http://www.bbc.co.uk/radio4/programmes/formats/documentaries/current — tightly edited, structured, high information density) with more-or-less any tech podcast from amateur broadcasters, such as those from 5by5 (http://5by5.tv/broadcasts — rambling, wooly, unstructured, low information density).
Teenagers and others invented emoticons and other abbreviations to make their time online more efficient.
This appears to be a novel use of ‘invented’. If one completely ignores the prehistory of modern emoticons, the only conclusion to be had is that they originated as disambiguation markers in 1982, most certainly without any contribution from teenagers.
You’re cherry-picking from the sentence. “Teenagers and others invented emoticons and other abbreviations” is much more inclusive than just the predecessors of modern emoticons. Your selectivity leaves out all the other abbreviations, as well as the “others” involved in their invention and development. It took a community, and that community continues to proliferate. #relax
Hi Kent
Really interesting take. I think the separation of content production and curation is a key trend in all information production environments.
You mention altmetrics. When we wrote the altmetrics manifesto, one of our main goals was to advocate the use of different metrics for filtering. I would hope we can develop diverse quantitative metrics that get beyond just popularity. For example, the number of uses of the data associated with a publication by independent authors.
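As a toy illustration only (the data structures below are hypothetical stand-ins, not any existing altmetrics schema), such a measure might look something like this:

```python
# Illustrative sketch, not an existing altmetrics tool: count how many later papers
# reuse a dataset while sharing no authors with the paper that produced it.
from dataclasses import dataclass, field

@dataclass
class Paper:
    title: str
    authors: frozenset          # author identifiers
    datasets_used: set = field(default_factory=set)

def independent_reuses(source, dataset_id, corpus):
    """Count papers in `corpus` that use `dataset_id` and share no authors with `source`."""
    return sum(
        1
        for paper in corpus
        if dataset_id in paper.datasets_used
        and paper is not source
        and not (paper.authors & source.authors)
    )
```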
Anyway, it would be interesting to hear your thoughts on whether you think such measures are even possible.
Paul
Usage = popularity to me. There is nothing qualitative about it, except by broad inference. Until there are new metrics that measure some qualitative aspects, we’re not measuring those. It’s possible, and I have some ideas along these lines (e.g., SocialCite), but they are tricky to develop, and there are probably only a few.
I wonder as well if lengthy, rambling content is not just the product of reduced material constraint, but is also a result of the increasing time pressures on academics – they don’t have time to edit their papers as effectively as they once might have.
As Churchill supposedly said:
“I’m going to make a long speech because I’ve not had the time to prepare a short one.”
I agree that there is an explosion of manuscripts coming out in all forms of media, and that some sort of ‘filter’ is necessary. One problem, as I see it, is that everyone’s filter is likely to be different. As a senior editor of a scholarly biomedical research journal, I see many papers come through that tell me something new scientifically, yet (in my opinion) they are unfit for publication because they don’t move us closer to (again my opinion) the ultimate goal of biomedical research – to improve human lives and treat or eradicate disease. Thus, the manuscript telling me about some novel signaling pathway in a rodent model of human disease IS scientifically worthwhile but I don’t feel compelled to publish it if I can’t link it back to human disease. This is my bias, and I’m Ok with it. I am more than aware that others have their biases as well. And I fully understand that in order to make progress, research must build on that which has come before it; if we don’t publish this paper, others may not be able to draw upon the results to advance the medical field. This can be quite the dilemma!
If the science is meritorious, there is hardly anything that can be said that can’t be said more succinctly. We as editors have a responsibility to make the work more readable, and authors have the responsibility to be open to such editing as long as the meaning doesn’t change.
I’ve rambled on too much already…
Eric, fortunately the dilemma you describe is largely resolved by discovery technology. As long as someone publishes the rodent signaling pathway results the article can probably be found by those who need it (which may be years later) using search. You can focus on your brand without guilt.
But I sometimes think that journal publishers do not give discovery enough credit or attention. I happen to come from a background — federal research reports — where there are millions of documents reporting results but no journals. The importance of discovery is easier to see in that context.