The principle of entropy states that systems tend toward their most stable state, usually the state of lowest energy. Many markets exhibit the same behavior, becoming dominated over time by low-quality, low-cost products. Market leaders are usually “good enough”: they aren’t well-loved by users, but they fulfill users’ needs at a lower cost than higher-quality products that provide a significantly better experience. Wired recently published several articles looking at this principle, and they should be required reading, and food for thought, for any publisher.
“The Good Enough Revolution: When Cheap and Simple Is Just Fine” takes a look at a variety of markets where highly advanced, feature-laden products are shunned in favor of cheap, low-quality products. The article has everything you’d expect from a current look at technology and behavior, including quotes from Clay Shirky and a reference to The Innovator’s Dilemma. It talks about the rise of low-fidelity MP3 files over better-sounding CD and album tracks. Other examples include changes in military hardware, health clinics, and legal aid. Probably the best example is the emergence of the Flip at the expense of fancier video cameras, made even more telling by the recent announcement of iPod Nanos with video cameras, which are sure to out-“good enough” the Flip and drive it from the market lead.
The article has some flaws, chief among them its focus on changing consumer desires rather than price:
. . . what consumers want from the products and services they buy is fundamentally changing. We now favor flexibility over high fidelity, convenience over features, quick and dirty over slow and polished. Having it here and now is more important than having it perfect.
I’d argue that all of that takes a backseat to price, which is the real driving force here. I don’t think anyone deliberately prefers lower-quality products. However, most people do prefer to spend less money, and if a lower-quality product is good enough to meet their needs, they’ll buy it instead of more expensive options whose extra quality is superfluous. There are also network effects and lock-in issues at play. And, as anyone who used a Mac in the early days of Windows can tell you, the rise of inferior but cheaper products is certainly not a new phenomenon.
The second article is “Why Craigslist Is Such a Mess”, and it delves into the site’s continued success despite its horrible design, confusing interface, and lack of modern features:
Think of any Web feature that has become popular in the past 10 years: Chances are craigslist has considered it and rejected it. If you try to build a third-party application designed to make craigslist work better, the management will almost certainly throw up technical roadblocks to shut you down.
There’s a great deal of detail here about the founder’s eccentricities and the company’s odd ways of doing business, but the important point is their reasoning for keeping the site simple: users don’t seem to care about the design or exclusion of new features as long as the site does what they want it to do. And as above, the key factor is that Craigslist is cheap, if not free, for every transaction.
What does this seemingly inevitable evolutionary path toward lo-fi and cheap mean to scholarly publishers? Despite our shared interest in creating better, more feature-laden products for readers, the marketplace is likely to take a different direction. It’s difficult to reconcile — I’m a firm believer in quality and that it separates our products from the free offerings we increasingly compete with online. Is there a place for both “good enough” and high quality in the market? Or are our efforts just building bigger dinosaurs likely to be outpaced by smaller, more efficient mammals?
A few strategies that reflect this trend have emerged and are worth watching. Journal readers have clearly chosen PDF as the preferred format for reading papers. In some ways, this is the equivalent of MP3. It’s strictly limited compared with the more flexible and connected information delivery available in the HTML versions of articles. The online versions offer things like movies and audio files, along with advanced tools for commenting and interacting with the authors and other readers. These limitations don’t seem to matter to readers — the PDF version is “good enough,” and the online enhancements we’re all so excited about are not seen as important. If we were to take this to its extreme, we’d eliminate almost all online features and model our journals on a Craigslist-like simplicity, offering just a listing of abstracts and downloadable PDF files. (Hmm, sounds a lot like arXiv.org to me.)
Another streamlining strategy is evident from PLoS ONE. PLoS ONE takes the position that much of the usual journal editorial process is both unnecessary and counterproductive. The journal removes the often time-consuming and expensive level of editorial oversight and careful article selection. Articles are peer reviewed, and those judged to be technically sound are published. Most publishers assume that readers want this additional editorial level of filtering. The scientists I’ve spoken with argue that their schedules are overcrowded, and that they do read specific journals because they know the editors will make sure all content lives up to a particular level of expectation. PLoS ONE makes the counterpoint that this filtering is something the market can live without and eliminates it, using the cost savings to power an author-pays business model that allows open access for readers.
E-books are another area where this principle comes into play. E-books lack much of what a paper book offers: color, layout, typography, design, and actual ownership and the benefits that come with it (resale, loaning, etc.). The question is whether these things are really necessary, and whether lower-quality e-books are good enough for most readers, particularly given their lower price point and their advantages in convenience and immediacy. If historical precedent is any indicator, the answer is in e-books’ favor. While e-books open up new avenues for content and interfaces, cheap dumps of unformatted text may end up the dominant form.
All that said, there are many ways that the scholarly publishing market differs from other markets, so the trends that emerge may not be identical. For journal publishers, there’s a level of insulation between our users (readers and authors) and our actual customers (librarians, who pay for subscriptions). The graduate student reading a paper isn’t usually paying directly out of their own pocket for that article — because of this, they may be less willing to sacrifice quality.
While cheap and “good enough” dominate most markets, there’s still room for high-end companies like Apple to thrive and remain profitable. For many scholarly publishers, often not-for-profits or parts of academic institutions or societies, market domination is not the goal. We don’t have shareholders to please, and can do things because they’re good for a field of study or improve communication, rather than because they improve the bottom line.
But it pays to understand the concepts behind “good enough” products — you’ll either be producing them or competing against them.
16 Thoughts on "Is “Good Enough” Good Enough for You?"
David – I strongly agree that price rather than flexibility is frequently the driving force with ‘good enough’ products. In terms of PDFs, however, I think the key is portability.
AIFF/WAV and MP3 files are equally portable and both will play on your iPhone; it’s just that MP3s are much smaller. The alternatives to PDF files (XML/HTML and assorted linked files), by contrast, are not easily downloadable and transferable between devices…yet.
An e-reading device that most readers are comfortable with – rather than just the early adopters who favor the Kindle or are happy to squint at papers on an iPhone – may completely change the landscape. In such a scenario, it might be XML files that are analogous to MP3 files – not PDF files, which could become as obsolete as a prerecorded cassette because they are incompatible with the new device.
The lesson for publishers is to bet on the format that the ‘good-enough’ e-reader will use. I’d bet that will be a markup language, not a file type rooted in typesetting.
Good points, Richard. As noted in the article, there is often a layer of insulation between the reader and price in the scholarly publishing world, so you’re right that price is not the driving factor behind PDF’s dominance. Accessibility is definitely one of the main factors that drove adoption. Internet connectivity was nowhere near as ubiquitous back when readers were first figuring out online journals. On top of that, accessing a subscription behind a paywall is still difficult for many when they’re away from their home institution. So the PDF lets them have access to the paper anywhere, anytime.
The other driving factor was readability. While it has probably declined in recent years, the dominant method for reading papers has long been printing out the PDF and reading it on paper, not on the screen. The PDF provides the highest-quality reading experience, including the professional layout and typography employed by most journals. Reading off a screen means lower resolution, and printing out the HTML version is wildly variable and less readable than the PDF. But all that may be changing now, as readers grow more comfortable reading off a screen, and portable devices like iPhones, BlackBerrys, and Kindles are changing reading habits.
PDFs are pretty miserable on the iPhone. The problem is that XML versions are pretty bad as well, particularly due to the lack of standards, which results in wild variability from device to device. I’m thinking that, given the nature of accessing most subscription-based journals, we’re still going to need a downloadable version of papers to allow the portability and accessibility that made the PDF so important; even if everyone has an iPhone, they won’t always be able to access their institutional subscription to read an online version. Could the interactive, advanced features of the online version be built into this format, springing to life when connectivity is available? Then again, judging from the examples in the linked articles, those things may be superfluous, and even a poorly designed, hard-to-read format may win out over readability and advanced features if it offers better accessibility or a better price.
This encapsulates a battle librarians have been fighting, perhaps fruitlessly, for years. The best example I can think of, beyond the dumb-PDF vs. smart-HTML dichotomy, is that between Google Scholar and the Cadillac index databases libraries subscribe to for their users. GS is a classic example of a bare-bones (and, if you believe Peter Jacso, a very dumb) system, the Craigslist of the scholarly comm world. A good searcher can tick off a whole list of reasons why someone should not use GS to do “real” searching, yet this argument is lost on people — students especially, but also PhD researchers who should know better — who are quite happy with “good enough”. They’re not interested in all the bells and whistles a top-flight tool will give them, even when they see them demonstrated. They’ll say “wow, that’s cool” and then immediately go back to Google.
A few years ago we were sanguine about this and decided to support Google Scholar in hopes that it would improve, but it hasn’t. Yet it’s still good enough for many. I wonder when the other shoe will finally drop and we have to decide about the subscription databases.
Sustainability may be another factor. One perception is that the simpler a thing is, the less likely it will be to ‘break’, be it a blender or an applet.
One note on Richard Sever’s comment, above. I fully agree w/r/t XML vs. PDF. If you think about it, PDFs are built to make printing easy. There have been computer displays that were 8-1/2 x 11″, but those are/were specialty units intended to give the user a full-page view of what *the printout* would look like. PDFs are still popular, but I predict the format will eventually become a liability as more and more people no longer wish to print what they are about to read.
Another “good-enough” product to add to your list, David: blogs. We now seem to think that reading 500-word hastily-written opinion pieces by people who don’t read widely or write very well can substitute for finding (and sometimes actually paying for) thoughtful, carefully planned and well-written articles and books by knowledgeable people who look beyond the latest fads and press releases. As someone who scans dozens of blogs and a smaller number of Twitter feeds every day, I have reluctantly come to accept as “good enough” the half-baked observations about technology and publishing that pass for insight these days, and I accept them because it is my job to ferret out the significant trends in these subject areas from all the noise that is the blogosphere. One gets better at knowing where to look for the likely nuggets, but even the good stuff is often only “good enough.” The Scholarly Kitchen is usually an exception in this regard and is therefore high on my must-read list. In your well-organized piece, for example, you actually reference two long, well-researched and informative articles from a respected source of technology information and opinion, refer appropriately to Christensen’s influential book (and by implication his ideas of “disruptive technology” that underlie some of your arguments), and relate your themes effectively to the state of scholarly publishing. Thank you. I have now read at least one good blog post today.
Hard to believe I didn’t mention blogs, since as we all know, the act of blogging is the number one subject area for blogs. It’s a really good point though, and much to blame for the decline in quality journalism and the issues currently faced by newspapers and magazines.
This is an interesting discussion for many reasons, but I see things a little differently. We are accustomed to a journal world, for example, in which articles are temporally unassociated, bound in paper, and published long after they are ready to be. Yet, many equate this with “quality” even today. Quality journalism often only emerges with historical hindsight — Woodward and Bernstein were great, but many other journalists of the era were apologists for Nixon or missed the story. We forget about them now. Today’s mainstream journalism is certainly weird (FOX News being the most twisted of the lot), with the move to infotainment being the most troubling aspect.
Blogging is just a communication technology. I actually think good blogs don’t insult readers’ intelligence the way mainstream media does; there are many good blogs, and they are a tonic in the information environment. What is polluting the mainstream media space isn’t journalists but misplaced executives and consolidation that has gone too far, in my opinion.
Kent, I think there are separate issues here between the quality of the content itself and the quality of the overall user experience. Both a Rolls-Royce and a horse cart will get me from New York to Boston, the same net result, but my enjoyment of the journey will probably differ.
And while there are high quality blogs available, I do think there’s a drop-off in information delivered when one goes from a professional writer who has directly interviewed a news subject to a blogger who is summarizing the article written by that writer to a 140 character tweet summarizing that blog entry. Though often, that tweet is “good enough” for most people’s needs.
But when that tweet contains a link to a complete web site of information or a video of actual events or a multi-dimensional data display — well, that’s another matter entirely.
Absolutely. The question is whether the reader follows the link and reads the actual story, or if the one line summary is all they want. The linked source may have video of the event, interviews with participants, background stories, etc., but they may all be superfluous to the reader. The primitive “good enough” tweet often substitutes for the detailed, nuanced explanations.
And one can always ask Jeff Goldblum about how that works….
Was he listening to “War of the Worlds” at the time? Gullibility is an audience trait, not a fault of journalism, as is laziness.
One challenge I think publishers face is to get real experts and make them facile with modern tools. As long as tools that create major audiences alienate expert users, or are viewed as “lower quality” even though they are quite the opposite, there’s going to be a gap. If publishers can close that gap, it’s better for everyone.
I think you are assuming that all consumers want the same thing. Take the MP3 vs. higher-quality CD example: there will always be listeners who value sound quality and will prioritize their spending to get high-quality equipment. They also prioritize their time to enable them to listen to music in the best possible conditions.
But if you are looking for something to occupy your mind while you take the bus to work, a portable device that enables you to carry a lot of music with you easily is probably all you need. The sound quality is not problematic because the listening conditions are not ideal for high quality sound anyway.
In terms of scholarly publishing, there may be similar splits, though they are likely to be less about different readers than about the same reader at different points in time.
There are times when all you want to know is what research is being done in a particular field and what the main findings are. Skimming abstracts will get you that information and help you feel moderately up to date. This is one of the reasons publishers’ e-mailed tables of contents are such a fabulous idea: at least you have been able to see what is coming out, even if you haven’t had time to read it (though I bet folks then download the PDFs of the papers they want so they can read them later).
But sometimes you really want to delve into the detail. You want to read the best material. You want to know how the study was done. You are planning to engage with that work in your own work. You need to know more.
As for reading online versus printing out, the two require different layouts. Certainly on a regular computer, there is way too much scrolling involved if you use a portrait layout and a font size suitable for printing. To read it, you need to increase the font size, and then the whole page doesn’t fit on the screen, so you spend all kinds of time scrolling. Landscape orientation and a bigger font would let you read easily on a computer screen, but then you have to sit in your office chair or run down the laptop battery to read on the couch. Don’t even think about reading in the bath!
The point is that “quality” is not fixed. It is contingent on the criteria that are important to a particular consumer. And it has long been known that there are “mass” consumers who will opt for low quality at low price, and smaller markets who will pay extra to get higher quality and better service.
Knowing the audience is key.
As noted in the article, “good enough” is not the only viable business model and it is certainly possible to thrive by appealing to smaller niches willing to pay more for quality.
User needs do change depending on the situation. For example, away from home with no internet connectivity, a PDF is a lot more valuable than a beautifully designed webpage. There’s a balance to be struck between trying to be all things to all people and just going for the lowest common denominator. Although the companies that do the latter really well seem to dominate their markets.