
A few weeks ago, the Columbia Journalism Review released a study comparing print and online magazine editing, contrasting their copy editing, fact-checking, and error correction practices. The bottom-line finding? Online editors treasure speed and audience first and foremost, so they spend less time fact-checking and copy editing. And when they correct errors, they do it quickly and quietly unless the error is egregious.

In a related post, I noted how many of the findings and attitudes in the study smacked of “vestigial elitism” among traditional editors and highlighted the cultural divide between print editors and digital editors.

Ultimately, print and online seemed to have different ways of measuring quality.

Now, Ben Elowitz, writing for PaidContent.org, has another, very interesting perspective on these issues, captured in an excellent essay entitled, “Traditional Ways of Judging ‘Quality’ in Published Content Are Now Useless.”

The article bears close attention and multiple readings, especially if you are afraid you may suffer from “vestigial elitism” (or believe in Seth Godin’s “future quo”). To summarize, Elowitz covers four quality measures from the age of scarcity that haunt us still, but which he thinks are less relevant to the audience than ever before:

  1. Credential — Elowitz believes that utility and availability trump credentialed information for consumers now.
  2. Correctness — Elowitz argues that correctness was patronizing, and users neither need nor want to be patronized by editors in a world of abundant information. They can figure out errors themselves.
  3. Objectivity — Elowitz posits that this was a demand of scarcity. Now, with multiple sources and ingrained browsing habits, users blend various opinions, and they want that blend. Objectivity is actually boring.
  4. Craftsmanship — Elowitz maintains that in the digital realm, content beats craft goods, so time spent doing anything other than creating content is wasted time.

It’s an interesting premise, and each observation merits some translation into the world of scholarly publishing.

Perhaps the one that jumps out first is “credential.” In academia, credentials still carry a lot of weight — where you were published matters to tenure committees, promotion decisions, and the like. However, while a scaffolding of credentials is likely to endure longer in scholarly publishing, utility and availability provide a competitive infrastructure from which to hang relevance and importance. As Phil Davis has noted in his studies, freely available content is downloaded more than paywalled content, yet there is no discernible citation advantage. This seems to strike at the heart of the matter. If the framework of rewards and incentives around credentialing changes to include or favor downloads, for instance, the balance may shift entirely, and credentialed (but slow and irrelevant) content may find itself on the short end of the bargain.

Jumping to the end of the list, the idea that craftsmanship is dispensable is one that I think hits an emotional hot spot for most of us. To think that the lovely papers, typography, offset color printing, and brand logotypes we treasure don’t matter much to users — well, that just hurts. But it’s probably true. We are probably spending far too much time, money, and emotional energy preserving a 19th-century craft in the 21st century. Perhaps it’s time to become digital natives, in the sense that we stop measuring our aesthetic achievements by print standards. After all, which is more valuable over the next few years — the printed logo on the face of your product, or your URL?

Correctness and objectivity — those two are very interesting, indeed. There are microtensions in each, I sense. Is correctness more important than speed and relevance? In a fragmented information sphere, the questions, “Correct for whom? And for what purpose?” certainly arise, and may justify speed. Objectivity is oddly full of subjectivity — is it more a voicing style than a reality achieved? Even statisticians disagree on the importance and meaning of some apparently objective techniques.

Ultimately, there’s a range to each of these two. How correct do you need to be? And how much objectivity is reasonable and achievable? The invitation is to extend the range of acceptable practices for the sake of digital relevance and speed.

It’s not that we should abandon these dimensions of quality entirely, just that they don’t matter as much as they used to, as Elowitz states:

It’s not that these four criteria are entirely dead: Regular errors, lapses of disclosure, and sloppy storytelling are all bound to negatively impact a publisher’s reputation, inasmuch as they negatively impact the audience. But they are no longer the relevant yardsticks for “quality,” in the sense that scoring fantastically high on them is no recipe for success. That’s because they are all in the eye of the wrong beholder.  Looking at these four old criteria for quality, they all share the same source:  they are based on the belief that a publisher controls the audience’s experience; and the audience’s access to content is scarce. Sure, this was true 10 years ago, but today it’s absolutely false.

The bigger question for publishers is whether adherence to these quality standards, through explicit decisions or implicit cultural norms, holds us back. Are we too bound to print standards, old standards of correctness, and the mock voice of objectivity to create the content users today want?

What kind of quality is irrelevance?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

7 Thoughts on "Traditional Measures of Quality: Irrelevant, Miscast, Outdated, and Inhibiting?"

Elowitz’s analysis is valuable in many respects, and his conclusions do get to the heart of the gap between the online and print worlds (and I use the term “print” as a mindset, not a medium). But, based on my own experience with professional audiences, I find his conclusions a bit oversimplified: “consumers,” even those who widely use content developed primarily for the online environment, are not monolithic.

For example, I don’t know that “utility and availability trump credentialed information,” but I do think authoritative information (by any number of defensible measures), with high utility, made available widely and quickly, can and does trump much traditionally vetted and styled information that is crafted for the ages.

My own experience is that this is especially true in scientific and clinical disciplines where the state of the art moves quickly. As a result, simply looking at downloads and citations doesn’t tell the whole story; hence the search, so far incomplete, for new, reliable, relevant measures of impact and value in a digital world.

I could make similar arguments about Elowitz’s other points, well observed and useful for discussion as they are.

That said, the question you pose about whether scholarly publishers are too bound to traditional standards is a good one, and the answer, in my view, is “yes.” For all the work being done to harness the power of digital technology in the scholarly publishing world, many of our colleagues, even as they talk about “getting out of print,” still use the web primarily as a distribution mechanism, bound by a traditional print-focused mindset. If not, why would most publishers still be so slow to embrace powerful multimedia tools and non-traditional but valued scholarly communications models (e.g., appropriately developed summaries of key professional meetings) not just as supplements to traditional papers or “front of the book” matter, but as fundamental elements of scholarly communication?

How true, and how sad. Soon all publications will be Fox “News”.

I don’t take that dim a view of the changes. In fact, I don’t think Fox “News” is a good model at all for Quality 2.0, if you’ll forgive the term. As Elowitz says, these quality dimensions still matter, but are not the defining measures now. So, doing them extremely well won’t win the day. You have to do them well enough so that the audience doesn’t think you’re full of it, sloppy, biased, or ridiculous, which is why Fox “News” is not the end state.

I disagree with the statement about “correctness” – and there may be an important distinction between “wanting” and “needing” to be patronized.

A recent study by the Center for Studies in Higher Education at Berkeley identified the Google Generation’s lack of ability to independently assess material online as a significant problem for scholarly communication.

This seems to imply that we still need “craftsmanship” as a heuristic to signal to readers what is credible. Craftsmanship requires effort, which means that only successful brands would be able to afford the outlay of resources.
