Image: Escher Symmetry, by Pieter Musterd via Flickr

It’s not easy being an expert these days, it seems. Every time you turn around, there’s someone challenging you, raising an objection, making a point. And the proliferation of channels has the potential not only to dilute your message but to level the playing field for antagonists.

But are experts worth defending from the onslaught of the new information economy?

In an article earlier this summer in the New York Post, David Freedman, obviously pimping his book “Wrong: Why Experts Keep Failing Us–and How to Know When Not to Trust Them,” talks about the pace of change in the medical literature in particular, assigning a reliability problem to a high-churn publishing environment in which frequent, novel findings are prized over infrequent and/or non-novel results. This pursuit of novelty to fill hectic publishing and academic schedules erodes trust in a cumulative fashion as refutation, disputation, and uncertainty emerge in a literature supposedly bent on producing something approximating the truth. And it goes beyond the medical literature, into the ubiquitous split-screens of television shout shows and the blogosphere:

Most people just don’t know how to pick it out from the constant stream of flawed and conflicting findings — the housing market is recovering, the housing market is getting worse, video games deaden children’s brains, video games boost rapid thinking. That’s why much of the public has simply stopped listening to experts, and sometimes with potentially catastrophic results, as when parents don’t get their children recommended vaccines and treatments, or believe they can eat whatever they want, or invest their savings in whatever stocks seem exciting.

The problem returns to filter failure — yet again. But which filter is failing?

That’s a harder question to answer.

There is a common sense filter that all journals sometimes fail, their staffs seduced by some combination of relationships, reputation, and results. There are the uncertainties of study design, study execution, results analysis, researcher rigor, and statistical analysis. In other words, there are problems with doing and reporting science that a research report can elide, minimize, or obscure, either consciously or accidentally. Teasing these out is something that can thwart even the best editors. As Michael Gazzaniga has written:

. . . to separate the verifiable from the nonverifiable is a conscious, tedious process that most people are unwilling or unable to do. It takes energy and perseverance and training. It can be counterintuitive. It is called analytical thinking. It is not common and is difficult to do. It can even be expensive. It is what science is all about. It is uniquely human.

Then there’s the filter of peer review, rife with well-known flaws and limitations. Amplifications and syntheses of research results — the media, surveillance publications, abstracts read in isolation, and interviews with authors — can further complicate results reporting and create unwarranted impressions in the minds of readers and the general public. And more outlets mean that authors’ enthusiasm for their findings can overwhelm the more measured conclusions in the source article.

And every channel creates an opportunity for a naysayer or critic or skeptic to appear.

In the era of abundance, traditional filters may be overwhelmed, and experts are looking a bit beat up. And it’s not just abundance, but the tone that abundance has assumed — disputatious, restless, and relentless. A RAND paper covered in the Publishing Frontier blog talks about the extra steps of “bulletproofing” that experts have to attend to in an increasingly vocal and polarized information sphere:

To some of us who were trained to believe that the most important part of the QA process is the scientific peer review, this can sometimes be an alien concept. Of course, the scientific peer review is the sine qua non; the science must speak. But if controversy lurks, bulletproofing is essential. This involves thinking in advance about the political lines of attack against the results and then identifying individuals who might come from those political quarters. Such individuals should be brought into the review process.

Left unprotected in a world filled with relentless demagoguery and spin, experts can flee, become reluctant to engage, and have gaps exposed by unfriendly forces.

Or perhaps experts are a vestige of a mass-media age of scarcity, when information advantages were captured by a select few and exploited for power. In a provocative essay, J.P. Rangaswami writes that the Web is relieving asymmetries in information creation and access, education, and design — positive developments overall. So, while expertise may seem to be eroding, this erosion is in fact part of a leveling function in which experts have to compete in a more dynamic, less authoritarian information environment based on abundance:

There’s been a lot of talk about the web and the internet making us dumber. I think it’s more serious than that. What the web does is reduce the capacity for asymmetry in education. Which in turn undermines the exalted status of the expert. The web makes experts “dumb”. By reducing the privileged nature of their expertise.

Of course, facts are still facts. Or are they? Virginia Heffernan, writing in the New York Times’ column “The Medium,” reflects on the quaint art of fact-checking — how it was done, how it has changed, how “Google became the only thing,” and how fact-checking has become part of everyone’s everyday life now, with some worrisome side-effects:

. . . fact-checking has assumed radically new forms in the past 15 years. Only fact-checkers from legacy media probably miss the quaint old procedures. But if the Web has changed what qualifies as fact-checking, has it also changed what qualifies as a fact? I suspect that facts on the Web are now more rhetorical devices than identifiable objects. But I can’t verify that.

Were we smarter with more books on the shelves and a cadre of experts leading us into the future? Or are we smarter with overlapping, exchanged, shared, compounding, sometimes confusing information available widely, with experts diminished or disposable?

This seems to be a debate that will only be settled with the passage of time.

Or is the expert of the future the one who finds a way to have it all?

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

6 Thoughts on "Rectifying Asymmetries — Experts Are Battered From All Sides, But Are We Any Smarter?"

Welcome to the age of controversy, which, by the way, goes hand in hand with analytical thinking. Note too that it is often experts challenging experts, which they have always done, but now it is easy and public. And if that public is confused thereby, it just shows that they did not understand the true situation before. Uncertainty has always been the byproduct of enlightenment.

I completely agree. If you read the fact-checking piece by Heffernan in the NYTimes, it’s clear that scarcity drove certainty even for the best fact-checkers. Now that abundance is here, we have information and controversy in abundance. The world is looking more like a set of academic departments than ever before, I’ll bet.

It’s no fun if you agree with me. Here’s a related question: why are scientific controversies so well hidden in journal articles? No one ever says someone else is wrong, even when their data does. You have to be an expert on the topic to see that Smith and Jones are fighting it out, unless you go to the conference and see them tearing away in the Q&A. The journals seem to suffer from an excess of politeness, which obscures the state of the science.

I think that’s a cultural thing. Academics are hesitant to disagree in writing because they’ll look disagreeable and could be embarrassed if it turns out their disagreement was baseless. The reputational advantages of journals cut both ways — they raise articles toward the exalted heights but dim the voices of the practitioners. Because there’s so much pride involved, it takes a real gambler to risk putting a stupid objection on their permanent record.

When will the culture change? Probably in about 20 years, when most researchers will be used to disagreeing online.

Your comments on “amplification” and the impact of media companies on authors and publishers immediately brought to mind the controversial recent case of Darwinius as a prime example:

Once the paper was released, scientists and journalists began looking at the technical details of the new fossil. Surprisingly, what the paper presented differed quite sharply from what was being marketed to the public… In public Darwinius was being presented as one of our ancestors—particularly by Hurum—while the scientific study offered a different hypothesis which its authors did not feel fully comfortable advocating. The fossil primate seemed to have two distinct identities: Darwinius, the object of scientific scrutiny; and “Ida,” the media darling.

Also the following quote from Brian Eno:

I notice that the idea of ‘expert’ has changed. An expert used to be ‘somebody with access to special information’. Now, since so much information is equally available to everyone, the idea of ‘expert’ becomes ‘somebody with a better way of interpreting’. Judgement has replaced access.

The problem, of course, is that the way most people decide who has the “better way of interpreting” is usually “whoever says what I want to hear,” rather than “who is right.”
