It’s not easy being an expert these days, it seems. Every time you turn around, there’s someone challenging you, raising an objection, making a point. And the proliferation of channels has the potential not only to thin your message but to level the playing field with antagonists.
But are experts worth defending from the onslaught of the new information economy?
In an article earlier this summer in the New York Post, David Freedman, obviously pimping his book “Wrong: Why Experts Keep Failing Us–and How to Know When Not to Trust Them,” talks about the pace of change in the medical literature in particular, attributing a reliability problem to a high-churn publishing environment in which frequent, novel findings are prized over infrequent and/or non-novel results. This pursuit of novelty to fill hectic publishing and academic schedules erodes trust in a cumulative fashion as refutation, disputation, and uncertainty emerge in a literature supposedly bent on producing something approximating the truth. And the problem goes beyond the medical literature, into the ubiquitous split-screens of television shout shows and the blogosphere:
Most people just don’t know how to pick it out from the constant stream of flawed and conflicting findings — the housing market is recovering, the housing market is getting worse, video games deaden children’s brains, video games boost rapid thinking. That’s why much of the public has simply stopped listening to experts, and sometimes with potentially catastrophic results, as when parents don’t get their children recommended vaccines and treatments, or believe they can eat whatever they want, or invest their savings in whatever stocks seem exciting.
The problem returns to filter failure — yet again. But which filter is failing?
That’s a harder question to answer.
There is a common-sense filter that all journals sometimes fail, their staffs seduced by some combination of relationships, reputation, and results. There are the uncertainties of study design, study execution, results analysis, researcher rigor, and statistical analysis. In other words, there are problems with doing and reporting science that a research report can elide, minimize, or obscure, either consciously or accidentally. Teasing these out is something that can thwart even the best editors. As Michael Gazzaniga has written:
. . . to separate the verifiable from the nonverifiable is a conscious, tedious process that most people are unwilling or unable to do. It takes energy and perseverance and training. It can be counterintuitive. It is called analytical thinking. It is not common and is difficult to do. It can even be expensive. It is what science is all about. It is uniquely human.
Then there’s the filter of peer review, rife with well-known flaws and limitations. Amplifications and syntheses of research results — the media, surveillance publications, abstracts read in isolation, and interviews with authors — can further complicate results reporting and create unwarranted impressions in the minds of readers and the general public. And more outlets for authors mean that their enthusiasm for their findings can overwhelm the more measured conclusions in the source article.
And every channel creates an opportunity for a naysayer or critic or skeptic to appear.
In the era of abundance, traditional filters may be overwhelmed, and experts are looking a bit beat up. And it’s not just abundance, but the tone that abundance has assumed — disputatious, restless, and relentless. A RAND paper covered in the Publishing Frontier blog talks about the extra steps of “bulletproofing” that experts have to attend to in an increasingly vocal and polarized information sphere:
To some of us who were trained to believe that the most important part of the QA process is the scientific peer review, this can sometimes be an alien concept. Of course, the scientific peer review is the sine qua non; the science must speak. But if controversy lurks, bulletproofing is essential. This involves thinking in advance about the political lines of attack against the results and then identifying individuals who might come from those political quarters. Such individuals should be brought into the review process.
Left unprotected in a world filled with relentless demagoguery and spin, experts can flee, become reluctant to engage, and have gaps exposed by unfriendly forces.
Or perhaps experts are a vestige of a mass media age of scarcity, in which information imbalances were captured by a select few and exploited for power. In a provocative essay, J.P. Rangaswami writes that the Web is relieving asymmetries in information creation and access, education, and design, all of which he counts as net positives. So, while expertise may be viewed as eroding, this erosion is in fact part of a leveling function in which experts have to compete in a more dynamic, less authoritarian information environment based on abundance:
There’s been a lot of talk about the web and the internet making us dumber. I think it’s more serious than that. What the web does is reduce the capacity for asymmetry in education. Which in turn undermines the exalted status of the expert. The web makes experts “dumb”. By reducing the privileged nature of their expertise.
Of course, facts are still facts. Or are they? Virginia Heffernan, writing in the New York Times’ column “The Medium,” reflects on the quaint art of fact-checking — how it was done, how it has changed, how “Google became the only thing,” and how fact-checking has become part of everyone’s everyday life now, with some worrisome side-effects:
. . . fact-checking has assumed radically new forms in the past 15 years. Only fact-checkers from legacy media probably miss the quaint old procedures. But if the Web has changed what qualifies as fact-checking, has it also changed what qualifies as a fact? I suspect that facts on the Web are now more rhetorical devices than identifiable objects. But I can’t verify that.
Were we smarter with more books on the shelves and a cadre of experts leading us into the future? Or are we smarter with overlapping, exchanged, shared, compounding, sometimes confusing information available widely, with experts diminished or disposable?
This seems to be a debate that will only be settled with the passage of time.
Or is the expert of the future the one who finds a way to have it all?