Every day that I sit down to finish this post, I revise the opening example; there is a constant supply of fresh ones available to make my central point about the integrity of workflow. Did you read the epic Atlantic editor’s note (latest count: 11 paragraphs), last updated on Friday, about a recent article? Here’s how it begins:
Editor’s Note: After The Atlantic published this article, new information emerged that has raised serious concerns about its accuracy, and about the credibility of the author.
The article was about elite parents’ obsession with the potential of elite sports to improve their children’s chances of admission to elite colleges. Full of colorful detail, it was written by a journalist with a track record of colorful detail that turned out to be either false or plagiarized. Erik Wemple of The Washington Post revealed the deceptions in the Atlantic article with the application of some common sense and a bit more fact-checking. Assessing the magazine’s response, Wemple concluded that “News organizations too rarely exhibit this sort of self-criticism, even when their missteps are plain to see.” At least this one “followed a colossal lapse in judgment with an admirable exercise in accountability.”
In the information business, which broadly speaking includes research, journalism, and scholarship, the infrastructure of information and our response to its design flaws is under intense scrutiny. Last month, researchers Neal Smith and Aaron Cumberledge reported in the Proceedings of the Royal Society that their study of 250 randomly selected citations from five high-impact general science journals found a quotation error rate of 25%. Quotation errors, as they explained, are different from citation errors. The latter “include typographical violations of citation styles, redundant or excessive citations” or even, quite crucially, “missing citations.” Quotation errors occur when “the reference given fails to substantiate the claim for which it is given.” They are “far more difficult and time-consuming to discover” and dangerous because “they can result in the propagation of unverified or incorrect information.”
Workflow may be 2020’s big opportunity. Here in the US, the pandemic, a presidential election, and protests against police violence have joined climate crises, including an especially grave western wildfire season and a hyperactive Atlantic hurricane season, to make 2020 a year when the contrast between low-quality and high-integrity information is especially sharp. It feels like institutional norms for assessing value and veracity have either been bent completely out of shape or broken. Institutional skeptics, and historians of media, will note that perfectly developed and communicated information was always a chimera. We are not falling from a state of information grace. Those norms were never that robust in the first place, and they have been regularly, inherently biased, and are thus poorly equipped for the requirements of inclusive reporting and scholarship. In short, we are not so much in a period of declining confidence in expertise, media, and science as in a period when elite institutions can no longer make presumptions about their audience.
How we produce the highest quality information and how we assess information quality comes down to workflow: the process by which we move information from generation to dissemination. Even through the fog of 2020 we can see neon lights around several categories of resolvable workflow issues.
One is underinvestment in workflow, such as disinvestment in the editorial process. In late August, Emma Eisenberg gave voice to her worries in an essay titled “Fact Checking is the Core of Nonfiction Writing. Why Do so Many Publishers Refuse to Do It?” With plenty of sourcing, Eisenberg described how little fact-checking publishers actually do, and that when it does happen, “it is the writer’s legal responsibility, not the publisher’s, to deliver a factually accurate text.” She referenced recent high-profile cases of errors in books by Jill Abramson and Naomi Wolf. She described one problem as acquiring editors’ unfamiliarity “with the fundamentals of reporting and fact-checking.” Publishers assume that authors, as experts in their fields, are best placed to manage fact-checking themselves.
But fact-checking your own work is not easy. Fact-checking is its own rigorous process. Eisenberg wrote about how she paid $4,000 to have her 110,000-word manuscript fact-checked. Her fact-checker, she reports, “was excellent – and she found mistakes. Lots of them.” It was an intensive process; they “talked on the phone every day at the height of the fact-check.” Yet when she asked other authors, only half had hired a fact-checker; the rest had “instead relied on a combination of their own diligence, their publisher’s copyediting and / or legal vetting process, as well as correcting mistakes in the paperback brought to their attention by readers of the hardback.”
Eisenberg has called for “a common set of guidelines that set expectations, rates, and protocols for fact-checking.” This is key because “the stakes of not fact-checking books only continue to get higher, as it has become easier and easier to destroy a book’s credibility with a few clicks.” And the attendant impact on the full information ecosystem is significant.
A second problem is when the workflow is under-analyzed. There may be straightforward ways to improve processes, but we’ll only find them if we are regular and rigorous in reviewing our processes. What are the values that our processes are meant to uphold, and are they doing that? Smith and Cumberledge pointed out that having editors do more citation checking is an obvious answer to the problem they identified, but there were other interim solutions that could make a difference, too. None of the journals they studied required page numbers or paragraph numbers for citations; thus, in the verification process, a huge amount of time was taken up looking through lengthy books, reports, and other articles being cited simply to find the information referenced. Requiring page or paragraph numbers would increase the speed of verification. I confess I was floored by this observation, though it is true that some citation styles call for more specific citations than others. Can we not agree that if the infrastructure of knowledge requires us to verify assertions, we need to be able to do so?
Relatedly, when the workflow has long incorporated significant biases, the entire research or writing project is suspect. Reporter Erin Blakemore wrote for The Washington Post last week about sexism in research and science, profiling the #MedBikini social media phenomenon, in which women in science challenged sexism in ideas about what constitutes professional conduct (and attire). Highlighting the ways that sexist assumptions can be “baked into the system” of research, scholarly publication, and careers, Blakemore echoed similar coverage of racist bias and its effects. The British Medical Journal’s special issue on racism in medicine earlier this year highlighted bias at every stage of medical care, from healthcare outcomes to professional training.
If underinvestment and failure to scrutinize our processes are significant issues, so too is a workflow that is simply not fit for its purpose. The size and scale of disinformation and low-quality information is a subject of enormous angst. How pernicious, and how effective, is disinformation or misinformation? A more important issue may be whether and how we are set up to handle it. Media scholar Jay Rosen has argued that the mendacity of the current US president is actually under-covered, because traditional journalistic codes are exploited to give him, and his lies, more and more airtime. Journalism is designed to cover “timeliness, conflict, anything totally unexpected, anything seemingly consequential, anything that involves a charismatic person whose human interest looms large in the news.” This means that disinformation or misinformation can actually be amplified when it fits those parameters. In this blizzard, Washington Post media analyst and critic Margaret Sullivan argued that fact-checking will be “increasingly pointless.” A new strategy, a new workflow, is called for.
In my field, one of the most political and media-fueled controversies concerns the New York Times 1619 Project. The 1619 Project brought together creative writers, journalists, and scholars to write about the centrality of race, racism, and Black people to American history. An assertion in the lead essay by the project’s designer, Nikole Hannah-Jones, prompted an outcry. Leslie M. Harris, a historian at Northwestern University, wrote one of the most influential assessments of the issue for Politico back in March. Harris wrote that this one specific claim, that protecting slavery was a key motivation for the American Revolution, was wrong, and that it would be used to discredit the project in its totality. The claim, though, went through the fact-checking process, with Harris trying to flag it as incorrect and other sources providing what seemed like confirmation. As Harris and I later wrote for Politico, fact-checking was a process ill-suited to capturing what was or wasn’t a reasonable representation of the history as scholars currently understand it.
Way back in 2012, I remember feeling a lot more curiosity and a lot less urgency about how workflow influenced our information economy. Those of you who read John D’Agata and Jim Fingal’s The Lifespan of a Fact, or some account of it, may remember that it is an odd book, oddly structured and formatted, but nonetheless so full of drama that it became a play starring Daniel Radcliffe, reviewed in the New Yorker no less. And yet how very 2012 it was in the way it veered between the existential and the whimsical. The book recounts the tug of war between an author and the fact-checker assigned to a draft of a magazine essay. The author sometimes prefers the poetic to the strictly factual, and the fact-checker works to keep him somewhat honest if not strictly accurate. For every assertion of fact there is either a confirmation with a record of the source, or a query noting that the assertion cannot be confirmed or that contradictory information has turned up. The fact-checker’s correspondence with the author as well as the editor forms a running narrative on the state of truth, such as it can ever be fully discerned.
The meta-lesson of that book, or the one I choose to embrace now, is that absolute truth may be elusive. We are mutually committed to discovering, reviewing, and sharing an accumulating knowledge even while we know that the work is rarely fully conclusive. The state of research and research communication about COVID-19 is a case in point. The information available to the general public about the value of masks and ventilation, given the research on the aerosolization of the virus, is much more sophisticated than it was just six months ago. But revealing the process by which we work towards knowledge is absolutely critical. It’s important to reveal that process so we can better refine it, but also to keep faith with our audiences.