Every day that I sit down to finish this post, I revise the opening example; there is a constant supply of new ones available to make my central point about the integrity of workflow. Did you read the epic Atlantic editor’s note (latest count: 11 paragraphs), last updated on Friday, about a recent article? Here’s how it begins:

Editor’s Note: After The Atlantic published this article, new information emerged that has raised serious concerns about its accuracy, and about the credibility of the author.

The article was about elite parents’ obsession with the potential of elite sports to boost their children’s chances of admission to elite colleges. Full of colorful detail, it was written by a journalist with a track record of colorful detail that turned out to be either false or plagiarized. Erik Wemple of The Washington Post revealed the deceptions in the Atlantic article with the application of some common sense and a bit more fact-checking. Assessing the magazine’s response, Wemple concluded that “News organizations too rarely exhibit this sort of self-criticism, even when their missteps are plain to see.” At least this one “followed a colossal lapse in judgment with an admirable exercise in accountability.”


In the information business, which broadly speaking includes research, journalism, and scholarship, the infrastructure of information and our response to its design flaws are under intense scrutiny. Last month, researchers Neal Smith and Aaron Cumberledge reported in the Proceedings of the Royal Society that their study of 250 randomly selected citations from five high-impact general science journals found a quotation error rate of 25%. Quotation errors, as they explained, are different from citation errors. The latter “include typographical violations of citation styles, redundant or excessive citations” or even, quite crucially, “missing citations.” Quotation errors occur when “the reference given fails to substantiate the claim for which it is given.” They are “far more difficult and time-consuming to discover” and dangerous because “they can result in the propagation of unverified or incorrect information.”

Workflow may be 2020’s big opportunity. Here in the US, the pandemic, presidential elections, and protests against police violence have joined climate crises, including an especially grave western wildfire season and a hyperactive Atlantic hurricane season, to make 2020 a year when the contrast between low-quality and high-integrity information is especially sharp. It feels like institutional norms for assessing value and veracity have been bent completely out of shape, or broken. Institutional skeptics, and historians of media, will note that perfectly developed and communicated information was always a chimera. We are not falling from a state of information grace. Those norms were never that robust in the first place, and they have been regularly, inherently biased and are thus poorly equipped for the requirements of inclusive reporting and scholarship. In short, we are not so much in a period of declining confidence in expertise, media, and science as in a period when elite institutions can no longer make presumptions about their audience.

How we produce the highest quality information and how we assess information quality come down to workflow: the process by which we move information from generation to dissemination. Even through the fog of 2020 we can see neon lights around several categories of resolvable workflow issues.

One is underinvestment in workflow, such as disinvestment in editorial process. In late August, Emma Eisenberg wrote about her worries in an essay asking, “Fact Checking is the Core of Nonfiction Writing. Why Do so Many Publishers Refuse to Do It?” With plenty of sourcing, Eisenberg described how little fact-checking publishers actually do, and how, when it does happen, “it is the writer’s legal responsibility, not the publisher’s, to deliver a factually accurate text.” She referenced recent high-profile cases of errors in books by Jill Abramson and Naomi Wolf. She described one problem as acquiring editors’ unfamiliarity “with the fundamentals of reporting and fact-checking.” The assumption is that authors, as experts in their fields, ought to be best placed to manage fact-checking.

But fact-checking your own work is not easy. Fact-checking is its own rigorous process. Eisenberg wrote about how she paid $4,000 to have her 110,000-word manuscript fact-checked. Her fact-checker, she reports, “was excellent – and she found mistakes. Lots of them.” It was an intensive process. They “talked on the phone every day at the height of the fact-check.” Yet when she asked other authors, only half had hired a fact-checker; the rest had “instead relied on a combination of their own diligence, their publisher’s copyediting and / or legal vetting process, as well as correcting mistakes in the paperback brought to their attention by readers of the hardback.”

Eisenberg has called for “a common set of guidelines that set expectations, rates, and protocols for fact-checking.” This is key because “the stakes of not fact-checking books only continue to get higher, as it has become easier and easier to destroy a book’s credibility with a few clicks.” And the attendant impact on the full information ecosystem is significant.

A second problem is when the workflow is under-analyzed. There may be straightforward ways to improve processes, but we’ll only find them if we are regular and rigorous in reviewing those processes. What are the values that our processes are meant to uphold, and are they doing that? Smith and Cumberledge pointed out that having editors do more citation checking is an obvious answer to the problem they identified, but there were other interim solutions that could make a difference, too. None of the journals they studied required page numbers or paragraph numbers for citations; thus, in the verification process, a huge amount of time was taken up simply combing through lengthy books, reports, and articles to find the information being referenced. Requiring page or paragraph numbers would speed verification. I confess I was floored by this observation, though it is true that some citation styles call for more specific citations than others. Can we not agree that if the infrastructure of knowledge requires us to verify assertions, we need to be able to do so?

Relatedly, when the workflow has long incorporated significant biases, the entire research or writing project is suspect. Reporter Erin Blakemore wrote for The Washington Post last week about sexism in research and science, profiling the #MedBikini social media phenomenon, in which women in science challenged sexism in ideas about what constitutes professional conduct (and attire). Highlighting the ways that sexist assumptions can be “baked into the system” of research, scholarly publication, and careers, Blakemore echoed similar coverage of racist bias and its effects. The BMJ’s special issue on racism in medicine earlier this year highlighted bias at every stage of medicine, from healthcare outcomes to professional training.

If underinvestment and a failure to scrutinize our processes are significant issues, so too is a workflow that is simply not fit for its purpose. The size and scale of disinformation or low-quality information is a subject of enormous angst. How pernicious, and how effective, is disinformation or misinformation? A more important issue may be whether and how we are set up to handle it. Media scholar Jay Rosen has argued that the mendacity of the current US president is actually under-covered because traditional journalistic codes are exploited to give him, and his lies, more and more airtime. Journalism is designed to cover “timeliness, conflict, anything totally unexpected, anything seemingly consequential, anything that involves a charismatic person whose human interest looms large in the news.” This means that disinformation or misinformation can actually be amplified when it fits those parameters. In this blizzard, Washington Post media analyst and critic Margaret Sullivan argued that fact-checking will be “increasingly pointless.” A new strategy, a new workflow, is called for.

In my field, one of the most political and media-fueled controversies concerns the New York Times 1619 Project. The 1619 Project brought together creative writers, journalists, and scholars to write about the centrality of race, racism, and Black people to American history. An assertion in the lead essay by the project’s designer, Nikole Hannah-Jones, prompted an outcry. Leslie M. Harris, a historian at Northwestern University, wrote one of the most influential assessments of the issue for Politico back in March. Harris wrote that this one specific claim, that protecting slavery was a key motivation for the American Revolution, was wrong, and that it would be used to discredit the project in its totality. The claim, though, went through the fact-checking process, with Harris trying to flag it as incorrect and other information providing what seemed like confirmation. As Harris and I later wrote for Politico, fact-checking was a process ill-suited to capturing what was or wasn’t a reasonable representation of the history as scholars currently understand it.

Way back in 2012, I remember feeling a lot more curiosity and a lot less urgency about how workflow influenced our information economy. Those of you who read John D’Agata and Jim Fingal’s The Lifespan of a Fact, or some account of it, may remember that it is an odd book, oddly structured and formatted, but nonetheless so full of drama that it became a play starring Daniel Radcliffe, reviewed in the New Yorker no less. And yet how very 2012 it was in the way it veered between the existential and the whimsical. The book recounts the tug of war between an author and the fact-checker assigned to a draft of a magazine essay. The author sometimes prefers the poetic to the strictly factual, and the fact-checker works to keep him somewhat honest, if not strictly accurate. For every assertion of fact there is either a confirmation with a record of the source, or a query noting that the claim could not be confirmed or that contradictory information turned up. The fact-checker’s correspondence with the author and the editor forms a running narrative on the state of truth, such as it can ever be fully discerned.

The meta-lesson of that book, or the one I choose to embrace now, is that absolute truth may be elusive. We are mutually committed to discovering, reviewing, and sharing an accumulating body of knowledge, even while we know that the work is rarely fully conclusive. The state of research and research communication about COVID-19 is a case in point. The information available to the general public about the value of masks and ventilation, given the research on the aerosolization of the virus, is much more sophisticated than it was just six months ago. But revealing the process by which we work towards knowledge is absolutely critical. It’s important to reveal that process so we can better refine it, but also to keep faith with our audiences.

Karin Wulf

Karin Wulf is the Beatrice and Julio Mario Santo Domingo Director and Librarian at the John Carter Brown Library and Professor of History, Brown University. She is a historian with a research specialty in family, gender and politics in eighteenth-century British America and has experience in non-profit humanities publishing.

Discussion

9 Thoughts on "There is No Truth, There is Only Workflow"

I think it’s important that you noted the cost of fact-checking for the author in the post, as it brings up what seems to be one of the main conflicts in the current world of scholarly publishing: cost versus quality. We hear increasing outcry about reproducibility, about fraud, about the thoroughness of the peer review process. All of these things can be improved, but not without significant effort, which inevitably leads to increased costs. At the same time, we are hearing outcry against the costs of scholarly publishing: why is it so expensive? There are demands for price transparency and discussions of caps on APCs, all while subscription packages are being dropped for being unaffordable. Some go so far as to suggest that preprints should suffice for publication: just put everything up online, and over time people will come along and correct it or leave negative reviews, and that will solve all problems with inaccurate information.

I’m not sure how to reconcile these two seemingly opposing approaches, as the community clearly wants more, harder work done to create a higher quality output, and at the same time wants to cut spending, leaving even less in the coffers to pay for that extra work.

Also, I really love the idea of requiring page numbers on citations for the exact spot in the cited work where the claim being made is supported. I suspect that researchers, at least scientists unused to doing this, would rebel against the extra workload required.

I was amazed that page (or paragraph) numbers aren’t required, which just goes to show how often I read those high-impact-factor general science journals. But it seems like such a crucial fix to this problem, as well as a contribution to the structure of knowledge.

And I agree about the contradiction between the idea that it should all be free and the fact that quality is quite expensive. I think we have to believe that high-integrity information is crucial and then figure out how we do that at scale and at a workable cost.

The article you are referring to has an idiosyncratic idea of a correct citation. About half of the errors are ‘impossible to substantiate’ errors. Their example is:

“For example, an article might merely mention a novel material and cite a reference discussing that material. There is no specific proposition being made. The reference is simply giving additional background information”

I would not describe this as an error, and certainly not one that undermines the veracity of the article. Additionally, I checked the first example of an ‘impossible to substantiate’ citation, and it is a citation in a figure caption to the original data included as part of the figure, correctly reproduced. I think this isn’t particularly strong evidence that citation practices are a key issue in scholarly publishing, and making authors cite one page of a six-page article doesn’t seem to me a useful improvement, no matter the standard practice in the humanities.

We need to reimagine citations and citation managers, allowing authors to indicate the exact sentences they’re referring to in the article they’re citing, and to have this citation metadata carry over into the published manuscript.
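As a rough illustration only, here is a minimal sketch of what such sentence-level citation metadata could look like, loosely borrowing the shape of the W3C Web Annotation model’s TextQuoteSelector. The interface and field names below are hypothetical, not an existing citation-manager or publisher format, and the identifiers are placeholders.

```typescript
// Hypothetical sketch of sentence-level citation metadata.
// The selector shape loosely follows the W3C Web Annotation model's
// TextQuoteSelector (exact / prefix / suffix); everything else here
// is illustrative, not an existing standard.

interface TextQuoteSelector {
  exact: string;   // the exact sentence(s) in the cited work being relied on
  prefix?: string; // a little surrounding text, to disambiguate repeated phrases
  suffix?: string;
}

interface PinpointCitation {
  citedWork: string;            // DOI or other persistent identifier (placeholder below)
  locator?: string;             // page or paragraph number, if available
  selector?: TextQuoteSelector; // the specific passage the claim rests on
  claim: string;                // the claim being made in the citing manuscript
}

// Example with placeholder values; a checker only needs to confirm that
// `selector.exact` appears in the cited work and actually supports `claim`.
const citation: PinpointCitation = {
  citedWork: "doi:10.1234/example-doi",
  locator: "para. 12",
  selector: { exact: "An example sentence that the citing claim depends on." },
  claim: "The claim asserted in the citing article.",
};

console.log(`${citation.claim} (see ${citation.citedWork}, ${citation.locator})`);
```

Carried along in the manuscript’s metadata, a record like this would let an editor, reviewer, or reader jump straight to the passage being cited rather than rereading the whole source.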

Thanks, Tim, I think that’s true. That would be helpful. But also — I am amazed that the citations in these publications were ever allowed to be so vague. Of course I work in a discipline that embraces a citation style that’s very specific, but even through my Chicago Manual of Style-colored glasses, I have a hard time understanding why you wouldn’t require page or paragraph numbers at the very least.

Since the researcher and journal “reward systems” are based on citation counting, it’s understandable (but maybe not productive) that citation volume is favored.

Improved workflow will help with transparency, attribution, and granularity, but incentives are also important.

Richard Wynne – Rescognito
