I have a love-hate relationship with copy editors. I love when they fix the atrocious writing of others but hate when they start mucking with my own prose.
My co-author and I recently spent hours undoing the damage done by a copy editor who insisted on changing perfectly good English into writing that was unrecognizable to us. We wasted time arguing over the semantic difference between “since” and “because,” and over how rewriting someone’s sentence can subtly change its meaning. The end of the relationship came after the copy editor insisted on spelling out acronyms even when they formed part of a journal’s proper name, like PLoS Biology.
You can’t argue with someone with a stylebook.
As for love, I must disclose that I was once in love with a copy editor. Her photographic memory was perfect for finding discrepancies separated by hundreds of pages of text. She was also prone to finding inconsistencies in things I had said weeks apart. “Did I really say that?” I discovered that it was useless playing Ronald Reagan when she was unwilling to play Nancy. Like a manuscript, I was rejected.
Now where was I?
There are many who argue that copy editing adds real value to published products, although these claims usually come from copy editors themselves. Some publishers have given up on copy editing entirely. Others apply a light touch to the title and abstract but go no further. Editors are keen to push authors into the hands of a rapidly growing external academic writing and editing industry rather than absorb the costs of copy editing themselves.
In the end, I suppose that it doesn’t matter what copy editors believe. The market will decide for them.
In a recent piece in the April print edition of Against the Grain (“Copyediting’s Role in an Open-Access World”), Sandy Thatcher, Director Emeritus of the Penn State Press, decided to find out exactly how much copy editors add to publications in the humanities and social sciences.
Thatcher enlisted a few copy editors to compare a small group of final author manuscripts deposited in Harvard University’s DASH (Digital Access to Scholarship at Harvard) repository with their published versions. Many DASH records include links to the publisher’s version of record.
He reported that most errors were minor: misspellings, subject/verb disagreements, dangling modifiers, and others Thatcher calls “stylistic infelicities.” His editors also spotted more important problems in the author manuscripts, such as quotation errors, citation errors, and errors in tables and figures. One editor found a more disconcerting error in an author version: the omission of author identity details, the conflict of interest statement, funding information, and an acknowledgments section.
While Thatcher is careful to note that most copy editor corrections were minor, made to improve the clarity of the language or to enforce a “house style,” he highlights the problem of citation and quotation errors and how these types of mistakes have a way of being perpetuated in the scholarly record.
For me, the lack of author declarations and conflict of interest statements in an author manuscript is more problematic, as these omissions change the interpretation of a document. I’m much more concerned with enforcing transparency than with enforcing style.
Institutional repositories are not set up as editorial offices. Those who build digital repositories or solicit manuscripts from publishing faculty are not in a position to check for errors or omissions. Their role is to gather up author manuscripts and make them publicly available. Behind this work rests the assumption that these manuscripts are “good enough” for public consumption.
Thatcher’s small but detailed analysis shows that, for the most part, author manuscripts may indeed be good enough for public consumption, but we should acknowledge some caveats with his study:
- First, his study was conducted on the output of one of the most prestigious research institutions in the world. We should acknowledge that Harvard scholars, some of them leaders of their fields, are also highly skilled at producing accurate, well-written manuscripts. Had Thatcher selected a different repository, we might have seen more errors.
- Second, Harvard is located in an English-speaking country. Even if some of its scholars come from the non-English-speaking world, we can assume that they are competent enough with the language to make it to Harvard and join its faculty.
- Last, many DASH records include a link to the published version of the document. While DASH does its best to point to the authoritative version of the article, it falls to the reader to find out whether significant changes were made to the article after deposit. I am unaware of any repository, other than PubMed Central, that links article correction and retraction statements to a deposited article.
Depending on how you interpret Thatcher’s findings, “good enough” may only be good enough for Harvard University.
Two papers published in 2007 in Learned Publishing also investigated the changes between author manuscripts and final published versions. Ed Wates and Bob Campbell (“Author’s version vs. publisher’s version: an analysis of the copy-editing function”) tracked changes in 189 articles published in 23 journals (145 articles from STM journals, 44 from humanities and social science journals). The largest share of changes (42.7%) involved incorrect or missing references; 34.5% were typographical, grammatical, or stylistic; 13.6% dealt with missing data; and 5.5% were textual changes that altered the meaning of the text. They write:
None of these materially altered the conclusions of an article, which is more the purview of the peer-review process, but they did produce a more consistent and accurate article of record. This is particularly important in the electronic environment, where accuracy of linking, for example, could be critical in establishing correct citation data.
In a later issue of Learned Publishing, David Goodman and others compared 24 self-archived author manuscripts in biochemistry and the social sciences with their published versions (“Open access and accuracy: author-archived manuscripts vs. published articles”). While they did not investigate citation errors, their results were similar to those of Wates and Campbell. Some of the author manuscripts were identical to the published version, suggesting that the publisher was not providing copy editing services. In others, where the English was deficient, the copy editor rewrote much of the article. While most editorial changes improved the readability of the papers, in some cases Goodman preferred the author’s manuscript. He writes:
While differences were indeed found in article pair comparisons, none was serious enough to invalidate significant data, conclusions, or overall validity of the findings; none would warrant a published correction or retraction.
We should remind ourselves that all three studies investigate the changes that take place after a manuscript has passed through peer review. The purpose of copy editing is not to detect serious flaws in theory, methodology, analysis, or interpretation (that is the responsibility of peer review) but simply to make a paper more consistent and readable. We should therefore not expect to find fatal errors at the copy editing stage, as the Goodman study implies.
Still, we are left hanging on whether copy editing improves an academic article enough to justify its persistence, and ultimately that question rests on how much you value its services.
Are accepted author manuscripts “good enough” for public consumption?