Editor’s note: Today’s post is by Donald Samulack. As President, U.S. Operations of Editage, a division of Cactus Communications, Donald is actively involved in supporting the publication needs of the academic and publishing communities, managing workflow across global time zones, and raising the level of awareness and professionalism of good publication practices, worldwide.

While I do not profess to be an expert, I have often pondered the hows and whys of typeface and document layout efficiencies with respect to speed of reading, and the understanding of content from a neurophysiological perspective. The recent post by David Crotty prompted me to reflect on the human factors upon which the foundation of publishing is built, and some of the lost learnings regarding the mechanics of publishing that I find intriguing. While I was writing a comment on that post, my reply started to look more like a blog post of its own, and so here it is. There was another post, by Kent Anderson, that I was reminded of while writing this, and I say honestly, it is a timeless piece on font “authority” that is well worth re-reading.

While I find the study of font design and personality fascinating, I also find the study of font typesetting, as it relates to ease of reading and comprehension, equally fascinating. As a neuroscientist by academic training, I’ve always been interested in how the brain processes external information related to language. Most important to this blog post is how all of this relates to publishing workflows.


Many of us who lived through the era of lead typeface and the wet-ink printing press can appreciate various aspects surrounding the science of print. Unfortunately, much of the standardized learning from the explosion in the use of fonts in the 1950s, and from the subsequent revision of design, usability, and comprehension theories as they related to website and electronic screen projection of text in the 1980s and 1990s, has been lost from the average editorial dialog. We now take many things for granted in publishing, and in everyday life – and we very often don’t take the time to ask why.

As portrayed in the video in David’s recent post, and in the body and comments of Kent’s older post, not only does a font embody and create feeling and authority, but font design and layout are also known to affect speed of reading and comprehension of the meaning of the content. Font character kerning, angle, thickness, size, serif, ascender, descender, leading, line justification, and a variety of other text attributes have a significant impact on one’s ability to read and comprehend content.

Also, there are many quirks of the English language. For example, did you konw taht you can jlmbue the ltretes of a wrod and siltl usendrntad tiehr minaneg in a sncnetee, as lnog as the frsit and lsat lteerts of the wrod are in the rhgit pcale. While this may have been a challenge to read on first pass, it likely got easier as you worked through the sentence, or in re-reading the sentence. The effect works somewhat, even if the first and last letter rule is not applied (but you would likely need to know context in advance, and it is harder for longer words: idd uoy wnko htta uyo nac lemjbu eth sletetr fo a dwro dna tslli sduendrnta hietr gnminea ni a ncesenet). Word-scramble puzzle enthusiasts will certainly attest that context does matter!
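If you want to reproduce the jumbled-word effect on your own text, here is a minimal Python sketch (the `jumble` helper and its fixed random seed are purely illustrative choices, not anything from the original post) that shuffles the interior letters of each word while keeping the first and last letters, and any trailing punctuation, in place:

```python
import random

# Illustrative sketch only: shuffle the interior letters of each word,
# keeping the first and last letters (and trailing punctuation) in place.
def jumble(text, seed=42):
    rng = random.Random(seed)
    words = []
    for word in text.split():
        core = word.rstrip(".,!?;:")   # leave trailing punctuation alone
        tail = word[len(core):]
        if len(core) > 3:
            middle = list(core[1:-1])
            rng.shuffle(middle)
            core = core[0] + "".join(middle) + core[-1]
        words.append(core + tail)
    return " ".join(words)

print(jumble("Did you know that you can jumble the letters of a word "
             "and still understand their meaning in a sentence?"))
```

Running it on a few sentences of your own writing is a quick way to convince yourself that familiarity and context, not letter-by-letter decoding, carry much of the load.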

In short, the human brain does not read each letter or word in isolation, but it is constantly interpreting the meaning of the text by assessing font attributes, letter patterns, word associations, sentence structure, punctuation, font color, background, and a number of other visual parameters. As the sentence becomes longer or more complex, it is harder for the brain to make these interpretations. Even white space and margins are important. While a graphic designer may functionally understand all of these things, they may not be fully aware of the human neurophysiology behind them.

It is no wonder that we rely on a spell checker application to help us find spelling mistakes. Our brain may see them, but not pay attention to them (especially if we have read and re-read the text many times in one sitting), since the letter pattern was sufficient to allow interpretation in the context of the word associations and sentence structure provided.

Now, imagine the challenges of an English-as-a-Second-Language (ESL) author, as they write and submit a manuscript to an English language journal. They may not see the spelling mistakes, nor be aware of the existence of grammatical errors. Perhaps this puts our concern over the number of spelling and grammatical errors in their manuscript into perspective, when we should be concentrating on the content (more on this below, as it relates to reading distraction and peer review).

Copy and paste any long paragraph into a blank Word or PowerPoint document and adjust the font type, size, color, line length, line justification, background, or other such attributes, and judge for yourself how each of these factors affects readability and comprehension. Note that these are also useful tricks for finding errors in text that you have written, and are too familiar with.
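If you would rather experiment programmatically than in Word or PowerPoint, here is a minimal Python sketch (the sample paragraph and the chosen widths are arbitrary assumptions for illustration) that reflows the same text at several line lengths, so you can compare how line length alone changes the reading experience:

```python
import textwrap

# Arbitrary sample paragraph, used only for illustration.
SAMPLE = (
    "The human brain does not read each letter or word in isolation; "
    "it is constantly interpreting the meaning of the text by assessing "
    "font attributes, letter patterns, word associations, sentence "
    "structure, punctuation, and a number of other visual parameters."
)

# Reflow the same paragraph at several line lengths (in characters)
# and print each version for a quick readability comparison.
for width in (40, 60, 80, 120):
    print(f"--- {width} characters per line ---")
    print(textwrap.fill(SAMPLE, width=width))
    print()
```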

Not only have Times Roman (or similar serif fonts, for print) and Arial (or similar sans-serif fonts, for electronic screens) become standards over time; there are reasons why. In the early days of low-resolution monitors, it was useless to project a serif font, and hence the rise of the sans-serif fonts. Yet today, with nearly ubiquitous high-resolution screens, all of this is changing – for example, the fact that The Scholarly Kitchen uses a serif font is not lost on me. Kudos to the Chefs!

With the advent of early word processing software, like WordPerfect, academic manuscript and grant writing with approximately 76- to 80-character line lengths and 12-point font became standard. Such tools, in the early days of personal computing, did not allow for ease of creating columns, and full justification of text was the default. I will argue that as a result, unfortunately, the academic and legal communities adopted fully justified full-page line lengths as a document norm, naïvely thinking that there were advantages. However, any advantages one may perceive to be gained by using fully justified full-page lines fall by the wayside when it is realized that, by doing so, you are challenging the basic neurophysiology of comprehension.

(Side note: now that we do have more sophisticated software tools, why are people still writing documents this way, when it is time-tested that a left-justified, two-column page, plus white space and imagery, improves comprehension…but I am getting ahead of myself.)

The brain leverages pattern constructs to interpret the world. When reading written language, there are certain rules one needs to learn first. For the English language, one learns to read from left to right, and top to bottom. The brain is trained in this manner from childhood, but it needs cues. The cues we use are sentence anchors (font attributes, letter patterns, word associations, punctuation, etc.), paragraph anchors (line justification, number of lines, line spacing, etc.), and page anchors (white space, images, columnar structure, margins, etc.). Take a look at infant story books and you will immediately see the over-emphasis of these written English-language rules as a strategy to train early reading skills. (Side note: gender and cultural bias in these regards is a topic for a whole other blog post.)

This brings forward another thing to ponder with respect to ESL authors (especially Asian authors) – they are working with a different patterning skill-set from birth, and do not see the English-language patterning errors that native English readers do! This affects their ability to submit a manuscript that is free of spelling and grammatical errors, especially if the author does not have an appropriate English-language dictionary associated with their word processing software.

By writing fully justified full-page lines, you reduce the reader’s ability not only to read quickly, but also to comprehend. As I understand it, the time it takes for the eye to saccade from anchors on the right side of the page to the left side, find the next line anchor, interpret the pattern, and continue reading, while adjusting for any related confusion, affects the quality of neuronal input, short-term memory processing and consolidation, and language comprehension. In short, any undue effort required to match and align text anchor patterns is a distraction to the brain’s ability to process and comprehend the information. This effort increases significantly as the line length increases, the interline space decreases (or increases), and the left-right justification (or any other display factor) is altered.

Just for fun, try reading with one eye closed – you would be eliminating certain field-of-vision cues and the advantages of stereoscopic vision, and interrupting the processing in the visual cortex of the brain by removing half of the overall retinal input. Of the remaining input, approximately one-half of the field of vision of the one eye is processed independently by each half of the brain, and then pieced together (the visual pathways of the brain are designed in amazing ways that are quite different from those of other sensory systems). As such, the temporal (timing) and spatial (where on the neuron) signaling of neuronal input matters with respect to neuronal processing related to vision, and hence, to the comprehension of text – actually, it matters in all neuronal processing, and in comprehension itself. (For fun, Google images for the search “retinal pathway to the visual cortex” to gain an appreciation of the complexity.)

All of this has ramifications with respect to what manuscript layout we offer peer reviewers. Simply put, full-page lines of text (whether at 1.15 or 1.5 line spacing) are hard to read and interpret, and are even harder to read when the text is fully justified, without imagery inserted on the page. Would we be rewarded with better, more intuitive reviews if we made it easier for peer reviewers to read and comprehend what is presented, by optimizing the format of the content provided to them? Perhaps this is a hypothesis worthy of testing and reporting upon at the next Peer Review Congress.

A newspaper or journal article can get away with fully justified text because the line length is relatively short. Ideally, you would want to allow the eye to manage paragraph patterning by offering a jagged right edge and a uniform left edge – try reading a full page of center-justified text, or text that has a left curvature around an image; it is very difficult, and both comprehension and enjoyment of reading are reduced. (For the graphic designers who love to wrap text around images, please take note; it may look cool to your boss, but you are not helping the reader!)

Another interesting phenomenon that I was introduced to, by Heather Tierney (American Chemical Society), at the recent International Society of Managing and Technical Editors (ISMTE) meeting, is the McGurk effect. Here is a short video (below) demonstrating it (there are several versions of it on the Internet; some with further explanation). Close your eyes when playing the video for the first time. Listen carefully to what you hear. Then watch it again with your eyes open. No matter how many times you experience the effect, the brain interprets the same thing. Have fun testing it with colleagues!

First, let them listen to the video without seeing it. They typically will say that they hear “ba ba, ba ba, ba ba.”

Second, say that you are now going to play another video (but play the same one), and ask them to look at it closely and tell you what they hear. Upon watching, they typically will say that they hear “da da, da da, da da,” despite the fact that the actor is actually mouthing “ga ga, ga ga, ga ga,” with an audio voice-over of “ba ba, ba ba, ba ba.” They will, for sure, have a puzzled look on their face, when you tell them that you just played the same video twice.

In this version of the McGurk effect, the human brain processes the confusion between what is heard and what it is trained to see and interpret as mouth movements, and offers the language perception of “da da, da da, da da” as the input.

The take-home message is that the brain is always interpreting what it sees and hears. In addition to the rules it has learned throughout life, it is constantly assessing external cues, patterns, or in the case of text, what I have referred to as anchors. You can adjust these anchors somewhat, and the brain can compensate to a certain degree. However, if the adjustments start to distract or challenge perception, such that there are conflicts in the temporal or spatial processing of neuronal activity, or extra effort is required for interpretation at the neuronal level, strange things happen, and comprehension is ultimately affected.

For typesetting and print publishing, there are time-tested norms. These norms have evolved through a process of natural selection, with the overall selective pressure being comprehension. If we wish to optimize publishing workflows, we need to think in these terms, and always assess how we can increase comprehension at every point in the process.

Parameters of print publication, communication, and language processing are never boring. Yet, I fear that we have forgotten about how fascinating and complex they truly are.

As for other aspects of publishing, don’t get me started on the amazing chemistry and physics associated with the printing press, and the laying down of traditional wet-ink at high speed (or even the more recent toner digital printing process) – it boggles the mind! Perhaps somebody else can take this one on!

Discussion

3 Thoughts on "Guest Post: Reflections on Text and Language Perception, and the Ramifications for Publishing Workflows"

Thanks Don, great post, fascinating… It also prompts me to think of the discussion around the special fonts designed to help people with dyslexia, which seem to get mixed reviews. Perhaps more actual research is needed on the items outlined in the post here, or needs to be brought forward to us.

While a 2-column format may be good for printed matter, I find it much worse when reading online. If I zoom out so I can see the entire page, the fonts are often too small for me to read. If I zoom in so I can see the letters clearly and read them, I have to keep moving the page down and then up to read the 2 columns. That’s one of the reasons that web-based articles tend to be in one column.

There are several magazines I receive online that haven’t realized this when sending their material out as a link from email. I find their material unreadable online, since they have just put their 2-column format online with a reader. Even if they use one column, often their reader goes sideways like a book, so one has to go down and up, instead of a smooth movement of the eye always going down.

The ergonomics of the printed page versus electronic display are very different. Part of the reason is the size and orientation of the viewing screen, but part of the reason may also be backlighting and screen refresh frequency — which may not be consciously perceived, but may be perceived at the level of neuronal processing, hence affecting comprehension (as per the comments in the article related to font and background). A screen refresh frequency of 60 Hz, 75 Hz, or even 120 Hz is nothing compared to the speed of synaptic firing and neuronal conduction of neurons in the brain. It is for all of these reasons that usability testing is always suggested to optimize page design — what works in print does not necessarily work in electronic formats. Yet, electronic formats allow us the ability to dynamically display information that we cannot achieve in print. We need to strategically leverage these advantages.

Taking it back to scholarly publishing, many people, including myself, have argued the need for usability testing to determine how Instructions for Authors should be displayed online. We all know that such instructions are hardly ever read by the authors. Yet, it is critically important to communicate this information effectively to the authors, to help improve the quality of manuscripts received (an author behavioral modification strategy), and to minimize the effort required by the editorial team at every step of the publication process (an editorial efficiency and cost-saving strategy).

In essence, the Instructions for Authors should be an articulation of “author expectations” by the journal (including publication ethics). They should also be the face of “transparency” by the journal with respect to its own processes (mechanisms and decision-making related to manuscript plagiarism checks, editorial triage, desk rejection, peer review processes, cost and timing of publication, and other such processes). Such disclosure is not only important for the brand of the journal (a differentiator and proxy for quality, versus other journals which offer no such disclosures), it should be meant as an up-front dialog between the journal and an author who is thinking of submitting to the journal, essentially stating “if you wish to submit to our journal, this is what we expect you to do, and this is what you should expect from us regarding how your manuscript is going to be handled.”

Given the importance of this dialog for the success of a journal, why is it that more effort is not taken to ensure effective ergonomic design, usability testing, and communication of the Instructions for Authors? Perhaps, as an industry, this is one of the first places we should begin to apply “communication to ensure comprehension and transparency” strategies!
