
My purpose here is to briefly explain a taxonomy of confusions that I developed years ago but never published. The original focus was rule-making documents such as laws, regulations, standards, and instructions, but the taxonomy works for most expository writing. Since many of our readers are authors and editors, I thought they might find it useful; clear communication is, after all, one of the major goals of STM publishing. It may also apply to people, but more on that later.

The taxonomy of confusions is in the form of a matrix, the use of which I call “coherence analysis” (see figure). There are six columns in the matrix, each denoting a core feature of the writing. These core features are the concepts, rules, procedures, text, structure, and logic, each of which I will briefly explain below. There are also 21 rows, giving a total of 126 combinations. Each row denotes a type of confusion-causing factor, one which may occur with any of the core features. Most of these factors are relatively self-explanatory.

Coherence Matrix, 2013

The utility of the coherence analysis matrix is its precision. That is, rather than just saying that a document is confusing, one can point out a number of specific reasons out of many possibilities. In typical cases, there are only a few specific major confusion-causing factors for any given document. Thus, coherence analysis is a diagnostic system. Note too that confusion here simply means the potential for misunderstanding; I am not doing psychology.
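The diagnostic idea above can be sketched in code: treat the matrix as 21 factor rows crossed with 6 feature columns, and a diagnosis as a small set of cells. This is only an illustrative sketch; the article names just three of the 21 factors (#4, #19, #20), so the factor list here is a stand-in for the full matrix, and the sample diagnosis is invented.

```python
# The six core features (columns) named in the article.
FEATURES = ["concepts", "rules", "procedures", "text", "structure", "logic"]

# Only three of the 21 confusion-causing factors (rows) are named in the
# article; the remaining rows would come from the full matrix.
FACTORS = {
    4: "lack of balance",
    19: "opaque cross references",
    20: "matrix problem",
}

# A diagnosis is a small set of (factor, feature) cells out of the
# 21 x 6 = 126 possible combinations. This one is a made-up example.
diagnosis = {(4, "structure"), (19, "text"), (20, "logic")}

for factor_id, feature in sorted(diagnosis):
    print(f"#{factor_id} {FACTORS[factor_id]} affecting the {feature} feature")

print(21 * len(FEATURES))  # 126 possible cells in the full matrix
```

The point of the structure is the one made in the text: a diagnosis names a handful of specific cells, not a vague verdict of "confusing."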

The six core features are explained as follows; each is intended to be relatively simple. The concepts feature refers to the meanings of the actual words and phrases used. Every expository document features some specific central concepts. These determine what the document is about. Moreover, in rule-making and in STM, one is often coining new language and defining new terms, hence creating new concepts.

The rules feature refers to the rules being articulated in the document. In rule-making, these are always central, but in other expository cases, they might not be. In STM, there are often rules involved, such as scientific laws, principles, and cause-effect relations — or nature’s rules, as it were. The procedures feature likewise refers to procedures that are articulated in the document, which is common in rule-making. In STM, specifying one’s procedures is often central as well.

The text feature refers to the actual text — the words and sentences, apart from those aspects captured by other features. The structure feature refers to the arrangement of the text, including things like paragraphs, sections, and chapters, as well as the use of headings, titles, abstracts, figures, and footnotes.

The logic feature refers to the reasoning that occurs in the document. In rule-making, there is often a reasoned explanation of how the rules do what they are intended to do. In STM, there is often a lot of reasoning, especially in drawing conclusions. In other expository writing, there may be just a statement of facts with little reasoning provided.

As for the 21 potential confusion-causing factors, most are pretty self-evident but some are rather technical. Lack of balance (#4) refers to giving a lot of attention to one topic but little to another that is equally important. Opaque cross references (#19) refers to including references in sentences such that one cannot know the meaning of the sentence without knowing what the references say. A lot of legislation is written this way, making it virtually unintelligible. The matrix problem (#20) refers to simple-looking sentences which actually create large, multi-dimensional, combinatorial complexities.
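The matrix problem (#20) is combinatorial, so a tiny enumeration makes it concrete. The sentence below is a hypothetical example of my own, not one from the article: a short rule whose "each ... shall ... each ..." phrasing quietly spans a three-dimensional space of cases.

```python
from itertools import product

# Hypothetical rule: "Each manager, engineer, and clerk shall file,
# review, or archive each report, memo, and plan, as appropriate."
actors = ["manager", "engineer", "clerk"]
actions = ["file", "review", "archive"]
documents = ["report", "memo", "plan"]

# Enumerate every distinct case the one sentence actually covers.
cases = list(product(actors, actions, documents))
print(len(cases))  # 3 * 3 * 3 = 27 cases hidden in one simple-looking sentence
```

Each added list multiplies the case count, which is why such sentences read simply but are hard to apply correctly.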

So far, we have basically been talking about the science of readability. In closing, I would like to offer a conjecture about human cognition, one I have only occasionally approached. The idea is that each of us has two traits, which I call coherence profiles. That is, each of us tends to confuse others in certain specific ways (our output profile), and each of us tends to be readily confused in certain specific ways (our input profile).

For example, some people frequently use examples to state general principles, while others are confused by this practice. For a given person, the two profiles will probably be quite different, because the output profile consists of habits one knows all too well, while the input profile consists of weaknesses. That is, one is not likely to be confused by patterns of speaking and writing that one uses frequently.

The interesting case is where someone who is especially confusing in a specific way tries to communicate with someone who is especially confused in that same way. I think this is the all-too-familiar case where someone simply cannot understand someone else. In the same way, some people may find certain textbooks hard to understand because of a mismatch of coherence profiles. I suggest using two different textbooks when possible because it’s unlikely that both will have the same coherence profile. Having a teacher that one cannot understand is a deeper problem.

Coherence analysis is not about nitpicking sentences or trying to find as many confusions as possible. It is about seeing the personality of the document — the underlying tendency to introduce certain confusions; that is the coherence profile. As a rule of thumb, we used to look for just the five or six biggest confusions. This takes practice, but with time it gets easy. I now see specific confusions everywhere, which helps me understand others by heading off misunderstanding. The basic point is that understanding is not easy; this is why we have editors.

In any case, I hope some of our readers will try out the coherence analysis matrix. I will be happy to hear from them.



4 Thoughts on "A Taxonomy of Confusions"

A confusion matrix with 126 variables?!? For a second I wondered if I was inadvertently on The Onion’s site.

It is a bit boggling, but the complexity of human reasoning is my thing. See my little textbook:

A little history may help. My training is in concept analysis, the method of analytic philosophy developed by Russell, Wittgenstein, and others, which looks for confusions. I had been analyzing technical documents for several years — mostly policy, legislative, engineering, and regulatory documents — and was starting to see recurring patterns of confusion causation. Then I took a three-week horse packing trip in the Canadian Rockies with nothing to do all day but look at magnificent scenery and think, and this taxonomy of confusions came to me. I almost missed the post-ride party sketching it out. This is what happens when you mix philosophy with engineering and horses.
