A recent study by John J. Regazzi and Selenay Aytac, published in Learned Publishing, attempts to identify the attributes authors most closely associate with journal quality.

Now, I wish I could write flattering things about this study. Instead, I’ve been worried that I should just stifle myself lest I be accused of being a meanie. As you’ll see, reputation matters, and I don’t want the reputation of being mean. So, let me take you through this gradually so you can see that I’m not being mean, just dealing with a study that left me underwhelmed and a little frustrated.

The authors use a “triangulation” approach (three different research methods) to discern journal quality measures that matter to researchers. They admit they had constraints of “time, cost, and sample size.”

The authors sampled a total of 13 people (7 from computer science, 6 from the health sciences) from their own institution (Long Island University). This was a convenience sample, as the research euphemism goes. These 13 people were surveyed, then took part in a focus group, and then, if they hadn’t had enough, they were interviewed. Only 5 of the 13 stayed on for the interviews.

So, 13 people were asked to answer written questions, then asked similar questions in a focus group, and then ~1/3 of them stayed on for interviews.

The authors list 16 quality characteristics they claim to have found in the literature. Looking over this list, you start to see how conflated the set is. For instance, impact factor, rejection rate, editorial board, and society publisher all skew toward the most general characteristic in the set (reputation), since each is really just a specific proxy for it. So, in fact, you have one summarizing characteristic, reputation, that encompasses the others. It’s bound to score highest.

Then, on what appears to me to be a small and localized data set (thrice-processed opinions — “triangulation” turns out to mean asking the same people something three times), I find the authors rolling out 5 tables, 2 figures, and 7.5 pages of discussion. On top of this, they report p-values and invoke Cronbach’s alpha. I thought this was all a bit much, especially in a qualitative study.
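For readers who haven’t run the numbers: Cronbach’s alpha is just a ratio of item variances to total-score variance, and with 13 respondents it rests on very little data. Here’s a minimal sketch, with made-up ratings standing in for the study’s actual data, showing everything the statistic involves:

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    k = ratings.shape[1]                         # number of items
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Made-up data: 13 respondents (matching the study's sample size) rating
# 5 quality attributes on a 1-5 scale. Not the authors' actual numbers.
rng = np.random.default_rng(42)
ratings = rng.integers(1, 6, size=(13, 5)).astype(float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

Rerun that with a different seed and the value jumps around considerably, which is the point: with 13 respondents, the statistical apparatus outstrips the data underneath it.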

Not everything we study has to be subjected to the scientific method, or some imitation of it. Sometimes, good old common sense works just fine.

And that’s what ultimately comes from this small, local study of faculty in computer sciences and health — researchers want to publish in journals with great reputations that get their papers out quickly and reach a lot of readers. Computer science people don’t care as much about online submission tools, but health researchers want them.

So, back to the theme of reputation. Why did I read this article? It was forwarded to me in an email, and when I saw it was from Learned Publishing, I figured it had to be good. Well, even they don’t bat 1.000, I guess. The person who forwarded it to me has a great reputation, but this particular article wasn’t to my liking. Both reputations are well-preserved. I still think Learned Publishing is a great journal, and that my colleague/friend has great taste.

Reputation is transmissible. Phil Davis mentions the concept of “brand coat-tails” in a recent post, showing how PLoS has used two good journals’ reputations to extend acceptance of a repository journal.

Because reputation is transmissible, researchers want to publish with high-reputation journals. They want some of that reputation for themselves. It’s a double-edged concept — reputation becomes akin to a standard.

Phil Davis (once again) suggests something along these lines in a comment on an earlier post on this blog. Phil notes a 1978 study that showed how the act of citation is akin to communal signaling. I would stretch this a little more by stating that reputation is a different form and culmination of communal signaling. By publishing in Learned Publishing, these authors signaled their audience and research interests. And by publishing this paper, Learned Publishing said something about a topic it has a consistent interest in exploring. So, despite this being a small study with a convenience sample and overheated interpretation, it fits for both the researchers and the journal.

It sends the right communal signals, reputation-wise.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

5 Thoughts on "Reputation Matters"

Is Kent Anderson a big meanie?

Yes, but not for panning a weak paper published in Learned Publishing.

As someone who reviews for several journals in library science, I can say the Regazzi and Aytac paper is quite typical; in fact, it is probably better than your average paper.

What is unusual about Kent blogging about the Regazzi and Aytac paper is that negative (or critical) citations are unusual in the sciences. Papers that are considered unimportant will simply go uncited [1, 2].

A. J. Meadows writes that “the scientific community does not normally go out of its way to refute incorrect results […] If incorrect results stand in the way of the future development of a subject, or if they contradict work in which someone else has a vested interest, then it may become necessary to launch a frontal attack […] Otherwise, it generally takes less time and energy to bypass erroneous material, and simply allow it to fade into obscurity.” [3]

As the old saying goes, “there is no such thing as bad publicity.” By drawing attention to this article, you did what all academics (and media stars) crave — attention.

References
[1] Cole, Jonathan R., and Stephen Cole. “Measuring the Quality of Sociological Research.” American Sociologist 6, no. 1 (1971): 23-29.
[2] Cole, Jonathan R., and Stephen Cole. Social Stratification in Science. Chicago: University of Chicago Press, 1973.
[3] Meadows, A. J. Communication in Science. London: Butterworths, 1974.

Phil brings up a good point, and guesses correctly that part of my internal debate about covering this study was that it would draw attention to it. However, unlike academic communication of old, with citations at the core, blogging allows for more discursive writing. So, part of what led me to write about the study was that it hit on a topic I was seeing elsewhere in various ways. It provided an excuse to tackle the broader topic.

Also, it let me highlight a comment Phil left on another post, a comment that I thought deserved a bit more prominence. We’re still trying to figure out how to make comments more visible on this blog. So far, we have more comments than posts, probably a good barometer that the blog is fulfilling its purpose to some degree. We just need to give those comments the prominence they deserve.

I would say that once you have a responsive community of commenters on your posts, you need an “editorial page”: i.e., a page that makes the letters the focus, rather than the stories the letters comment on (of course, each hyperlinks to the other).

When we implemented “e-letters” in the BMJ — back in 1998, as I recall! — we knew there would be lots of letters. More letters than articles, we designers were told!

So we tried to implement something like a “letters to the editor” page, with an index for quick scanning:
http://www.bmj.com/cgi/eletters?lookup=by_date&days=2
(Of course, many things could be done with CSS and layers now…)

I’ve seen some blog software with sidebars — on the individual article pages — that show recent comments. Perhaps WordPress has that in some style that you can try out. The reason it is important on the article pages (it is already on the blog’s home page) is that some people (myself included) go to individual blog posts on email referrals and so never see the home page. Just like in journals…
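For anyone who wants to experiment, here is a minimal sketch of the idea, assuming a WordPress site that exposes the standard REST API; the base URL is a placeholder, not this blog’s actual address:

```python
import requests

# Placeholder address; swap in the blog's real domain.
BASE_URL = "https://example.com/wp-json/wp/v2"

# Fetch the five most recent comments, newest first.
response = requests.get(
    f"{BASE_URL}/comments",
    params={"per_page": 5, "orderby": "date", "order": "desc"},
    timeout=10,
)
response.raise_for_status()

for comment in response.json():
    # Each comment records the post it belongs to, so a sidebar can link
    # back to the article, just as a letters page links to its stories.
    print(comment["date"], comment["author_name"], "-> post", comment["post"])
```

The same data could feed a sidebar widget on each article page, so visitors arriving by email referral see the conversation without ever touching the home page.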

Glad that there are lots of comments, and I hope to read more of them!

As usual, Kent was more thorough than I. I was “the friend”, and glanced at the article and thought it was interesting, though small.

I’m also in a weird position, because as the chair of SSP’s Publications & Research committee I want to support this blog by bringing stuff to the writers’ attention, but I don’t want to appear to be interfering in editorial matters by shoving stuff down our editor’s throat.

I don’t think it’s unusual in our field to see papers published based on case studies (or anecdotal data). We present this kind of material at our conferences as well.

It isn’t the same type of research we expect from academic journals. But we can still learn from it if we read critically, which Kent is forcing us to do.

John has hit on a classic usability issue we’ve all experienced: something exists, but users don’t notice it. We’ve had a Comments widget on this blog since the beginning, but it blends in and isn’t noticed.

The challenge remains. Having a growing list of comments is a good problem to have, and we get great comments on this blog.
