
At industry conferences, seminars, and board meetings around the world, the digital revolution in scholarly communications dominates the conversation. From open access journals to new approaches to peer review, from altmetrics to plagiarism-detecting software, our community has seen a decade or more of rapid change, with no end in sight. You might think that all these changes would affect perceptions of trustworthiness and authority in scholarly communications, but a recent study by the University of Tennessee and the CIBER Research Group* found that – with a few exceptions – that is not the case. Or at least not yet.

The study concludes instead that:

The results … of this long, large and robust investigation confirms what some commentators had suspected, but had little in the way of hard evidence to support their suspicions that the idea, methods and activities associated with trustworthiness in the scholarly environment have not changed fundamentally. In fact, arguably, the main change has been a reinforcement of the established norms in the face of the rapid expansion in scholarly communications and the digital information tsunami that it unleashed. Instead of looking to the future for a lifeboat, researchers have looked to the past and gripped established practices … even more firmly.

One of the study’s main findings is that – perhaps somewhat surprisingly – peer reviewed journals are still the most trusted and preferred vehicle for scholarly communication. If anything, the authors suggest that trust in peer review has increased, though there are clear indications that this is not the case for everyone. So, for example, while life scientists see peer review as critical, young scholars (aged 30 and under) are more likely to also trust other, less traditional forms of scholarly communication, such as social media. They are much more likely to believe that checking to see how many times an article is downloaded and taking account of colleagues’ opinions are important when deciding what they trust as readers, whereas older researchers overwhelmingly see peer review as the most important factor.

Interestingly, a perceived lack of peer review was one of the main reasons the researchers surveyed gave for not wanting to publish in OA journals. When this perception was corrected in the focus groups, the participants were more willing to trust OA journals, and “distrust also diminished considerably (but did not quite evaporate) in the case of OA journals published by an established publisher.”

Other misunderstandings about OA exposed by the study include a conflation of paying for publication with a lowering of standards. Many respondents believed that:

OA journals were the sole products of a breed of new, not to be trusted publishers, interested in money above all else, when in fact many traditional publishers offer OA journals. This was almost entirely due to experience of the so-called “predatory” journals. Many of those interviewed or engaged in focus groups protested against the constant flow of emails asking for submissions, or inviting the recipient to join an editorial board.

When it comes to deciding which articles they read and cite, however, researchers treat OA journals exactly the same as any other journal they are not familiar with and, here again, peer review is key. If a journal is seen as having a rigorous peer review process, irrespective of publishing model, then it is to be trusted.

But researchers of all ages and disciplines also rely heavily on their personal networks when evaluating which sources to read and cite. For example, three of the top five reasons given for citing a paper were related to personal knowledge (knowing the author, knowing the journal or conference proceedings, and knowing the group that had carried out the research). Older researchers have the upper hand here, of course, because they typically have a wider network of peers and more experience of the main journals and other publishing outlets in their fields. Young researchers may be using social media as a way of expanding and fast-tracking their personal networks and knowledge, as shown by the academic benefits identified by the focus group of early career researchers:

social media a) helped them develop a personal network; b) facilitated collaboration among researchers; c) speeded-up finding fellow researchers to work with; d) useful for keeping in touch with what is going on in the field; e) you could follow authors you were interested in; f) it was easy to find someone with a particular point of view

Abstracts also play a vital role in deciding which articles to read and cite – a trustworthy article is typically associated with a well-written abstract, making it a valuable time-saving selection tool in its own right.

When it comes to deciding where to publish there was near unanimous agreement by participants across all disciplines and ages about which factors are most important – relevance, peer review, being published by a traditional publisher, and being highly cited (in that order). No surprises there. But the fact that 56% of respondents said that they were heavily or somewhat influenced by funder or institutional policy directives or mandates – for example, to publish OA (over two thirds of the 56%) or in high impact factor journals – gives some cause for concern.

Researchers thought that this had a negative impact on creativity and led to a distortion in where articles really should be placed. Early career researchers were felt to be particularly disadvantaged, because the pressure has worsened over the years. One focus group participant said: “It is a shame they could not choose a journal in which to publish on a fitness for purpose basis; now it was all about IF scores.”

Some of the other concerns highlighted in the report include:

  • the increase in poor and mediocre publications – both at the article and journal level – although interestingly most researchers believe that overall the quality of research has improved, which in turn allows them to live with the increasing number of technically competent, but limited interest, papers
  • a perceived increase in unethical practices – mainly seen as an issue for young researchers and social scientists, though there were concerns from all groups of researchers about the ethics of paying to publish and the quality of peer review, “a real trust touchstone for open access publications,” as the authors note
  • the inclusion of data in publications, and the need for those data to be peer reviewed
  • a general lack of awareness or understanding of – and, therefore, trust in – altmetrics, which researchers largely saw as popularity indicators rather than anything more substantive, although young researchers and those in the developing world were more likely to trust them

Although a minority of researchers reported that they have become much less trusting over the past decade, for example, in terms of being able to associate high impact journals with good science, overall, the report finds that:

Researchers have moved from a print-based system to a digital system, but it has not significantly changed the way they decide what to trust. The digital transition has not led to a digital transformation. Traditional peer review and the journal still hold sway. Measures of establishing trust and authority do not seem to have changed … however, researchers have become more skeptical about a source’s trustworthiness and have developed an increased confidence in their own judgment.

It’s hard to believe that the next generation of researchers will continue to rely on the same old tools to determine which articles and journals to trust in the future. Social media, in particular, is increasingly being used by some groups – especially younger researchers – and even those who haven’t yet embraced it expect it to be a big part of the future, albeit “slowly, selectively, patchily, but surely, as the young and early career researchers move up the academic ladder.” A sign, perhaps, that the long-anticipated tsunami is finally on its way…

One thing is certain – all of us, whether as publishers, societies, librarians, vendors, or researchers, have an opportunity, if not an obligation, to continue to create, develop, and encourage adoption of the new tools that the scholarly community will need in the future.

*Disclaimer: Wiley was one of several scholarly publishers that participated in this study.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

42 Thoughts on "In (Digital) Scholarly Communications We Trust?"

This confirms all the research I’ve seen with medical researchers, and then some. The role of social media is a funny one, from what I’ve been able to see. Younger researchers aren’t as aware of its limitations, believe its novelty makes it inherently more interesting than the tried and true, or, as this report states, are using it to make their own way in a world that is more tied together by virtual relationships than in the past.

However, as younger researchers advance in their careers and find the limitations of social media, I’ve seen these attitudes change. Out of necessity, profligate social networking narrows to parochial personal networking, as time pressures increase and career trajectories define themselves. Also, social media becomes a tool, but one of diminishing importance and even one that is a threat because it can cause problems for established academics.

It’s also very important to note that “predatory” OA journals are something everyone — OA publishers, non-OA publishers — should align to prevent. They are clearly making people hesitate to adopt OA, and they are a blemish on our profession. Can someone serious in the OA community start one of those great petitions to banish these?

Since the dawn of MySpace, we’ve been hearing that once the “Digital Natives” (or more recently, the “Millennials”) enter the workplace, everything is going to change because they grew up using these new tools. But we haven’t yet seen this happen. As you note, people of different ages have different needs, and as they age, they adopt different tools and workflows to meet those needs. For example, most teenagers consider email something that old people do. Yet by the time they’re 25 and working, they all have email addresses and use them regularly.

David W., below, may be right and this may simply be a question of short term/long term, and that things will eventually change in a few more decades. The question we have to ask though, is whether the mere availability of a tool makes it appropriate for a task, or inevitable for that task. Researchers have long built social networks, but do so slowly and with a greater depth than the shallow networks offered by online social media tools. Over one’s career, you build a network of colleagues, former labmates, advisors, collaborators, classmates, people in your department or those you took specialized courses with. For a trust network, these sorts of deeper connections are likely more meaningful than knowing that person X leaves funny comments on your blog.

For the disenfranchised, social media can indeed provide some avenues to fill needs which aren’t available through the traditional means mentioned above, and for that it does provide value. If you’re at an institution where no one else does similar work, or you are geographically isolated, shallow connections are better than no connections.

But as we’ve often said on this blog, “Culture trumps technology.” It’s rare for the entry-level workers to drive bottom-up cultural changes in a profession. Silicon Valley may not be a good comparison for what we’re looking at here, given the fair amount of independence a startup has versus a first year graduate student working within a long established academic infrastructure.

In my experience, younger researchers have varying levels of awareness of social media’s limitations. Some have already experienced bullying and trolling during, for example, high school.

It’s now fairly standard for career advisers to suggest proactively searching on google/facebook/twitter to see what is visible to potential employers and collaborators. Some learn the hard way that social media has a long and increasingly permanent memory – they’re shocked to find things they thought were private (perhaps only shared with friends and/or family, or tagged by others) are easily accessible (and archived!) in the public domain.

It’s also becoming common for organisations to have social media policies. Some include limits on what can be said by employees (versus what can be said by a private individual). Blending one’s personal and public persona (e.g. by using a single handle) can present challenges – not to mention blur work/life balance.

Fascinating. Given the age differentials it sounds like the technology driven social revolution is proceeding with due deliberate speed. The standard rule of thumb for this sort of revolution is 30 years. The deeper it is the longer it takes. We are, after all, talking about millions of people and many thousands of firms and institutions. It is more like tectonics than tsunami. The forces are slow but long lived and relentless.

30 years! The pace of technology change is much more rapid. It’s hard to see how any new technology could really take hold when it may be outdated in only a few years.

It sounds like you are confusing products with technologies, which do not become outdated in a few years. For example, the Web is 20 years old and its effects are still proceeding rapidly. The Internet is over 40 years old. The car took over 80 years to stabilize.

The same scale issues apply to scientific revolutions, which we tend to mistakenly equate with their time of origin, as opposed to when they finally take hold. For example it took well over 100 years for the Copernican model to be widely adopted by astronomers.

When large numbers of people have to change the way they do things it takes a long time, and usually for good reasons because change is expensive.

I’ve often said my journal’s biggest asset is its connection to an established (50 years!) and reputable professional society. We have not seen any large-scale desertion by our authors. Yes, we’ve had to provide modern manuscript handling and wider distribution, along with optional OA for those who require it, but our authors know and trust us. No business has yet found a good substitute for understanding your customers, earning their trust, and serving their needs.

It’s the definition of what’s trustworthy by promotion and tenure committees that makes certain criteria so persistent. Right now, few who serve on these committees (senior scholars for the most part) perceive a viable alternative to peer review, publisher reputation, and citation stats. These factors also weigh heavily among those who approve grant applications throughout their lifespans. This is what “counts” in the system that rewards and punishes the researcher.
However, this could change very rapidly if a better system of assessing scholarly work should emerge. We think that evolution is slow and it usually is but then there’s this nagging idea of punctuated equilibrium.

In the launch issue of JOPM (Shrager, 2009) I noted the important and almost entirely overlooked role that peer review has as an educational tool: “I feel that these discussions miss what seems to me to be the most important value of peer review: Its role in the education of scientists, and specifically, of highly efficient, precisely targeted, and secure narrow band communication among scientists.” My more general point, though, as indicated in the title of that comment, was that peer review is not sold correctly, and if it were, I believe that many young scientists would come to love, rather than to fear it: “Nowhere else, once scientists leave their final post doc, is there a similar opportunity for direct continuing education.” And: “Peer review is only in part a filtering system — and to my mind that is a relatively small part of its value. It is, in addition and more importantly, a highly efficient and secure system of targeted peer education. To ignore this function in changing the way peer review works is, to my mind, to endanger one of the pillars of scientific communication.” Somewhere someone has suggested that anonymous(*) peer review be made available pre-submission. Although I don’t know how this could be done without flooding people’s time more than it already is, I think that something like that would be very valuable. But actually just separating it from its filtering role, and perhaps changing the name to something like “anonymous mentoring,” would help greatly.

Shrager, J. (2009-11-06). Peer Review: A Love Letter. Comment in JOPM, 1(1). http://ojs.jopm.org/index.php/jpm/comment/view/12/25/10

(*) I feel that the anonymity is important, as others have said elsewhere many times, to permit reviewers to “spank” the reviewee to the extent required.

I enjoyed this post very much. Some questions still remain for me:
1. The peer review process still seems highly variable. It is generally believed that OA journals establish a lower bar for acceptance than other journals.
2. And given that obviously fraudulent articles have been published, the actual peer review process may be quite different from the published peer review process described on a journal’s website.
3. Is peer review more important than indexing? And how is indexing updated? I am aware of journals whose production cycles vary between months and years, or that show obvious discrepancies between what they do and what they say they do. How is this tracked?

You raise several important points. Let me pick what is, IMHO, the most important: The confusion between peer review and indexing (where here “indexing” = “listed in pubmed” — I realize that this only applies to bioscience, but it’ll do for the present example).

Most bioscientists equate PubMed listing with having been stringently peer reviewed, but even for “above board” publishers, this is not necessarily the case. Consider, for example, PLoS Currents. Although these are indexed in PubMed, PLoS Currents: Influenza, as recently as 2010-03-06 – well after many articles had appeared there – stated that it is “moderated by an expert group of influenza researchers, but in the interest of timeliness, does not undergo in-depth peer review.” [1] PLoS has since backed away from this model (or, giving it the best possible spin, clarified it), stating now that “The submissions are reviewed by a group of leading researchers in the field – the Board of Reviewers. The reviewers make a rapid determination as to whether a contribution is intelligible, relevant, ethical and scientifically credible, but will otherwise not impose restrictions on the nature, format or content of the contributions. Those submissions deemed appropriate are posted immediately at PLOS Currents: Outbreaks and publicly archived at PubMed Central.” [2] But there is at least an existence proof that PubMed will allow papers that “do not undergo in-depth peer review.”

Bottom line is: You can’t trust indexing. Period.

[1] http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0010782#pone.0010782-httpknol1

[2] http://currents.plos.org/outbreaks/aims-scope/ accessed 20140507

Good point. In the BioMedical world, inclusion for indexing in MedLine (and hence in PubMed search results) used to set the gold standard. But in recent years those search results have come to include everything that’s accepted into PubMed Central, which has vastly lower standards than MedLine, as its goal is inclusiveness rather than curation. So what used to be a key indicator of quality no longer has the same value.

But in recent years those search results now include everything that’s accepted into PubMed Central, which has vastly lower standards than MedLine

It is quite straightforward to filter PubMed search results by Medline indexing. PLOS Currents, to take the previous example, has 317 PubMed results and zero Medline ones.
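
For anyone who wants to reproduce that kind of check, below is a minimal sketch against NCBI’s public E-utilities esearch interface. The journal query string is an assumption about how PubMed records the title, and the counts will have drifted since this exchange:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_count(term):
        """Return the number of PubMed records matching a search term."""
        query = urlencode({"db": "pubmed", "term": term, "retmode": "json"})
        with urlopen(f"{ESEARCH}?{query}") as resp:
            return int(json.load(resp)["esearchresult"]["count"])

    # Assumed journal-name query; adjust to however PubMed records the title.
    journal = '"PLoS currents"[Journal]'
    print("All of PubMed:", pubmed_count(journal))
    print("Medline subset:", pubmed_count(journal + " AND medline[sb]"))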

Given that almost no one seems to know the difference between MedLine and PubMed, do you think this is a common practice?

I’m going to try a nested blockquote here; if the CSS doesn’t support them well, I hope it won’t be too confusing.

Given that almost no one seems to know the difference between MedLine and PubMed, do you think this is a common practice?

No: http://kraftylibrarian.com/?p=2093

“Their users (doctors and researchers) do not see the distinction. To them PubMed is MEDLINE.”

This is nothing but an assertion based on a single anecdote of someone who did know the difference. In other words, Ms. Kraft is generalizing from her own lack of knowledge, which is further highlighted by the reference to searching by MeSH terms; if I were to hazard a guess, I’d say this is at least as uncommon as using the right tool for the job.

Anybody with an NCBI account has a very functional interface. Anybody who doesn’t seems unlikely to be a frequent user of PubMed, as merely being able to review recent activity, save searches, and file collections is awfully handy. If anything, it seems more straightforward to suggest that the availability of the tools isn’t adequately publicized than to leap to the conclusion that almost nobody knows the difference in the first place.

Also, for researchers, Medline is missing important non-medical scientific content.

With the Medline filter on, the search results are presented as two tabs: “All (# papers)” and “MEDLINE (# papers).” It’s one click to switch back and forth.

This article may help explain this incredibly common confusion:
http://scholarlykitchen.sspnet.org/2013/02/14/extension-and-conflation-how-the-nlms-confusing-brands-have-us-all-mixed-up/

This post includes an example of someone from BMJ making this common mistake:
http://scholarlykitchen.sspnet.org/2013/10/08/how-the-nlm-justifies-linking-to-pubmed-central-versions-directly-from-pubmed-search-results-lists/

To add to the anecdotes, I have yet, in 15 years of meeting with editorial boards, to meet one that knew the difference between MedLine, PubMed and PubMed Central. We have written quite a bit about PubMed on this blog. If you search for posts, take a look at the comments and notice how often someone substitutes one name for the other.

Indeed David C. A lot of the discussion here is assuming a great deal more knowledge of the details of publishing than researchers have, or need to have. The study that Alice is reporting on is about impressions, not facts.

I think that “quite straightforward” is a bit of an overstatement.

First of all, you have to know that there is even a difference, which I’ll bet many-to-most bioscientists don’t. Next, as far as I can tell, there’s no way to get to it by just googling for keywords such as SEARCH MEDLINE.

As an experiment, in case I was just missing something, I asked NLM’s help desk how to do this. Here’s what they said:


From: me
Subject: How can I search only medline?

I want to just search medline, not all of pubmed. can you tell me how to do that?


From: Them

There are 2 ways you can do this.

1. add AND medline[sb] to your search strategy
ie. horses AND medline[sb]

2. After performing a search such as horses
on the left side of the results screen Click on the Show Additional Filters link
and choose Journal Categories—- and click then on the Medline link to invoke the limit.

————

I also looked myself up, and then filtered the 328 results, ending up with 305. I identified the missing listings, and they appear, to my eye, to be fairly random. I’m sure that they aren’t random at all, but if you don’t know that this is what you really want to be doing, and you’re afraid of missing something in the literature, you wouldn’t want to be doing this. In the end, you have to go through and evaluate each paper (or at least each abstract) anyway for its applicability, so I’m not sure that bothering to search medline gives you much power (and NLM probably realizes this, which is why it’s buried in a dark corner of the UI).
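
That by-eye comparison can also be scripted rather than done by hand. Here is a sketch, again assuming the public E-utilities esearch endpoint; the author query is a placeholder stand-in for “looking myself up,” and retmax must be at least the expected number of hits:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_ids(term, retmax=1000):
        """Return the set of PMIDs matching a PubMed search term."""
        query = urlencode({"db": "pubmed", "term": term,
                           "retmax": retmax, "retmode": "json"})
        with urlopen(f"{ESEARCH}?{query}") as resp:
            return set(json.load(resp)["esearchresult"]["idlist"])

    author = "Shrager J[Author]"  # placeholder author query
    missing = pubmed_ids(author) - pubmed_ids(author + " AND medline[sb]")
    print("In PubMed but not in the Medline subset:", sorted(missing))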

I don’t know that categorizing it as “buried in a dark corner” is all that appropriate. Exactly the same sort of knowledge is necessary to effectively use WorldCat, for example. If the criterion is that not knowing how to use a database means that the metadata “semantics” are effectively pointless for end users, then one might ask who they are intended for. In my experience, PubMed has a much better search facility than any provided directly by publishers.

To have ascertained the answer to the question that you E-mailed about, you could have gone to PubMed, clicked on “Advanced Search,” and then clicked on “Help.”

Whereas I agree that pubmed’s search is pretty good (and their advanced search is very good … esp. as I helped (very slightly) in its design, even though they didn’t implement my favorite enhancement, which would have made it ultra-terrific! 🙂), their help is terrible. I’m almost shocked that you think anyone would be able to figure this out with any reasonable degree of effort from what can only be described as their help blob. Anyway, I wanted to ask them because I wanted to know whether I was missing some more obvious way to do this, which is a question that the help blob can’t answer. Apparently, I’m not.

BTW, WTF is WorldCat that I’m expected to know in order to figure out pubmed??

BTW, WTF is WorldCat that I’m expected to know in order to figure out pubmed??

WorldCat is the interface to the OCLC’s union catalog. It uses a similar query syntax for those with specific searches in mind, which reflects the underlying tagging.

I did, coincidentally, consult the “help blob” for help with building a custom filter to separately bin Medline and non-Medline results, and yes, the answer is obscure (“pubmednotmedline[sb]”; why it’s not “NOT medline[sb]” is an open question). Given that the response came promptly, I don’t see denigrating the help desk as apropos.
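
For completeness, a self-contained sketch of that custom-filter approach, using the named pubmednotmedline[sb] subset and the help desk’s illustrative “horses” term:

    import json
    from urllib.parse import urlencode
    from urllib.request import urlopen

    ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    # Count records that are in PubMed but were never indexed for Medline.
    query = urlencode({"db": "pubmed",
                       "term": "horses AND pubmednotmedline[sb]",
                       "retmode": "json"})
    with urlopen(f"{ESEARCH}?{query}") as resp:
        count = json.load(resp)["esearchresult"]["count"]
    print("PubMed-not-Medline records for 'horses':", count)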

Thanks all for your comments. One of the key points for me is that, as a couple of you have noted, there is no viable alternative to peer review at present and yet there is currently no overall quality control of the peer review process. In particular I’m always somewhat shocked by the fact that although many young researchers are expected – and indeed want – to do peer review (often on behalf of their advisors), they typically receive little if any training. All the sadder since they’d arguably stand to benefit most from this – both as reviewers and reviewed.

Excellent point about training. Systems like ScholarOne Manuscripts usually generate a thank-you note for reviewers once a decision is made. It’s important for an editor to include a summary of the major factors in the decision, so the reviewers know how their work stacked up to others’.

I know of several research societies that have peer review training programs set up for their journals. Novice reviewers can sign up and be paired with an expert reviewer, often an editorial board member, and the two perform a peer review of a submitted paper together, then review the other reviewers’ responses.

This sort of training is often done by one’s mentor, but for those where this is unavailable, it’s something really useful that a research society can provide.

What a wonderful idea! It might require some work to implement, but would build up a loyal cadre of reviewers and authors.

Neither editors nor peer reviewers receive any training to speak of. It is all “by the seat of their pants”. Some are good, some are not so good.

I’m not sure that’s true. My graduate school mentor did quite a bit of training in peer review. Every time she had a paper to review, she would enlist a graduate student and as an exercise, they would do their own peer review on the paper. This would then be compared to the review that she did and we would discuss. Then when the reviews from the other reviewers came back, we would go back and see what we missed. I also spent many years in classes where careful reviews of published papers were the majority of the curriculum.

I’m sure that some people are very good about training and mentoring their students on peer review but, having spoken at and attended several meetings of young scientists organized by Sense About Science, typically only a couple of attendees out of the 40+ have ever had any formal peer review training. So I think there is still a lot of room for improvement.

Being able to read and interpret the literature is an important part of a graduate education. I do realize that some programs and mentors fall short on this vital training aspect, and that’s why I think it’s a useful area where research societies, particularly those with their own journals, can offer helpful training to their community. One you might want to look into is the American Headache Society who publish with your company, Wiley. They have a really interesting program for peer review training for their medical fellows.

It would appear that this study focused solely on journal publishing. I wonder why? Surely, the issue of trust is equally important in scholarly book publishing. Some changes in peer-review practices are under way in scholarly book publishing. Of course, citation stats play no role in this arena. Book reviews take their place.

Hi Sandy, I checked with the authors of the study and they confirmed that there was no mention of books or journals in the questions, just publications – so respondents (in the survey, focus groups, and interviews) self-selected journal articles as the dominant vehicle of trust in scholarly communications.

Oh dear, one hopes the vagueness of the questions does not render the results ambiguous. Then too, interviews and especially focus groups tend to be directed by the leader. Perhaps concept confusion is the message. There is certainly enough of it going around, which is characteristic of revolutions like this as established concepts go out of focus, as it were.

“When it comes to deciding where to publish there was near unanimous agreement … about which factors are most important – relevance, peer review, being published by a traditional publisher, and being highly cited (in that order).”

I have to say, I find that very hard to believe. I would guess that perceived journal prestige (which may or may not be as simple as impact factor) would be at the top of the list, rather than missing entirely!

For some reason I can’t reply to the comment inline, so this is out of band, and should be read as a reply to Mr. Ogon, who wrote:

“I did, coincidentally, consult the “help blob” for help with building a custom filter to separately bin Medline and non-Medline results, and yes, the answer is obscure (“pubmednotmedline[sb]”; why it’s not “NOT medline[sb]” is an open question). Given that the response came promptly, I don’t see denigrating the help desk as apropos.”

Just for the record, I wasn’t denigrating the help desk. If I was denigrating anyone (which was not my intent, of course, but merely to point out the obscurity of the method in both UI and help text) it would be the authors of the help blob, although maybe that’s who you meant to refer to by my denigration of them. (I sort of want the French grammar there as: “…my them denigration…” 🙂
