
[Note from Joe Esposito: Not long ago Jeffrey Beall took a swipe at the Scholarly Kitchen. The consternation of my fellow Chefs was evident in the discussion that followed: What’s he getting at? What motivates him? Why is he doing this? Rather than speculate, I thought it would be a good idea to allow Beall to speak in his own voice. The interview below was conducted via email. Beall reviewed all final questions and responses.]

Esposito:  What first drew your interest to open access (OA) publishing and caused you to study it?

Beall:  I first became interested in questionable journals and publishers in 2008, when, as an assistant professor on tenure track, I began to receive ungrammatical spam emails from fishy-looking gold open access publishers, publishers I had never heard of before. I used to print them out and keep the printouts in a blue folder. I eventually drew up a short list of the suspicious publishers (this was really before mega-journals had appeared) and quietly published the list on an old blog I had.

Esposito: At what point did you come up with the term “predatory” to describe the fishy-looking publishers?

Beall: In 2010. I first used the term in this article published in a journal called The Charleston Advisor.

Esposito: In that paper you write:

“These publishers are predatory because their mission is not to promote, preserve, and make available scholarship; instead, their mission is to exploit the author-pays, Open-Access model for their own profit.”

Your formulation seems to leave open the possibility of Gold open access publication that is not exploitative. Is that indeed your point of view?

Beall: Correct. In theory, there's nothing really wrong with the gold open access model, and there are numerous examples of it working well. While the model does have a built-in conflict of interest (more papers accepted leads to more revenue), it's the exploitation of the model for gratuitous profit that is of concern, and not so much the model itself. There are many hundreds of OA journals and publishers that are not on my lists.

Esposito: Could you provide some examples of Gold OA journals that subscribe to good principles for publishing? That is, what are some journals that are not predatory, in your view?

Beall: The particular niche I’ve carved out involves identifying predatory or otherwise low-quality or deceptive scholarly journals. Although I receive many requests to identify good or high-quality journals, I choose to leave this identification to others, especially those in the particular fields the journals represent.

Esposito: You have been criticized for supporting a blacklist instead of working toward a whitelist. Do you have any views of the relative merits of blacklists and whitelists?

Beall: I’ve had lots of conversations about the strengths and weaknesses of journal whitelists and blacklists, and every one has been interesting. Both approaches have their strengths and weaknesses. There are examples where whitelists have been shown to be monumental failures. For example, the Bohannon sting in Science two years ago found that 45% of a sample of publishers included in DOAJ accepted a bogus paper submitted for publication. I know that DOAJ has tried to make improvements, but in fact, in my opinion, it’s never really recovered from this telling, major failure.

Because you’re not an academic yourself, you may not realize or understand the amount of spam that researchers receive today. They are bombarded with spam emails from predatory publishers, many of whom are easily able to defeat spam filters. For those needing to eliminate questionable or low-quality journals or publishers from consideration, a blacklist has great value as a time-saving device. You can quickly check whether a journal’s publisher is on the list, and if it is, you can immediately remove it from consideration, saving valuable time. My lists are used by governments and universities and colleges around the world and are found especially valuable in developing countries, where predatory publishers especially target researchers.
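[A minimal illustration of the lookup Beall describes: the sketch below assumes a locally saved snapshot of the list with one publisher name per line. The file name and the exact-match rule are illustrative assumptions, not part of Beall's actual workflow.]

```python
# Minimal sketch of a blacklist lookup. "beall_list.txt" is a hypothetical
# local snapshot of the list, one publisher name per line; real vetting
# would need fuzzier matching, since predatory publishers vary their names.

def load_blacklist(path="beall_list.txt"):
    """Load publisher names, normalized to lowercase for matching."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def is_listed(publisher, blacklist):
    """Exact, case-insensitive membership test against the snapshot."""
    return publisher.strip().lower() in blacklist

if __name__ == "__main__":
    blacklist = load_blacklist()
    for name in ("Example Predatory Press", "PLOS"):
        verdict = "on the list" if is_listed(name, blacklist) else "not on the list"
        print(f"{name}: {verdict}")
```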

Esposito: Have you codified the criteria for evaluation of a journal before putting it onto your list? If you have, are the criteria publicly available?

Beall:  Yes, the criteria, currently in the 3rd edition, are available here.

Esposito: The current version of that document was posted a year ago, yet you are often criticized for not being transparent about your practices. Have your practices changed over the years? Have you been listening to your critics and modifying your practices where you saw a reason to?

Beall: My work has benefited from the help, assistance, and guidance of many valuable mentors over the years. I’ve gotten tremendous support, much of it given quietly, and I am very grateful for it. I receive emails almost daily thanking me for my work.

The criteria document, now in its third edition, reflects changes in scholarly open access publishing and the evaluation and criticism of it.

In most cases, the evaluation of predatory publishers and journals is easy and obvious, and there is no disagreement. For example, if an open access journal promises a one-week peer review and falsely claims to have an impact factor, few will disagree that it should be flagged.

Your repeated references to unnamed critics are fallacious. You’re begging the question of whether they or their arguments are credible. Predatory journals and publishers are hurting science and corrupting scholarly communication.

Esposito: You criticized DOAJ for including publishers you termed predatory. Subsequently DOAJ changed its guidelines for inclusion, but there was never any acknowledgment of your role in this. What is your view of DOAJ as it is currently constituted? Do you think DOAJ has been listening to you and learning, but failing to make an acknowledgment?

Beall: I don’t think DOAJ made any decisions or changed their policies based on anything I said or did. I think they tightened up their inclusion criteria as a result of the Bohannon sting and not because of me. For information on whether DOAJ has been listening, I would refer you to them. But in point of fact, I have not been speaking to DOAJ — we have no dialog.

DOAJ has been victimized by predatory publishers. The idea of creating a directory of open access journals is a good one. Predatory publishers are experts at appearing like legitimate publishers, and many have been fooled or misled by them (victimized by them, essentially), including the compilers of directories or other similar databases.

Esposito: If you could change any one thing in scholarly communications — say, by announcing a policy that everybody would adhere to — what would that one thing be? You are welcome to offer more than one idea.

Beall: Easy: we need to end the system of payments from authors. Author-financed scholarly publishing is corrupting scholarly communication.

Esposito: I want to be sure I understand you on this point. To an earlier question you replied that although you focus on identifying OA publishers of little or no merit, you believed that there are useful OA venues. But your response just now seems to suggest that all Gold OA is a bad thing. Can you clarify your position?

Beall: I stand by both statements. I know some would love to catch me in a contradiction and declare victory, but some things are ambiguous, and at universities we specialize in dealing with ambiguities and uncertainties.

You brought up the concept of self-contradiction, so I am reminded that in late 2013 you authored a mean and hurtful blog post in The Scholarly Kitchen entitled “Parting Company with Jeffrey Beall.” Why are you communicating with me now after so firmly declaring an intention to end contact with me?

Esposito: Gold OA now captures about 3 percent of total revenues for journals. It is growing. Do you see it reaching a plateau at some point or even declining, or will the growth continue?

Beall: I’m sorry — I am not really qualified to answer this question. I would refer you to someone at STM or Outsell. I am focused on helping researchers avoid being victimized by bogus and corrupt open access publishers and journals and not on making industry forecasts.

What’s the source for that statistic, anyway? Does it include all the revenue earned by the thousands of journals on my lists, including all those based in South Asia and West Africa? I suspect not. Most research on OA journals excludes the journals on my lists and instead exclusively uses DOAJ as a source of titles to study, so most studies on OA don’t tell the whole story.

Esposito: As Gold OA does not involve the curatorial activity of a library, what changes has the advent of OA brought about in a library’s operations?

Beall: Actually, this is a key question. I think I’ve read your comments about scholarly open access publishing disintermediating academic libraries, and I agree. No longer stewards of physical collections, academic librarians have to find new ways to add value to information in the college and university context.

One of the ways that we’re doing this is by helping faculty, students, and post-docs navigate the entire research process, from initial literature review to final publication of the research results in a journal or monograph.

As you know, there are many corrupt and low-quality businesses appearing, firms offering services to researchers at different places along the research cycle, with predatory publishers among the most salient of these. The particular niche I’ve carved out is to help researchers avoid being victimized by such publishers, and many librarians have assisted me in this, and I am grateful for their help. Other academic libraries provide the same service using different methods.

As the role of consumer switches from libraries to researchers, academic librarians have the opportunity to share valuable skills and information with university researchers.

Esposito: What policies can be implemented on an institutional level to identify and marginalize, and perhaps to eliminate, predatory publishers?

Beall: Sir, I am not a specialist in higher education policy, so I cannot provide a complete answer to this question. All I know is that there are predatory publishers and journals that are victimizing researchers, and I am doing all I can to get the word out and help researchers avoid being hurt by them.

I do know that there are academic departments, colleges, and universities — and even a few governments — that find my lists valuable and use them for evaluation purposes, i.e., as a component of their policies. Many individuals use them as well.

Esposito: What didn’t I ask you that you would like to comment on?

Beall: Here are two things I think are important that I don't think we've discussed:

1. Predatory journals and the threat to the integrity of science.

South African researcher Nicoli Nattrass writes about the concept she calls the "imprimatur of science," namely, the process through which a scientific journal grants science's seal of approval to the articles it publishes. This means that scholarly publishers are expected to enforce demarcation and allow only vetted science to be published. An easy example is astrology: no legitimate journal would publish articles claiming a scientific basis for astrology, for it is a pseudo-science.

However, the line isn’t always so clear, but it’s still the role of journals to enforce the line and not allow pseudo-science to be published bearing the “imprimatur of science.”

But, as you surely know, predatory and low-quality journals are granting the imprimatur of science to basically any idea for which the author is willing to write an article and pay the author fees. This is polluting the scientific record with junk science, and demarcation has essentially failed. I believe this will worsen over time, and the line between what constitutes valid science and what does not will become increasingly blurred. Moreover, journalists will report on bogus science, covering it as authentic science for their readers and viewers (cf. the recent Johannes Bohannon chocolate study), and scholarly indexes, such as Google Scholar, will include the junk science among the works they index, ruining the cumulative nature of research.

2. I think that the scholarly publishing industry has failed science and scientists by allowing the predatory publishers to proliferate so much, but the open access movement also shares the blame for this.

There are organizations that represent the interests of publishers, and there are organizations that represent the interests of journal editors, but there are none that represent the interests of scholarly authors, those who now increasingly are the consumers of scholarly publishing services (and this relates to the disintermediation of academic libraries, formerly the chief consumers of scholarly content, content that is now largely given away for free). There is no “consumers union” for scholarly authors, yet they are, collectively, chiefly the ones paying for scholarly publishing-related services.

I’ve tried to help with this by advising authors on which journals and publishers they should avoid, but more work in this area is needed. An organization may be needed.

I have been observing recently that the number of people who make their living through scholarly open access publishing is increasing, perhaps reaching a tipping point, so that more individuals in the scholarly publishing industry earn their living through payments from authors than payments from academic libraries. This means that more individuals are more ardently demanding OA because their livelihoods depend on it. Their salaries inform their ideology, and they’re vocal and powerful. Thus, if subscriptions to scholarly journals collapse, we will see increased scholarly publishing chaos, amid calls for even more payments from authors.

In the scholarly open access segment of the scholarly publishing industry, we are seeing that the most prosperous publishers are the larger ones, those able to offshore their production work. Hindawi (in Egypt) and MDPI (with most of its work done in China) are two examples. I think the industry will continue to select for publishers like these, meaning many production-related jobs in North America and Europe will move to South Asia and East Asia. So the future of the scholarly publishing industry looks very much like the textile industry, with most production moved to low-wage countries.

Joseph Esposito

Joe Esposito is a management consultant for the publishing and digital services industries. Joe focuses on organizational strategy and new business development. He is active in both the for-profit and not-for-profit areas.

Discussion

74 Thoughts on "An Interview with Jeffrey Beall"

I’m a bit disappointed that he didn’t address that blog post where he indeed did take a swipe at one of our writers. The post was fairly unclear, a declaration that people with advanced degrees should not be allowed to participate in the scholarly publishing industry for unexplained reasons. It did raise an interesting question though, about the role of having such a degree when you work in our industry, which we’ll be addressing in an upcoming Ask The Chefs post.

I have to say that I, too, remain baffled by the reasons behind this supposition that scientists should not participate in the business of scientific publishing (who else should do it then!?), and Prof Beall’s apparent opinion that researchers who make a sideways move away from the bench are somehow failures as scientists. I look forward to the Ask The Chefs post addressing this.

Is it really necessary to respond to those who use their status rather than their deeds to belittle others? Perhaps it would be more constructive to ask whether a librarian should be lecturing others on their career trajectory?

Phil, it sounds like you’re suggesting that one’s status as a librarian undermines one’s authority in providing career advice. Care to clarify?

You think librarians really ought to provide career advice to scientists (or anyone else for that matter)? Perhaps they can provide good career advice on whether to pursue an MLS and become a librarian. Beyond that, I'm not so sure.

What I don’t understand is why you’re singling out librarians here. Should biologists provide career advice to anyone? Or should consultants? It seems to me that just about any professional might be able to provide good career advice to just about anyone else, depending on who the two people are and what they know.

Rick, the comments Beall made about scientists leaving academia to go into publishing were the impetus behind the interview and the thread about him dispensing career-track advice. I'm simply pointing out the audacity of his claims.

The discussions of black-listed OA journals typically center on the predatory publishing side of the issue, rather than the feckless author side. While I imagine some authors do get snookered, I have to think a good many know full well what they are doing: paying a fee to get their work published without the tedium of peer review. In these instances, it might be more symbiotic than predatory.

I've had Google Scholar send me alerts for articles in my domain that I recognized as having been negatively reviewed and subsequently rejected by the journal. Why do authors then resort to publishing in black-listed OA journals? Did they feel stonewalled by the mainstream? Are they just feckless? Do they assume uncritical readers, for whom saying they published in a scientific-sounding outlet is sufficient? Is Google Scholar so indiscriminate that if it looks like a paper, it gets indexed regardless of source? Seems to me the author side of the sketchy OA market would be interesting to survey, should research librarians have opportunities to suggest thesis topics (hint hint).

While I imagine some authors do get snookered, I have to think a good many know full well what they are doing: paying a fee to get their work published without the tedium of peer review.

For what it’s worth, this is actually an issue that we’ve discussed here in the Kitchen.

I find many of Beall's responses rather odd. While referring to himself as an academic (an assistant professor on tenure track), he explicitly avoids taking an academic approach to answering Joe's questions, positioning himself instead as an advocate for victims. At the same time, he is disdainful of Joe for not being an academic himself: "Because you're not an academic, you may not realize or understand…"

As a reader, I get the impression that Beall revels in his self-aggrandizing status as an academic but doesn't believe he needs to act as one. I'm sorry, but he can't have it both ways.

Interesting, as I didn’t interpret it the same way. “Assistant professor on tenure track” was in the past tense, stating what he was doing in 2008.

The statement “Because you’re not an academic…” was about the amount of spam academic authors get, in comparison with the interviewer. I have no idea if publishers get spammed. But the statement definitely resonated with me, as I’ve been getting at least 1 email every week since I published my first paper as corresponding author in 2013.

A few details of Beall's argument confuse me. Firstly, why does he think it is the role of legitimate OA publishers to deal with the problem of predatory journals, and apart from marching over to their premises with a big stick, what exactly does he expect legitimate OA publishers to do about it? Secondly, is he aware that most of the push for OA scientific publishing is coming from publicly funded research bodies and researchers themselves in an effort to increase scientific accessibility and transparency, and is not an initiative dreamed up by the publishing industry as a cash-cow? And finally, while he bemoans that there is no body to represent the published, I would point out a) that there is absolutely nothing stopping researchers from organising themselves in such a manner if they felt so inclined, and b) that the editorial boards, i.e., the people who make the decisions and manage the policy of most journals, are in fact composed of active researchers and academics.

One might fairly describe the commercial publishers’ takeover and dominance of STEM journal publishing, from Robert Maxwell and his Pergamon Press on, as “an initiative dreamed up by the publishing industry as a cash-cow.” That is exactly the way Maxwell himself described his business. While initially challenged by OA, it seems to me that those same publishers are now realizing that Gold OA can indeed be another “cash-cow.”

Concerns have been raised about the use of Beall’s flawed lists to define the boundaries of what should or should not appear on academic CVs, a suggestion made by Dr. MS Cappell:
https://pubpeer.com/publications/BCD633B9ED1E8D276332197843B3F9

The key question here is: should the Beall lists, which indicate neither the precise criteria for each entry (journal/publisher) nor the precise date on which the entry was made (i.e., lists full of imperfections), be used for official purposes?

I am not against Mr. Beall's hobby, and it carries weight in that it serves as a useful "crude alert system", but it should never be used as an official blacklist, or whitelist, for any formal purpose.

While predatory journals may be a threat to scientific integrity, what solution is there to the problems in HSS publishing revealed by Alan Sokal’s hoax?

It seems to me that real and rigorous peer review (or even editorial review) by people with actual baseline expertise in the topic addressed by the article would have been more than sufficient to prevent Sokal from getting his hoax placed in an HSS journal. Unfortunately, the folks at Social Text demonstrably failed to perform that most fundamental of scholarly-publishing functions.

Which brings us back to why it’s important to have people with a scientific background working in publishing. Reviewers don’t always get it right, and the more layers of safety netting the better, imo.

I’m late to this party, but perhaps I can suggest that the Sokal incident was an example of how academic publishing is supposed to work. Sokal fooled a couple of naive editors, but no one else who read his article was taken in — it was immediately recognized as either a hoax or as a ridiculous clump of idiotic rambling — and that’s how scholarship is supposed to be. If an author manages to get something questionable published, whether through naive editors, lazy peer-review, or Gold Open Access, the readers catch the error and the outcry starts. How is Sokal’s hoax different from, say, the infamous Social Science Research article by Regnerus?

Because you’re not an academic yourself, you may not realize or understand the amount of spam that researchers receive today. They are bombarded with spam emails from predatory publishers, many of whom are easily able to defeat spam filters. For those needing to eliminate questionable or low-quality journals or publishers from consideration, a blacklist has great value as a time-saving device. You can quickly check whether a journal’s publisher is on the list, and if it is, you can immediately remove it from consideration, saving valuable time.

Here’s a simpler approach, which I have used with 100% success:

When I get an email soliciting a submission to a journal I've not heard of, I ignore it.

For that matter, the approach can be simplified yet further:

When I get an email soliciting a submission to a journal, I ignore it.

I have never understood, and will never understand, what on earth possesses any scholar — a person with research skills, we would hope — to choose a venue for work that they've spent months or years on by responding to an unsolicited email. That is not the behaviour of a competent researcher.

Exactly! I can’t begin to wrap my head around the notion of publishing a paper in a journal that I’ve never before read. If you’re a researcher in field X, you know the journals in that field, and you know what you read. Even if you can’t get your work into one of those journals, there are plenty of broad, reputable megajournals that you likely read papers from on occasion.

This keeps making me think that “predatory” journals are more about being the journal of last resort for work that can’t get published somewhere reputable than they are about deception.

I truly wish it were this easy to select journals. In my experience, it can be very difficult to know all the journals “in that field”. For one, the journal landscape changes, with new journals starting, established ones “rebooting” or changing direction/focus. Depending on the field, a list of journals publishing in one aspect of the work, say, results of qualitative research, might be quite different to those publishing, say, methods papers. If your work is even remotely trans-disciplinary, the “field” might cover multiple journal lists. My PhD research covers three separate established bodies of literature, and more if I add health service delivery or health policy. I’ve got no idea how authors keep tabs on not only the journals in their field/s, but which ones are “last resort” versus just lower quality or smaller than the biggest ones.

You don’t need to know all the good journals in your field. You just need to know enough that you have a selection that you’re confident submitting to. If you don’t happen to pick up on a new entrant for a couple of years, no damage done.

The idea that because you can’t keep fully up to date with all the good journals in your field, you’d take a punt on one that you’ve never heard of is frankly baffling to me. I simply can’t imagine why a competent researcher would ever do this.

Keeping up with the literature is a part of the job of being a researcher. If you are doing this on even a mediocre level, you have a sense of what gets published where. You know which journals are valuable for you to read. Then, when it comes time to publish your own research, you should already have a hierarchy in mind of which journals would be most appropriate for your work. You want it to reach the most relevant audience and should know from your own reading activities where that venue is.

Given how much of your career advancement is going to come from your publications, why on earth would anyone take a flier on a journal that they’ve never heard of and never read?

This discussion is surprising; there seem to be many claims about what active researchers should know, and assumptions about how that knowledge translates into journal selection. I haven't found much evidence – in either the peer-reviewed literature or from publishers themselves – showing this is how things work in practice. I recently discovered Knight & Steinbach's paper, where they identified 39 (!) considerations, across the broad categories of "likelihood of timely acceptance; potential impact of the manuscript (journal credibility, prestige, visibility); and philosophical and ethical issues" (http://www.ijds.org/Volume3/IJDSv3p059-079Knight84.pdf). Guidelines from major publishers may not be as detailed, but still suggest there is a lot of subjectivity and no clear rules. Selecting a journal is more art than science, with a bit of luck thrown in, and it gets harder if a paper is rejected by the first 4-5 preferred journals for whatever reason.

In practice, telling authors "don't submit to a journal you've never heard of before" is simplistic and unhelpful. I gave but one example of why I, as an active researcher, may not have heard of many perfectly legitimate journals that might be suitable for a particular piece of work. There are no doubt others. Suggesting authors ignore flyers is frankly bizarre coming from a publisher, since flyers are often part of publishers' marketing strategies for new journals (aka "a journal you've never heard of before").

Equally, it’s not helpful to tell authors the opposite, “only submit to journals you’ve heard of before”. Recommendation from a colleague or supervisor doesn’t mean the journal is legitimate or “good” (whatever that means), or that it is suitable for a particular paper. Haven’t we seen cases of multiple papers from departments or institutions end up in dodgy journals, precisely because the journal has been recommended by a (trusted) colleague or supervisor?

See also:
http://figshare.com/articles/Author_Insights_2015_survey/1425362
Top reasons for journal choice are “relevance to my discipline” and “reputation of the journal”. Both are difficult to ascertain for a journal you’ve never read.

Here’s perhaps an easier guideline for those who don’t read journals, or who are in the circumstances you note: If you’re going to submit to a journal, take a few hours to read what that journal publishes. At the very least, this should give you a sense of where you’re staking your reputation. Is that too much to ask?

Also, perhaps a misunderstanding of the phrase “taking a flier”:
https://en.wiktionary.org/wiki/take_a_flyer
(idiomatic) To make a choice with an uncertain outcome; to take a chance.

Oh- I totally misunderstood the last comment! Though by that definition, submitting to *any* journal could be considered “taking a flier”…

Thanks for the survey link, I’ll look at those results. We did a survey of our community health staff, looking at the research culture and capacity; it’s just been accepted:
http://www.publish.csiro.au/view/journals/dsp_journals_pip_abstract_Scholar1.cfm?nid=261&pip=PY15131

We were truly shocked at how low the levels of research skill were, both individually and across teams. Unfortunately our funding ended and we couldn’t do follow up studies on how best to develop research skills, or which skills were a priority.

We also wrote about the publication output from our mentoring program: http://www.publish.csiro.au/paper/PY14152.htm
Unfortunately we didn’t document how the authors selected journals – it would’ve been fascinating.

The survey is interesting, but I wonder how the intended behaviours compare with actual decision making during a publishing cycle? The recent Nature article was an eye-opener for me in that regard: http://www.nature.com/news/does-it-take-too-long-to-publish-research-1.19320

One of the papers they track was first submitted to a journal with IF 13, and ended up in one with IF 5. In my field, we’re lucky for the first journals to have IFs above 1, so we’re talking about quite different levels of perceived quality and impact.

It may not be the behavior of a competent researcher who is looking for reputable venues in which to publish, but it may well be the behavior of a researcher who is in need of more peer-reviewed articles for his or her CV and who cares more about getting those additional lines on the CV than about making a genuine contribution to the scholarly literature. My suspicion is that this kind of business is what tends to keep many of these scam journals going — particularly in academic cultures that emphasize quantitative evaluation of tenure candidates — and that actually fooling authors represents a relatively small part of their business.

How does a researcher/author know which are the “reputable venues”? And more importantly, how can they know before submission? Even the dodgiest “predatory” journals claim to offer peer-review on their websites. In my experience, once you cross the biggest/most prestigious journals in your field off the list, it starts getting pretty tough to determine what is “reputable”. It’s largely subjective, and possibly can’t even be determined until after an article has gone through peer-review (or not).

Use the journals that peers who you respect have used. If you’re thinking about using a given journal, but you can’t think of any good papers that your colleagues have published there, then that is a big red flag. The journal might be OK — maybe it’s very new, for example — but if in doubt, why even entertain it? Use a journal that you know is good, because you’ve read recent good papers in it.

How does a researcher/author know which are the “reputable venues”? And more importantly, how can they know before submission?

By reading the literature. If it’s a journal you don’t read, or one that publishes horrible nonsense, you should probably avoid it. If it’s a journal with a track record of publishing good work that is relevant to the audience you want to reach, then you will probably have a sense of that.

I am very "shocked" when I read "good journal", "good paper", "good search", etc.!
What in hell does "good" mean? What on earth does "high quality" mean?
You can publish a piece of sh*t in the most "reputable" journal if you are "known", but this does not make it a worthwhile paper in any way. In contrast, you can publish a "good" paper (to use your term) in any unknown journal and it will remain "good"!

Can you explain why you find this “shocking”? Reading and understanding the literature is an important part of being a researcher.

When one reads a research paper, one forms an opinion of it. Was the work done well? Are the results accurate? Does the data support the claims made by the paper? Were the proper controls used? Are the findings interesting and of value?

Such subjective and qualitative opinions are up to the individual. As one reads the literature, one gets a sense of the reputation and track record for each journal, as well as for each researcher. It is then up to each researcher to publish their own work in a venue that lives up to their own standards for quality, and to each reader to make their own judgements about that paper.

All right, David, you’re freaking me out now. We have to stop agreeing all the time!

“Such subjective and qualitative opinions are up to the individual… and to each reader to make their own judgments about that paper”.
This is what I mean. "Good" or bad are subjective adjectives. It all comes down to personal appreciation. You might find a paper of "good quality" but others may not find it so, and vice versa.
Is, for example, studying or working on tobacco "good research", given all the "harms" tobacco causes to health?
Similar questions could be posed in other instances.
Nothing is "good" forever, nothing is bad forever; it is all relative!

This is what I mean. "Good" or bad are subjective adjectives. It all comes down to personal appreciation. You might find a paper of "good quality" but others may not find it so, and vice versa.

And making those sorts of calls is an essential part of the job of being a researcher. And then others will make that call about your work. If the hiring committee for the job you're applying to or the study section for the grant you're trying to land feels differently than you do about the quality of your work, then they will act accordingly.

The other parts of your comment veer off into questions of morality and good/evil rather than assessments of research quality, which are irrelevant for this particular discussion.

I think it's problematic to assume authors are also active researchers. For context, I work with so-called "practitioner researchers" in allied health and nursing. These potential authors are highly skilled practitioners, investigating and reporting on aspects of their own clinical practice and service delivery. Research makes up a tiny proportion of their role, sometimes less than 10% in practice. Their research work is valuable; they gain unique insights into populations and service issues that laboratory studies cannot provide. They are usually familiar with the largest journals in their fields, but these are rarely suitable for their publications. So the question is how do I help them develop skills in assessing journal quality? The vague answers here amount to "it comes with experience", and I agree that it does. But authors need to make decisions without that experience, and the question remains: how do we help them do that?

But authors need to make decisions without that experience, and the question remains: how do we help them do that?

There are a lot of resources that can help.
Think Check Submit comes immediately to mind:
http://scholarlykitchen.sspnet.org/2015/10/01/think-check-submit-how-to-have-trust-in-your-publisher/
Other tools such as Medline, Pubmed, Pubmed Central, Web of Knowledge/Science, Scopus, tons of abstracting and indexing services, the DOAJ whitelist, and yes, even Beall’s blacklist all do some level of vetting of journals and can be helpful tools in making these sorts of decisions.
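To make one of those checks concrete, here is a minimal sketch of querying the DOAJ whitelist by ISSN. It assumes DOAJ's public journal-search API; the endpoint path and the "total" response field are assumptions that may differ by API version, and the ISSN shown is just an example.

```python
# Minimal sketch: check whether a journal's ISSN appears in the DOAJ whitelist.
# Assumes DOAJ's public search API at /api/search/journals/ returning JSON with
# a "total" hit count; verify the current endpoint before relying on this.
import json
import urllib.parse
import urllib.request

def in_doaj(issn):
    """Return True if the DOAJ journal search finds a record for this ISSN."""
    query = urllib.parse.quote(f"issn:{issn}")
    url = f"https://doaj.org/api/search/journals/{query}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return data.get("total", 0) > 0

if __name__ == "__main__":
    # 2167-8359 is PeerJ's ISSN, used here purely as an example query.
    print(in_doaj("2167-8359"))
```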

Thanks – I'm aware of many publisher resources, and generally direct practitioners to them. I hadn't seen the Think Check Submit work and it looks excellent! Once I leave my thesis cave, I'll check it out properly.

Just to mention a few more, this article compares several resources for evaluating journal trustworthiness, including QOAM.eu, JournalReviewer.org, SciRev.sc, Journalysis.org, JournalGuide.com, and PRE-val.com.

Really? We all acknowledge the journal-filtering is imperfect, journal-rank is at best imprecise, and so on — but that doesn’t mean we can’t tell the difference between a legitimate piece of research (albeit possibly a flawed one) and something that just isn’t research. When I started out in palaeo, it didn’t take me long to realise from all sorts of cues that the Journal of Vertebrate Paleontology was a good, solid journal. That’s not based on metrics (IIRC its impact factor was about 1.0) but based on what was published in it, how people talked about it, how much its papers got cited in new work or discussed in conference talks, and so on. There are myriad cues. Becoming sensitive to these is part of what it means to train as a researcher.

Defining what is and isn’t “research” can be very subjective too. In some health services there’s considerable debate about whether “evaluation” and “quality improvement” projects involving patient data are “research”. Some academics argue that double-blind RCTs are the only legitimate form of medical “research”, while others take a much broader perspective.

I would think that would make it even more crucial to know your own field and know the publications of that field. If you’re sending your work to a journal that doesn’t consider that sort of research valid (or at least out of scope), you’re wasting your time and the journal’s time.

Yes, even the mediocre want to get published, so predatory journals are responding to a real demand. One way genuine publishers seem to be responding is with cascading peer review, with a megajournal at the bottom of the cascade. I guess it could work to put predators out of business.

And then there’s this:

Predatory and low-quality journals are granting the imprimatur of science to basically any idea for which the author is willing to write an article and pay the author fees. This is polluting the scientific record with junk science, and demarcation has essentially failed.

But as we all know, it was Science that granted the imprimatur of science to Arsenic Life, Nature that granted the imprimatur of science to STAP, and going back further, the Lancet that granted the imprimatur of science to the fabricated and spurious MMR-autism link.

This isn’t just cheap point-scoring. There is a real issue here, which is that the seductive notion of a binary distinction between “proper science” and “junk science”, granted by reliable peer-reviewers, is mostly a comforting fiction. For those who doubt this (I did), I recommend this article by Richard Smith, previously editor of BMJ: Classical peer review: an empty gun.

It’s not a harmless delusion, either. Wakefield’s MMR-autism link was given so much credence precisely because it was published in the Lancet.

I don't know what to do with these observations. I do feel that submitting a paper to peer-review is at least a declaration of serious intent. But the nice, clean, binary distinction between good science and junk that Beall and others seek is a mirage; and the idea that we can make this distinction based on publication venue, doubly so.

Even if successful peer review means only a demonstration of serious intent from authors and editors and publishers, this is already much better than disingenuous peer review, as it happens in the worst of predatory journals.

If only genuine publishers — traditional ones as well as newer ones, like PLOS — were a bit more transparent about their peer review activities, we wouldn’t have to rely on indirect measures of integrity such as brand prestige.

Asking authors to just trust publishers’ peer reviewing oversight is leaving a back door for charlatans. As an author, I wish I could “trust, but verify.”

Agreed that, even if the value of the actual reviews is zero, there is still value in the process of going through peer review. It’s a sign of serious intent.

On transparency of reviews: some journals are now offering authors the option to publish the peer-review history alongside the articles — an option that I always take. Here are two examples (both of my own papers):
https://peerj.com/articles/36/reviews/
https://peerj.com/articles/712/reviews/

BTW, PeerJ reports that 80% of authors are electing to publish the reviews, which I found a surprisingly and encouragingly high proportion.

“[T]here are [no organizations] that represent the interests of scholarly authors. . . There is no ‘consumers union’ for scholarly authors, yet they are, collectively, chiefly the ones paying for scholarly publishing-related services.

I’ve tried to help with this by advising authors on which journals and publishers they should avoid, but more work in this area is needed. An organization may be needed.”

A couple of points re: Beall’s above comments:

1) I’m curious about the existence of data that reflects who is actually paying article processing charges (US authors, in particular). Are scholarly authors truly the predominant payer? To what degree are funders (through grant monies allocated for this purpose) and institutions, specifically libraries with OA subvention funds, footing the bill? I would like to see evidence.

2) I find Mr. Beall’s lists to be helpful for the most part. He has been transparent with the criteria he uses to vet OA journals, whether they are agreed with or not. He is absolutely right that an organization(s) is needed. But I have yet to see a group of academic librarians or scholars step forward to assist with or collectively organize to take on the onerous and, to a large degree, thankless task he has undertaken.

Sure, there could be improvements to his lists. For example, I wish there were a direct link from a list entry to a form built on the vetting criteria that succinctly addresses why a publisher is on the list (similar to OASPA’s member form). Linking directly to the questionable publisher’s website is almost like free advertising, IMHO.

Like everything else, adding this type of information takes loads of time: volunteers to complete and upload the forms, update the information when needed, and de-list a publisher that gets its act together. It also takes someone who has time to coordinate these volunteers and provide quality control, among a host of other tasks. DOAJ has a hard enough time getting volunteers, and most DOAJ users appreciate their efforts, to boot.

As a reader of his blog, I do suggest that Mr. Beall temper his posts, which are entertaining at best and vitriolic at worst. Maybe an occasional post about a new OA journal that got it right would be a welcome addition? Regardless, I will keep reading his blog in an attempt to be aware of the rising tide of questionable OA publishers. I don’t use his lists as black lists, but you can bet I follow up with more research on a publisher’s claims.

With regard to the criteria used by Mr. Beall, there's often no substantiation for why a given journal meets a particular criterion. To show what I mean (and substantiate my own claim), I'll point readers to Walt Crawford's post, which could not find a reason for 90% of the journals added to the list: http://walt.lishost.org/2016/01/trust-me-the-apparent-case-for-90-of-bealls-list-additions/

He told me that the evidence supporting the addition of Frontiers consists of “several emails from scientists”.

With regard to the “corrupting scholarly communication” claim, I bet if someone looked at all the stuff “published” by unquestionably predatory outfits, there would be very few citations pointing to it. In other words, is anyone citing the crap from a non-crap source?


I think the concern is more the communication between the research community and the general public. Most reasonable scientists can tell a crap study, but that certainly isn’t true of the lay public and in particular, what passes for a science press these days. The Bohannon chocolate paper provides a clear example of that.

I can see how predatory journals have the potential to create a crisis of trust in communicating science to the public. But we should stop insisting that scholarly journals serve well for both academics and journalists. The chocolate hoax demonstrates that most reporters don’t have the proper training to tell good from bad science. Relying on the journal reputation alone is a poor proxy. There needs to be an intermediary-level publication type, in between journals on the one hand and newspapers and magazines on the other hand, to sort the wheat from the chaff. News & Views journal sections would seem to help, but they actually blur the line further in my opinion — ideally, these non-peer-reviewed materials should go in a supplement of the main journal content, with a separate ISSN and distinct title, so that they’re not cited as scientific material. Meanwhile, science by press release is quickly filling in the gap.

In general, the press (or what’s left of it) has the notion that if an article is published in a peer reviewed journal, it has some stamp of validity. This may have been more accurate in the past, but now in an era with many “journals” claiming to do peer review while not actually doing anything, that distinction may no longer be as useful a marker of validity.

While your intermediary-level publication sounds interesting, it seems unlikely to happen, as I’m not sure there’s any sort of market for such a publication nor anyone who would pay for it.

I was thinking of venues like The Physics arXiv Blog or MedPage Today, though more geared towards journalists rather than practitioners. I’m just saying the crisis of trust could be avoided without having to put access back behind pay-walls as Beall seems to reminisce.

What is 'scientific material'? Is manipulating a plant or mice 'scientific material'?
The problem lies with so-called researchers who attach great importance to trivialities, thinking they are 'scientists' because they worked in a lab or because they manipulated a mouse.
Science is also the ability to analyze and to criticize, to point out weaknesses and bizarreries…

I should note that the 90% figure was for *additions* to the lists (changes from 2015 to 2016). A little while later, I did a check of *all* of the lists and *all* of his blog, and, I'm sorry, Ms. Holland, but Beall fails to provide *any commentary whatsoever* for 87.5% of the publishers and journals on his lists. Details here (including the truly tiny number of DOAJ-listed journals from publishers where Beall has made any case at all): http://walt.lishost.org/2016/01/trust-me-the-other-problem-with-87-of-bealls-lists/

Whatever else we might think about Jeffrey or his methods, he deserves credit for doing so much to highlight the phenomenon of predatory publishing.

Re: clarifying the ambiguity between Beall's two statements about author-pays Gold OA: the apparent contradiction seems to disappear once one considers OA journals whose business models require no direct charge to authors. After all, Gold OA implies only that access is made open by the journal (in contrast to self-archiving, or Green OA). Examples are sponsoring consortia such as the SCOAP3 initiative and cooperative publishing arrangements such as OHP.

I believe that Mr. Beall has made an extremely serious mistake with his latest inclusion, Japan-based Journal of Physical Therapy Science. In his latest blog entry, he refers to it as a “joke” and a “sham”:
https://scholarlyoa.com/2016/02/11/japanese-open-access-journal-is-a-joke/

He also claims not to "see" the website, even though it is easy to find with a Google search and has English, Japanese, and Chinese versions:
http://spts.jpn.com/jpts/index.html

The society, which caters primarily to SE Asian countries but which started to become more internationalized with its introduction onto J-STAGE, appears to have a long and respectable history, and the journal itself is in its 28th year of publication. I contacted OASPA and JPTS today to ask them for comment on Beall's characterization and his listing of it as a predatory OA journal.

If in fact Beall has made a mistake here, and if in fact JPTS appeals this listing, isn’t it time to begin taking action against Mr. Beall?

I rarely agree with you, but in this case you appear to be correct. I speak and read Japanese, and it was easy to find information on the society that, if true, would suggest that the journal is not predatory. I did so within three minutes of reading Beall's newest post.

One of Beall’s points is valid: the journal should list its editorial board. However, Beall also implies that it is suspicious that Japanese researchers do not publish there. This is almost certainly because the society also publishes a companion Japanese-language journal called 理学療法科学.

“Journal of Physical Therapy Science” was an irresponsible addition to the list.

Nobody knows the real motivations behind Jeffrey Beall's questionable list, but there are some important signs.
Is he paid by traditional publishers? Is his profession as a librarian threatened by open access models, so that he is trying to keep his own position alive?
What would Jeffrey Beall like authors to do?
Does he want them all to submit and publish in only one or two journals, or with only two or three traditional publishers, which are starting to abuse their positions?
There is a huge conflict of interest with Beall's list.

Most readers of Beall's blog are located in developing countries, aspire to artificial prestige, and do not know the hidden motivations behind Jeffrey Beall's list.

The best remedy to Beall's blog is just to ignore it.
It is fortunate that the publishing industry has diversified to such an extent that authors have multiple choices and need not be at the mercy of the true predatory publishers, the big ones, which Jeffrey Beall is unable to see.
Open Access is a good alternative, but its prices need to come down.

Joe, one cannot ignore Beall, or his lists.

Just last weekend he was celebrated at and by UC Davis:
http://icis.ucdavis.edu/

The traffic to his site, which I believe is sustained by a little agreement with Retraction Watch (via Retraction Watch's Weekend Reads) to keep traffic high to both sites, indicates that ignoring cannot resolve this problem. Instead, the policy I have now advocated for COPE member journals is, rather, as scientists, to detect fundamental flaws in the logic, the facts, or the scientific quality.

So, your proposal of "ignoring" is no longer a viable option. I in fact contacted the University of Colorado and spoke to the legal counsel there and his superiors, indicating that I felt that several of his entries were libelous and seriously damaging to the images and reputations of many scientists and possibly even of valid journals and publishers. They literally brushed me off. That was about 2-3 years ago.

A new concern has arisen, and in fact I copied The Scholarly Kitchen on my concerns by email 2 days ago, in an email addressed to Jeffrey Beall and to COPE's Virginia Barbour, in which I noted that 12, possibly 13, publishers are common to both lists (i.e., to Beall's lists of "predatory" OA publishers and to COPE's paying members), which I felt was highly incompatible. I challenged both of them to please make available the precise criteria by which the same publishers were included on both lists. I will report back if I hear from one or both of them.

This is the list, which I will also post on Leonid Schneider’s blog.

List of publishers in common
Ashdin Publishing
Asian Network for Scientific Information (ANSINET)
Avicena (COPE) vs AVICENA publisher (Beall)
Frontiers
Genexcellence Publication
Global Researchers Journals
Hikari Ltd
InTech
Integrated Publishing Association
Jaypee Brothers Medical Publishers (COPE) vs Jaypee Journals (Beall)
Kowsar Corporation Company (COPE) vs Kowsar Publishing (Beall)
Smile Nation – Lets Smile Together

Unclear
Business Perspectives Publishing Company (COPE) vs Business Perspectives (Beall)

Gosh, I don’t agree with this. If you don’t like what Beall is doing, come up with a better iteration. These legal challenges are such a waste of everybody’s time.

I have found Beall's list useful in the past, but it seems that he is now also swinging wildly at respectable journals such as the Journal of Medical Internet Research and the BMC and Frontiers stables. I understand that Beall needs to include journals that contain papers that are actually read and cited by serious researchers, because otherwise his list would fall into oblivion (as noted here: http://walt.lishost.org/2016/01/trust-me-the-other-problem-with-87-of-bealls-lists/). But including respectable publishers (such as Frontiers) is unfortunate because it damages the credibility of both the publisher and the blacklist.

In the interview I read: "For example, the Bohannon sting in Science two years ago found that 45% of a sample of publishers included in DOAJ accepted a bogus paper submitted for publication. I know that DOAJ has tried to make improvements, but in fact, in my opinion, it's never really recovered from this telling, major failure."
And further along it says: "I don't think DOAJ made any decisions or changed their policies based on anything I said or did. I think they tightened up their inclusion criteria as a result of the Bohannon sting."
I can tell you that DOAJ did not tighten its criteria as a result of the Bohannon sting. We had already been working for a long time on defining a set of criteria to select for quality open access journals. The result of the sting was merely that we could immediately remove a number of bad apples from our directory. As to the sting itself: from a scientific point of view it was a badly conceived investigation. Only open access journals were targeted, and then a selection, not a (random) sample, of DOAJ-listed journals. There is no way that the figure of 45% of a sample of DOAJ publishers can be related to 45% of DOAJ's content, but that is what is being implied. That is what I would call bad science! A further result of the sting was that open access journals specifically were suggested to be correlated with bad-quality science. Had the experiment included a sample of toll-access journals, this suggestion could obviously not have worked.
I do recognize that it is difficult to maintain a whitelist, but I am confident that we will eventually succeed in weeding out virtually all of the questionable open access publishers in our index.
And by questionable I mean questionable publishing practices. As for the quality of the science in open access journals, I would say we have to compare this with toll-access journals.
After the publication of John Ioannidis's article "Why Most Published Research Findings Are False", I tend to think that the issue is not one of the science being published in open access. That is why we refrain from judging journals on the basis of the quality of the science, and instead evaluate the quality of the publishing process.
Tom Olijhoek, Editor-in-Chief, DOAJ

Bohannon’s sting has been hashed and re-hashed and I don’t want to go down that road again here (see http://scholarlykitchen.sspnet.org/2014/02/28/the-scam-the-sting-and-the-reaction-labbe-bohannon-sokal/ and http://scholarlykitchen.sspnet.org/2013/11/12/post-open-access-sting-an-interview-with-john-bohannon/ ). But to be fair, if you’re not drawing any conclusions about subscription journals or comparing the performance of the two models, you do not need to include them as a control. That is a different experiment than what was performed here, which was targeted to ask one particular question. When one does an experiment in Drosophila and draws conclusions about fruit flies, one does not need to include a control that repeats the same experiment in zebrafish.

Also, as far as the Ioannidis article, please see
http://arxiv.org/abs/1301.3718

1) "Although I receive many requests to identify good or high-quality journals, I choose to leave this identification to others, especially those in the particular fields the journals represent."
I have done this, but I am not an expert. See what you think. The idea was to identify reputable journals in several fields that are both OA and either free or with author payments up to $500. This is not just a distillation of DOAJ – for each one I visited the site, checked the veracity of the submission and publishing process, and checked Scopus and WoS for entries there. If you follow Beall and believe all author fees are a bad thing, ignore the few on the list that require payment. But I doubt any of them are after a profit – most are issued by small scholarly societies and enthusiastic academics that we should be supporting. https://simonbatterbury.wordpress.com/2015/10/25/list-of-open-access-journals/

2) …and on what to do: "Easy: we need to end the system of payments from authors. Author-financed scholarly publishing is corrupting scholarly communication."
For several journals on my list, all the fees do is cover basic costs; there is no profit at all. I object to the large fees of the big 5 publishers, which usually charge $3000 to make an article OA in a firewalled journal. Yes, JB, they have a huge amount to answer for. Fortunately there is little uptake of this option in the disciplines I work in.
I think we should also be supporting efforts by university libraries, scholarly societies, and academics to change the course of academic publishing by doing things themselves.

3) “As Gold OA does not involve the curatorial activity of a library, what changes has the advent of OA brought about in a library’s operations?”
There is no mention in JB's reply to this question of the growing role of academic librarians in the US and elsewhere in curating open access (and usually free) scholarly journals. This is a vital aspect of the profession today. Offering students and faculty material in an OA journal of repute, hosted by your own library, can save scarce journal subscription costs elsewhere and disseminate knowledge. Most public universities in the US now have a journal hosting service.

4) Lastly, for those of you who have not seen it, see Beall's article published in Triple C in 2013, here: http://triplec.at/index.php/tripleC/article/view/525/514. The arguments he makes on this blog are replicated in that article at greater length (I do not know whether his tongue was in cheek, publishing in that OA journal). I can safely say, as the editor of an OA journal since 2003, that his claims that OA advocates are Euro-'collectivists' opposed to publishing corporations are pretty laughable. Many such advocates are actually found in the US public university system, and this includes many librarians I have met over the years – they in particular are crippled by the prices charged by the big five.
