[N.B.: As of 2020, this product is now called Predatory Reports.]

Let’s start with a disclosure and a disclaimer:

Disclosure: about a year and a half ago, when Cabell’s was in the early stages of planning for the creation of this product, I did a few hours of paid consulting work for them and later contributed to a Cabell’s-organized conference panel discussion on the topic of predatory publishing. I’ve had no further involvement in the project, and I have no ongoing financial relationship with Cabell’s and no financial interest in the company.

Disclaimer: I’ve been writing about the problem of deceptive/predatory publishing, and arguing with those who minimize or dismiss its significance, for several years now, including here in the Kitchen (for example, here and here). I’m not going to relitigate those arguments in this posting; gluttons for punishment are welcome to read my earlier posts and the lovely commenting threads that ensued. The present post is written from the assumption that deceptive/predatory publishing is a genuine problem, and that the effort to pay attention to it is a worthwhile one.

And now, on to the review. But first, a little background for those new to the topic.

What Is Predatory Publishing?

There currently exists a thriving black-market economy of publishing scams — typically referred to as “predatory journals” — that are designed to look like genuine scholarly publishing programs. In most cases, these scams take the form of offering aspiring authors publication in “journals” that are created not to publish rigorously vetted science and scholarship, but rather to publish whatever the author submits in return for the payment of an article processing charge (APC). The author is either duped into believing that his work has been accepted by a legitimate scholarly journal, or (more likely) willingly takes advantage of guaranteed publication in a scam publication that hides behind a pretense of scholarly rigor, in the hope that his complicity in the fraud won’t be detected by his colleagues. Just as the customer of a diploma mill willingly accepts a fake PhD in return for payment (hoping that no one will notice that the “PhD” he claims is from a sham institution), so the customer of a predatory journal places his article in a fake journal with a real-sounding name, thus beefing up his CV while hoping that no one will investigate the publishing venue(s) of his paper(s).

The purpose of a predatory-journal blacklist is to identify and call attention to such scam operations, in order to make it harder for them to fool authors and harder for unscrupulous authors to deceive their colleagues by publishing in them.

How Does a Blacklist Work?

The concept is simple: the list manager seeks out information about journals that are engaging in deceptive and fraudulent practices, and then publicly exposes them. Among the most egregious of such practices are:

  • Falsely claiming to provide peer review and meaningful editorial oversight of submissions
  • Lying about affiliations with prestigious scholarly/scientific organizations
  • Claiming affiliation with a non-existent organization
  • Naming reputable scholars to editorial boards without their permission (and refusing to remove them)
  • Falsely claiming to have a high Journal Impact Factor
  • Hiding information about APCs until after the author has completed submission
  • Falsely claiming to be included in prestigious indexes

Unfortunately, while calling out fraud is relatively simple in concept, it is complicated in practice by a number of factors — among them, the fact that what appears to be fraud may, in some cases, actually only be incompetence or inexperience. This means that anyone who undertakes to create and maintain a blacklist has to be very careful to discriminate between marketplace actions undertaken in genuine bad faith and those that simply reflect incapacity or ignorance. (Hold onto that thought, because we’ll be coming back to it.)

Are There Any Blacklists?

Up until this year, there was really only one functioning blacklist: the (in)famous Beall’s List, which was the product of Jeffrey Beall, a librarian at the University of Colorado, Denver. (It was shut down, apparently under fire, in early 2017, but its last iteration has been archived.) Beall’s List was brave and valuable, but significantly flawed in ways that have been amply documented elsewhere. Even with its flaws, however, its shutdown left a significant hole in the scholarly communications ecosystem.

Into that gap has stepped Cabell’s International, the well-regarded publisher of a longstanding journal directory. Whereas Beall (a single individual with a full-time day job) had struggled to manage his blacklist consistently and transparently, Cabell’s is in a position to assign dedicated staff to the maintenance of its list, and therefore to make it more robust, consistent, and careful. Cabell’s has also made its blacklist a commercial product, which means that it should generate its own revenue stream. Since Beall is also an avowed enemy of the open access (OA) movement, and since the journals that engage in this kind of predation are almost invariably provided on an OA basis, there was good reason to question his objectivity. Cabell’s seems to have no such agenda.

The Cabell’s Blacklist: A Big Step in the Right Direction…

The Cabell’s Blacklist product was introduced at the 2017 SSP meeting just a couple of months ago, and access is now available for purchase. Pricing is set institutionally, and thus requires a custom quote. I requested and was given temporary access to the list for the purposes of this review.

My overall assessment of the Cabell’s Blacklist is that it is a welcome development, and that it still needs quite a bit of work. To begin with the positives:

  • The criteria for inclusion in the blacklist are clearly set out and publicly available.
  • For each entry, date of last review is indicated, and an email hyperlink is provided that allows readers to contribute information about a journal.
  • Each entry includes a link to Cabell’s appeal policy. Appeals are allowed once per journal per year, and instructions are included in the policy text.
  • Wisely, ratings are given at the journal level, not the publisher level; thus, for example, the Open Science journal Advances in Biomedical Sciences is listed as having 5 violations of Cabell’s criteria, while the same publisher’s International Journal of Public Health Research has 6.
  • For each entry, specifics of the violations are conveniently listed under criterion categories: thus, Acta Rheumatologica is dinged for violations in the categories of “Integrity” (“The publisher hides or obscures relationships with for-profit partner companies”), “Website” (“Does not identify a physical address for the publisher or gives a fake address”) and “Business Practices” (“Emails from journals received by researchers who are clearly not in the field the journal covers”).

…That Still Needs Some Work

So what are the problems? The most serious is that, as currently configured, Cabell’s Blacklist perpetuates the common problem of conflating low-quality journal publishing with deceptive or predatory publishing. In this case, the conflation happens because many of the blacklisting criteria Cabell’s applies are really quality criteria (“poor grammar and/or spelling,” “does not have a clearly stated peer review policy,” “no policy for digital preservation,” etc.) that can easily end up gathering fundamentally honest but less-competently-run journals into the same net as those journals that are actively trying to perpetrate a scam. Predatory and incompetent journals do often evince some of the same traits, but these traits don’t always indicate predatory intent. (However, the Cabell’s staff assures me that there is a behind-the-scenes scoring rubric that assigns different weights to different violations, and is designed to prevent merely new or low-quality journals from being tagged as predators and included in the blacklist.)
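Cabell’s has not published that rubric, but the general idea of weighting violations so that deceptive practices count far more heavily than mere quality lapses can be sketched roughly as follows. All of the category names, weights, and the threshold below are hypothetical illustrations, not Cabell’s actual values:

```python
# Hypothetical sketch of a weighted violation-scoring rubric.
# None of these weights or thresholds are Cabell's actual values.
SEVERITY_WEIGHTS = {
    "fake_peer_review_claim": 10,   # deceptive practice: strong indicator
    "false_impact_factor": 10,      # deceptive practice: strong indicator
    "unauthorized_editorial_board": 8,
    "hidden_apc": 6,
    "no_preservation_policy": 1,    # quality issue: weak indicator
    "poor_grammar": 1,              # quality issue: weak indicator
}

BLACKLIST_THRESHOLD = 12  # hypothetical cutoff for inclusion

def predation_score(violations):
    """Sum the severity weights for a journal's observed violations."""
    return sum(SEVERITY_WEIGHTS.get(v, 0) for v in violations)

def should_blacklist(violations):
    # A journal with only low-weight "quality" violations stays below
    # the threshold; deceptive practices push it over.
    return predation_score(violations) >= BLACKLIST_THRESHOLD
```

Under a scheme like this, a new journal with sloppy copyediting and no preservation policy scores only 2 and stays off the list, while one caught faking peer review and lying about its impact factor scores 20 and is included.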

Among the less serious concerns:

  • Some of the criteria are a bit unclear. For example, under the category of “Integrity,” one of the criteria is “Insufficient resources to preventing and eliminating author misconduct that results in repeated cases of plagiarism, self-plagiarism, image manipulation, etc.” It is not obvious whether this refers to a lack of screening tools or to a demonstrated pattern of repeated misconduct in the journal’s pages.
  • A link is provided that, when clicked, downloads to the user’s computer a spreadsheet listing “journals under review for the blacklist.” This is good, but the list provides no information about why the listed journals are under suspicion.
  • Other alleged violations seem difficult to prove, and in fact appear largely to be inferential – for example, “Operate in a Western country chiefly for the purpose of functioning as a vanity press for scholars in a developing country.”
  • In at least one case, a more serious violation was missed while less-serious ones were listed. In the course of my research for this review, I contacted a member of the editorial board of the OMICS title Journal of Bioequivalence & Bioavailability. This board member reported having resigned from the board months ago, but said that OMICS “never took me off the list.” Cabell’s dings this journal for other offenses (including claiming affiliation with the apparently nonexistent Association of Contract Research Organizations), but didn’t catch this very serious one. Of course, no blacklist will be perfect — but this particular criterion seems much more important, and much more indicative of predatory intent, than, say, “No policies for digital preservation.”

It’s worth noting that on the scale of predatory or deceptive practices, many of these violations of scholarly-communication norms are, while troubling and perhaps annoying, not especially egregious. This is precisely why a blacklist needs to be transparent about the reasons for a journal or publisher’s inclusion — so that the reader can decide for him- or herself how worrisome the journal’s behavior really is. This transparency is one of the most important positive aspects of the Cabell’s product.

On a functional level, Cabell’s Blacklist has a few glitches as well, which should be fixable. A couple are actual coding problems: first, as of this writing it’s not possible to log in using the Safari browser. (Cabell’s reports that a fix is in process.) Second: at several points in my exploration of this product, the Advanced Search function simply stopped working — when I entered a search criterion and hit <search>, the search window disappeared and I was taken back to the Blacklist home page. When I attempted a simple search from the home page, or returned to Advanced Search to try again, I got the same results. Only by logging out of the database entirely and logging back in was I able to recommence searching. Again, this is clearly some kind of coding problem that should be relatively easy to fix.

Three Suggestions for Improvement

First: it might be wise for Cabell’s to consider “tiering” its violation categories: some of these criteria raise suspicion but are more suggestive than indicative of predatory intent, while others are more smoking-gun. Distinguishing between suggestive and dispositive criteria would make Cabell’s Blacklist more useful.

Second: the advanced search function needs to be much more advanced. Right now it allows the user to search by publisher, journal title, country, or ISSN, but I would have liked to search by criteria — how many journals make false claims about their impact factors, for example? (In the Advanced Search window, one also has the option of toggling “Open Access” on or off. I’m not entirely sure what this does; I tried several searches twice, once in each setting, and never got different results based on the position of the “Open Access” toggle.)

Third: Cabell’s should put a very high priority on repairing the coding glitches that currently make it unnecessarily difficult to use the blacklist. It doesn’t matter how good the content is if you can’t access it consistently.

Overall, Cabell’s Blacklist is a welcome new product, but one that will need significant improvement before it is able to realize its full potential as a tool for helping maintain the integrity of the scholarly-communication ecosystem.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.


60 Thoughts on "Cabell’s New Predatory Journal Blacklist: A Review"

Rick, thanks for your excellent and thorough review. While it is clear that Cabell’s is putting time, people, and resources into this resource, I’m wondering who you think will subscribe to it? Is this a tool that is aimed at potential authors or administrators? Is it designed for the US/Canadian/European market, or for developing economies where deceptive publishing is a bigger problem?

Thanks, Phil. You ask a very good question. I think the most likely target market for this product is academic libraries, which would purchase it like any other database and then market it internally to academic departments, encouraging faculty to use it as a resource when evaluating (for example) applicants for tenure or promotion, applicants for hire into faculty positions, candidates for interim review, etc.

I anticipate a challenge Cabell’s will face, however: in my experience, librarians are very often (though certainly not always) hesitant to acknowledge that predatory publishing is a serious problem. Many of us see the discussion of predatory publishing as part of a conspiracy to cast aspersions on open access. To the degree that this really is a significantly broad mindset in the profession, Cabell’s will face a marketing challenge.

Of course, there are other possible marketing targets: grant-making agencies, offices of research, interested individuals, etc.

I asked Cabell for an individual subscription and alas, never got a reply. Your note at the end that individual subscriptions (may?) be possible emboldens me to try again.

Thank you for an excellent review, Rick. Your lucid discussions in this area always make my day. Your note about possible librarian bias rings true, but hopefully Cabell can argue that this tool optimizes faculty publication practices in an ethically problematic environment. Time will tell.

I respectfully disagree that “conflating low-quality journal publishing with deceptive or predatory publishing” is a problem. From the author’s or reader’s perspectives ’tis an entirely moot point whether a journal is rubbish due to incompetence or due to dishonesty, either way we want nothing to do with it…
As to who would buy this product, perhaps the most likely customers should be institutions or agencies in countries that suffer from a high incidence of academics publishing in such journals as vehicles for promotion?

Thanks, Mike. I think you’re probably right about one of the possible customer bases for this product.

As for your first point: I agree with you that a crap article is a crap article, and its lack of quality is what ultimately matters — but the existence of fraudulent journals is a different (though not entirely unrelated) problem, and I think it also matters. There are many of us within the scholarly-communication ecosystem for whom it makes a very big difference whether a journal is poor because its administrators are inexperienced, or because its administrators are actively perpetrating fraud.

I would be interested in learning what institutions/libraries have purchased the list.

Great review and clearly a good resource in the making. One phenomenon of scam publishing which is difficult to track until it happens, tho’, is ‘journal identity theft’, which, in a recent case of which I’m aware, even involved the setting up of a fraudulent website in the name of a reputable learned society in order to take fees for ensuring acceptance in their journal. There was no deceptive publishing here – just a conduit for the theft of money from authors desperate to get accepted in the legitimate publication…

Another scam that I haven’t seen widely mentioned:
There was one predatory publisher that put up a website showing the covers and the descriptions of a set of our journals, announcing that they had set up an “agreement” with us, and that authors could now submit their papers to our journals through them. Presumably they would accept the papers and take payment for them and the author would then be left high and dry. A cease-and-desist from our lawyers made the site go away, but I suspect they just moved on to a different set of journals from a different publisher.

Twice we have had authors contact us about when their paper would publish in the journal. We had no record of the paper, and in both cases the author had an acceptance letter that did not come from us but carried the name of one of our journals and its editor-in-chief. I have no idea what these people paid for this service, but clearly this is a widespread problem.

This is a great example of the type of scam Cabell’s Blacklist is trying to prevent. It’s also a great example of why input from the academic community is so important to this project. If anyone has information about this or other suspicious activity, please email us at blacklist@cabells.com.

Thanks Rick, interesting review! You make some good points, especially around the subjectivity of some of the criteria – I wonder how long the ‘controversial articles’ and ‘geographical/gender bias in editorial board’ criteria will stay on their official list of indicators?

There are two things that concern me about this project – firstly I’m a little sceptical that they have enough qualified staff to monitor the tens of thousands of potentially ‘predatory’ journals out there. They currently have about 4,000 journals in the blacklist, which is probably only the tip of the iceberg. It’s only fair that they assess ALL 50,000+ academic journals worldwide, not just those that have been flagged so far by Beall. Good luck with that guys…

The second problem is… who is going to buy this product (to pay for the vast army of journal assessors)? Universities in the developed world should have no need for this type of database – their staff and students surely have access already to the resources and training needed to know how to identify a credible journal to publish in (and it’s in this Northern context I agree with Mike Taylor’s comments earlier this year “How can you expect to have a future as a researcher if your critical thinking skills are that lame?”) – that’s why only a tiny number of authors in US/Europe end up publishing in these journals. The vast majority of articles that are poached by ‘predatory’ journals are written by developing country authors, who want and need (from my experience) access to quality information and training on research dissemination, but their libraries are unlikely to be able to afford products like this.

Good thoughts, Andy, thanks.

In response to your first observation: I think you’re probably right that it’s not possible for any organization to comprehensively survey the entire journal landscape and catch all the predators. But I don’t think this is a binary thing, where it either has to be done perfectly and with absolute comprehensiveness, or it shouldn’t be done at all. I think there is definitely a place for a company like Cabell’s to do what it can — and perhaps for volunteers to help.

To your second concern, I would note that it focuses on the problem of authors being tricked into publishing unwittingly in predatory journals. As I (briefly) mentioned in the review, I think this is the smaller of the two major problems posed by predatory journals. Much bigger is the problem of authors knowingly and deliberately publishing in predatory journals, in order to fraudulently beef up their CVs. This is a particular danger in academic cultures that provide strong and highly quantitative incentives to publish large numbers of peer-reviewed articles in high-impact journals.

Absolutely, agreed. Although an imperfect comparison: There are review sources like PW and LJ that attempt to review major books. Some titles might come from author subsidy presses. Some of these might be considered predatory, others not. It is tough to argue that book reviews should not be published because there are too many books. On the contrary, massive quantities may call for better and more useful review sources.

Hi Rick, I think that’s a fair argument on the first point – I’ve found that in discussions with developing country researchers, the same big players (I won’t name names) crop up again and again, so any initiative that spreads awareness of those guys could neutralise a large proportion of the market. However, I think there is an increasingly ‘long tail’ as more and more people realise how easy it is to make money from young researchers.

With regards to the second point on the intention of authors, I am inclined to disagree – I think more authors submit unknowingly to predatory/deceptive journals than those who are doing so knowingly and deliberately (and perhaps it isn’t binary either – there are probably many who are just being lazy, or don’t realise the importance of the publishing outlet, and the role of the peer review process). However, this is based mostly on anecdotal evidence – it would be great if somebody could do some research on this… if they haven’t done so already.

I agree, Andy. In my limited experience, it’s not malice or avarice (how could it be? These young physicians are being asked to pony up money, sometimes several thousands of dollars) that leads residents to pick publications poorly. It’s ignorance and lack of guidance by their faculty. In the faculty’s defense, medical and science publishing is so huge and such a free-for-all that no one can keep a handle on it.

One thought on potential customers for the blacklist would be funding groups like the UGC, who have taken time and effort to create their own whitelist of journals that they “count” toward career advancement/funding.
This sort of resource would be helpful in monitoring their list. It would also be useful for anyone running tenure/funding/hiring committees to have as a resource to monitor the applications of potential hires/tenure candidates.

But what’s more intriguing to me as a publisher is the potential tools that may come out of this. I’ve spoken to Cabell’s about a tool I could use once a paper is accepted to scan through it and make sure the authors haven’t cited any papers in predatory journals. If any are found, my editors could go back to the authors and either have them remove it or find a legitimate paper to make their point. This is a quality control check that many journals would likely pay for.
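As a rough illustration of how such a reference-screening tool could work (everything below — the blacklist entries, the function, and the matching logic — is a hypothetical sketch, not an actual Cabell’s API):

```python
# Hypothetical sketch: flag citations whose journal appears on a blacklist.
# The blacklist entries and citation records here are illustrative only.
blacklist = {"acta rheumatologica", "advances in biomedical sciences"}

def flag_suspect_citations(citations):
    """Return the citations whose journal title matches a blacklisted one.

    `citations` is a list of dicts with at least a "journal" key.
    Matching here is naive exact-match on the lowercased title; a real
    tool would need fuzzy matching and ISSN lookup to be reliable.
    """
    return [c for c in citations
            if c["journal"].strip().lower() in blacklist]
```

An editor could run an accepted paper’s reference list through a check like this and send any flagged citations back to the authors for replacement.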

Without getting into the discussion of whether a blacklist is desirable, I would note that your link for “thriving black-market economy” is to a paper that vastly overstates the demonstrated problem, quite possibly by a full magnitude. I discuss this in Chapter 4 (pp. 28-30) of Gray OA 2012-2016: Open Access Journals Beyond DOAJ, http://citesandinsights.info/civ17i1.pdf

[It’s the January 2017 issue of Cites & Insights.]

I’m not saying there aren’t questionable journals (“predatory” is a tricky term), because there are, and I certainly believe Cabell’s approach is more plausible than Beall’s, but the magnitude of the problem is much smaller than the flawed study linked to indicates. (Yes, it was peer-reviewed–openly–but, notably, none of the reviewers made any attempt to critique the statistical/sampling methodology. Fact is, a non-random sampling of 6% of a wildly heterogeneous universe is unlikely to yield good results; thus my preference for 100% “sampling.”)

Thanks for these thoughts, Walt. Two follow-up questions for you:

1. For our readers, could you provide a tl;dr version of the conclusions from your paper? How much smaller do you believe the problem is than the paper I cited suggests it is?

2. How might someone examine a “100% ‘sample'” of the predatory/deceptive journals universe?

Rick: Your second question is excellent and unanswerable. I defined “gray OA” as “either on Beall’s lists or formerly in DOAJ but not now included.” I could, and did, do a 100% survey of those journals and “journals” (that is, journal names with no real presence – the majority of them, nearly two-thirds).

As for the first: the chapter in question is only three 6″x9″ pages, but the key finding is that the journals in the list when studied, that actually had some stated evidence of problems, published around 30,000 articles in 2014, not the 420,000 estimated from the study (which looked at 613 journals from a universe that numbered nearly 19,000 when I looked at it).

I estimate that a realistic number for 2014 articles in questionable OA journals is roughly 120,000, which is larger than 30,000 but a whole lot smaller than 420,000. Of those, only about half–62,191–are in journals where Beall offered even minimal evidence (his universe more than doubled since the Shen/Bjork study); the largest other categories are either journals that hide or don’t state their article charges or those that struck me as being papermills, that is, having absurdly short review periods.

Hi, Walt —

The item to which you linked in your comment is a 65-page paper. Did you intend to link to something else?

Never mind, I think I get it now — you linked to the 65-page chapter, but were only referring to pages 28-30. Correct?

But this raises a different question, since I don’t particularly have a dog in your fight with Shen and Bjork: would you not characterize the predatory-publishing economy as a “thriving black market”? If not, how would you characterize it? I get the impression that you don’t think it’s a big problem, but I don’t think you’ve ever come out and said that in so many words. How big an issue do you think this is? Should we be giving it serious attention?

I don’t offer an opinion of how big the problem is for three reasons: 1. I don’t think there’s much agreement as to what “the problem” is; 2. I don’t have any way of knowing the overall size of the journal-article field, so am uneasy about saying whether 29,000 to 120,000 is a tiny or medium-sized portion of it (or whether that has any relevance to “the problem”); 3. I’m not a librarian, not an academic scholar, and not possessed of the self-assurance or arrogance (take your pick) to suggest that my opinion would matter much.

To my mind, there are two categories of truly predatory journals: 1. Journals that charge a fee but don’t state what that fee is; 2. Journals that successfully pretend to be other journals and thus get papers that should have gone to the other journals. I don’t believe either of those is a large portion of journal-article publishing. Beyond that…it’s complicated. I’ve tried to note what others are saying and to provide some real numbers; I am not and do not intend to be the OA serials whisperer. (And, it must be said once again, despite Beall’s blinders, if there are questionable journals that cause some form of damage, that most certainly includes some non-OA journals as well as some OA journals.)

I don’t offer an opinion of how big the problem is for three reasons: 1. I don’t think there’s much agreement as to what “the problem” is;

I don’t know, Walt, I think the problem is fairly well defined (though not everyone agrees how big it is). The problem is journals that lie about their practices in order to attract paying authors: journals that lie about their IFs, lie about their institutional affiliations, lie about their review practices, lie about their editorial-board memberships, lie about whether they publish selectively, etc. The existence of journals that do these things has been amply documented. I asked how big a problem you think these practices are because I assumed that you find them problematic. Was I mistaken?

2. I don’t have any way of knowing the overall size of the journal-article field, so am uneasy about saying whether 29,000 to 120,000 is a tiny or medium-sized portion of it (or whether that has any relevance to “the problem”);

Sorry, let me clarify my question: I’m not asking you to measure (or even estimate) how big a proportion of the journal-article universe predatory publications represent. I’m just asking for your opinion as to whether predatory publishing is a big problem. I’m asking for your subjective opinion here, recognizing that it will be to some degree informed by research that you’ve done on this topic.

3. I’m not a librarian, not an academic scholar, and not possessed of the self-assurance or arrogance (take your pick) to suggest that my opinion would matter much.

But you’ve written about predatory publishing quite a bit, Walt–for example, here, here, and here–so clearly you’re not averse to offering opinions on this topic (nor should you be). Why the reluctance to say whether you think predatory publishing is a significant problem?

OK, here goes: Yes, some publishers produce fake journals, lie to acquire authors & payments, do inadequate peer review, etc. Some of these are subscription publishers, some are OA, some really aren’t publishers at all.

Does this represent a problem? Sure it does.

Does it represent a major problem? Personally, I think it’s pretty small beans in the range of problems we face–even in the realm of scholarly communications. I suspect there are a lot more worthwhile-but-“minor” articles that don’t see the light of day because they’re adjudged insufficiently Big Deals than there are papers that cause harm because they should never have been published, and I think *unavailability* of research results [except to those at the proper institutions, etc.] damages society & research a whole lot more than availability of some badly-vetted papers does.

But that’s just me.

As a volunteer moderator of a free list for scholarly announcements for independent scholars around the world, I used Beall’s list to flag CFPs and conference announcements for further investigation before I sent them out to colleagues. The paywall for the Cabell’s blacklist will make it impossible for facilitators like me to use it.

Margaret, you raise a very important point. This is the two-edged sword of managing a blacklist: if it’s free, it’s much more useful to many more people — however, without a revenue stream it’s hard to manage it well. (“Free” is a price we all love, except when it’s the price offered for our labor!) The revenue stream makes the list better, but obviously limits its reach. It’s a genuinely tough problem.

Making something free often limits its reach and longevity. Making something paid often increases its reach and longevity, as active sales efforts go into marketing and managing it. There are those rare instances, often short-lived, when a free resource catches fire and makes an impact. Beall’s list was one such thing, but it didn’t last. So I don’t accept that a “revenue stream . . . obviously limits its reach.” I think it will actually increase its reach.

We’re at the epicenter of the industry in some ways here. Cabell’s will only succeed if they go well beyond the readers of the Kitchen, and they have a commercial incentive to do so via marketing and active sales, both of which will expand the reach.

Kent, can you share an example of when making something free limited its reach, and an example of when making something paid increased its reach? And can you support your contention that these are the more common effects of making things free/paid? (I think longevity is a separate issue, though of course something that dies young for lack of a revenue stream will have limited reach after its death.)

Sure, that’s easy. There are multiple cases of YouTube channels that have spawned major television personalities and shows, which gained greater reach once they were given the paid platforms of cable and broadcast. “Rick and Morty” started out as a free Internet short, for example, and now is a big paid sensation on Adult Swim, with a much larger audience. The same goes for self-published authors who gave their content away for free, had a small and devoted audience, and found mass market success after signing commercial deals. The author of “The Martian” is a great example of this. It only gained a large audience after it had a commercial future, which led to strong marketing support and a clear upside for all.

The problem with generating examples is that there are tons of publications or shows or materials you’ve never heard of because they are free. The recent discussion about the Oxfam publications on liblicense comes to mind. The publications you mainly hear about are paid, because they are marketed, and they are marketed because they are paid. Distribution channels, marketing to cut through the clutter and create awareness, and so forth — it’s all the more important in a crowded information space.

“Rick and Morty” started out as a free Internet short, for example, and now is a big paid sensation on Adult Swim, with a much larger audience.

That’s not really an apposite example in this context, though, is it? “Rick and Morty” reaches “paid” audiences in the sense that they pay for cable, but not that they pay directly for access to “Rick and Morty.” In terms of consumer economics, that’s a big difference. If the latter were the case — if people had to pay extra for access to “Rick and Morty” and began doing so in larger numbers than consumed it for free — it would be a much more relevant example in the context we’re talking about.

As for the self-published authors who initially give their work away for free, are you certain that it’s the freeness of the work that actually limits its reach? Isn’t it more likely that they were giving away their work before they got famous, and that the initial giveaway contributed significantly to their fame and made charging for their work more feasible later?

As for the difficulty of citing examples of things you’ve never heard of because they’re free: I feel your pain, but if you’re going to advance the argument that freeness can limit reach (let alone that it “often limits its reach”), shouldn’t you be able to support it with specific examples? To assert that the Oxfam materials are little read because they’re free is a pretty big claim. If there isn’t a way to support that claim, then doesn’t that suggest that factors other than freeness may be at work?

Rick, you addressed your comment to Kent, but I want to chime in, if I may. You asked if there are examples of under-distributed content that is lacking in distribution because it is free. I would say most free content in the scholarly area fits this description, but for a specific example, think of the current thread on liblicense about Oxfam. As for things that get more distribution because there is a price attached, think of Netflix and HBO. I really see this entire discussion as a red herring. The *most* distributed content is content that begins behind a paywall, with all the marketing and branding attached to that, and then becomes free later.

Joe, what I’m still looking for is evidence that supports the contention that freeness itself is what limits the reach of (for example) the Oxfam materials. Wikipedia, the Merriam-Webster Dictionary, Slate, and too many other free online resources are too widely read for me to accept as self-evident the assertion that freeness in itself tends to be a limiting factor on reach, which is what I believe Kent is asserting.

Rick, speaking as the former president of Merriam-Webster, I ask you: Do you have any idea how much money is behind the “free” Merriam-Webster Web site? It makes millions in advertising, and it has a staff of developers, Web optimizers, lexicographers, and sales people to make it work. Not to mention the aggressive PR campaign run by Peter Sokolowski.

Absolutely, Joe — I totally understand that Merriam-Webster is a very expensive operation to run. My point is that I strongly suspect that the fact it’s free for people to use has contributed to its expansive reach, not inhibited it. If end users had to pay for access to it, I strongly suspect it would consequently reach fewer people than it does now, not more people. (I’m completely open to evidence to the contrary, but I’m still waiting to see it.)

Agreed. It strikes me as a little strange that more people aren’t raising an eyebrow at the fact that something that was once provided for free is now being resurrected as a subscription-based product. I understand that it takes time/money/resources to do something like this correctly (which makes what Beall did all the more impressive), but I can’t quite shake the feeling that charging for something like this somehow goes against the spirit of it.

Good review of the product, though. However, just like we did with Beall’s List, let’s not go too far down the rabbit hole and fail once again to see the forest for the trees by arguing about marginalia and worrying that too many bad/borderline publications might be getting swept up as well. Indeed, that was the type of criticism of Beall’s List that always drove me crazy – here’s a guy who put in enormous amounts of work to create a really useful resource, the first of its kind, who endured years of ad hominem attacks and lawsuits in the process, and at times it seemed that all people could do was argue over the inclusion/exclusion criteria and criticize his methods when the vast, vast majority of publishers and publications on his list were exactly what he said they were.

Re: “marginalia,” that’s easy to say, when you’re nowhere near the fringes. One false positive is too many if you’re that one. As I’ve argued two years ago: “How much collateral damage is tolerable? That’s an ethical issue when aiming at a legitimate target (predatory journals), even if you’re nowhere near the line of fire.” https://scholarlykitchen.sspnet.org/2015/08/10/defending-regional-excellence-in-research-or-why-beall-is-wrong-about-scielo/#comment-59841

Thank you for the useful assessment and comments. Just a couple of thoughts in a personal capacity – to pick up again on the point of accessibility to this new Cabell’s directory: as predatory publishing largely seeks to associate with OA, attracting authors through the open Web, the set-up of the list as a not-visible-to-all resource could seem somewhat out of kilter? Large groupings of prospective authors, including many in less well funded institutions and organizations around the world, may lose out on this resource? Perhaps there will be arrangements made for these contexts? Understandably the new resource needs to be invested in to remain current and financially sustainable, but some way for greater outreach?

We hear directly from researchers, librarians, research officers, funders, journal editors etc in the Global South who face ongoing problems and challenges with issues of predatory publishing and awareness-raising of the issues among early career scholars and others. This was articulated too to some extent at the recent Publishers for Development and SOAS Africa conferences and at the Research4Life meetings. There are real needs around awareness and resources to help authors navigate the increasingly complex and insidious world of predatory publishing.

Also, what might we use as alternative terms for blacklists and whitelists?

Useful to keep the conversation going, hearing from a wide variety of stakeholders as to how to support knowledge sharing and skills development to navigate scam publishing outfits. The work of Think-Check-Submit is helpful in this regard.

Look forward to hearing more soon too from INASP and partners about its resources and guidance in relation to assessing journals for integrity as well as good practice.

Thanks for your thoughtful comments, Janet. I hope someone from Cabell’s will chime in on what they’re doing to maximize the availability of their Blacklist product. Personally, I don’t see anything particularly “out of kilter” about the fact that they’re charging for access — it would be wonderful if everything could be both high-quality and free, but most of the time quality does cost money. There’s no approach to this service (or any other) that doesn’t involve some degree of trade-off. In the case of Beall’s List, we got a somewhat sloppily-managed list for free because that’s all one person can do in his or her spare time. Cabell’s is offering something more rigorous and transparent, and I don’t think it’s unreasonable that they’re charging for it. (I say that, of course, without knowing how much they’re charging.)

I want to second what Janet said about concerns about the availability of this list to scholars in less economically developed countries. I just returned from Ethiopia, and I heard questions about OA/predatory journals from academic colleagues there. Specifically, they asked how they can afford to pay OA fees, and they weren’t always certain how to distinguish quality OA journals from predatory journals (if they even knew there were predatory journals). And, for folks who make $430 US a month, OA fees are beyond exorbitant. Yet, these faculty, many of whom have Master’s Degrees but not PhDs, are under great pressure to publish their research in international journals.

Some vendors have made their databases free to scholars in countries like Ethiopia, which is a tremendous help. Let’s encourage Cabell’s to do so as well.

Ran out of thread. Whether “Rick and Morty” benefits from a platform economy (cable) or another paid medium, the marketing to keep the money flowing and expand the audience supports awareness. The incentives are all there. With free, the incentives are not there. It’s that simple.

My experience is that making something paid drives sales and sales-oriented activities that expand reach and awareness. Making something paid definitely installs incentives to market works, create awareness, drive sales, and work to expand the reach. Cabell’s list will probably have greater reach than Beall’s simply because there will be an organization with sales and marketing people and channels driving awareness and working to secure sales. There is no mystery here.

Whether “Rick and Morty” benefits from a platform economy (cable) or another paid medium, the marketing to keep the money flowing and expand the audience supports awareness.

Agreed. So it sounds like you’re saying that free things tend to have less reach because they don’t generate revenue needed to market them — is that correct?

If so, then wouldn’t it be true that if marketing money came from elsewhere, the free availability of the content would not inhibit its reach, but would rather enhance it?

If so, then we’ve established that freeness actually does enhance reach, all other things being equal, which I think was the point. No one is arguing that all free resources have universal reach, and I don’t think anyone would argue that an unmarketed free resource will reach as many people as an effectively-marketed one.

Rick, we really don’t agree here. I think it’s not only adamant advocates of free information who would argue that open means more distribution; even moderate ones feel this way. Marketing is considered to be in poor taste, and so many people pride themselves on being (they say) immune to advertising. But of course even the most intellectually sophisticated people are like clay in the hands of Madison Avenue. I recall a dinner party of all academics except me, and everyone agreed that they paid no attention to advertising. Every single person present drove a Volvo. The problem with “all things being equal” when it comes to open or toll-access is that all things are never equal. Markets are created by marketing.

I disagree that free things tend to have less reach. Much depends on the defining of “free”. Network TV and radio were free to people, with the programming being paid for by advertisers (which marketed products that TV watchers bought, and round and round we go). FB is free, as is Twitter. These examples of media are very much NOT limited in reach (although we could argue about scope in a different thread). Of course, social media has more general appeal and utility than predatory publishing.

My concern regarding a paywall or paid model is lack of access for those who need it most. I do not underestimate the prevalence of sham publishers, con artists and grifters out for medical dollars; research publishing is a cash cow to them. I have medical residents come to me every month with the question of whether or not X journal–which has emailed them an invite to publish–is “ok”. Often these are FMGs (foreign medical graduates) who are new to this country, but sometimes it’s US citizens who are asking. They are far from stupid; they are inexperienced in this area, as are we all, really.

IMO, what is also needed is a tightening of the network of healthcare researchers and clinicians; more effort should be put into knowing your colleagues so that schlock suddenly published out of Y institution in X journal is not simply accepted (but I realize that this does not prevent the original grift). This presumes an ability to keep abreast of current research and faculty (globally), which is really an impossible task. But a familiarity with others in the field can only help in preventing some of this (or at least I hope so. I cannot even begin to address the researchers w/o integrity because that way madness lies… a very real and increasing danger).

So, Cabell’s can and probably will be purchased by the big tertiary research centers (we have heard nothing about how much this costs; I’m betting it’s not cheap). An annual subscription? Will subscribing allow access to past lists or is it for the current year alone? Will prior years’ lists be made available for free? The structuring of this revenue stream has huge impact on the entire industry, not just the institutions paying for the subscriptions.

IOW, where does it being a paid product leave the rest of us? I’m a hospital solo librarian with 2 libraries and seven residency programs to serve (along with attendings, nurses and ancillary staff). I have no budget for this resource. I have no time or resources to create my own list. I am not alone–if anything, I am the majority of librarians in healthcare (I don’t have actual numbers either way). What’s a mother to do? Beall’s List at least attempted to inform all fellow librarians of these dangers. Cabell’s, as proposed, is not helpful to me at all. Which raises this specter: if Cabell’s moves to ad revenue to support more access, and if publishers buy ads in it….how quickly can you say “conflicts of interest”? It’s a conundrum, for sure.

Just wondering if, in the planning for Cabell’s Blacklist, the date-sensitive aspects of moving into or out of good standing were considered as a data point? That is, if a publisher mends past predatory ways and is removed, I might not know that issues from a certain time period are sub-par; on the flip side, a once reputable journal that goes astray would get all of its content smeared, unless status is tracked and detailed over time….

I surely hope there is (or soon will be) an API. Without an API it will be mostly impossible to build the resource into library workflows or build tools that raise user awareness in the context of ordinary interactions.
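To make the point concrete, here is a minimal sketch of the kind of workflow integration an API would enable. This is purely hypothetical: Cabell’s had announced no API at the time of this discussion, and the record shape, field names, and sample data below are all invented for illustration.

```python
# Hypothetical sketch only -- Cabell's has published no API; the record
# shape, field names, and sample data below are invented to illustrate how
# a library might embed blacklist checks into an ordinary workflow.

def index_blacklist(records):
    """Index (hypothetical) blacklist records by ISSN for fast lookup."""
    return {r["issn"]: r for r in records}

def check_journal(issn, blacklist_index):
    """Return a warning string if the journal is blacklisted, else None.

    Absence from the blacklist is not an endorsement: it may simply mean
    the journal has not yet been reviewed.
    """
    record = blacklist_index.get(issn)
    if record is None:
        return None
    violations = ", ".join(record.get("violations", []))
    return f"Warning: {issn} is on the blacklist ({violations})"

# Invented sample data standing in for an API response:
sample = [{"issn": "1234-5678", "violations": ["fake editorial board"]}]
index = index_blacklist(sample)
print(check_journal("1234-5678", index))  # flagged with its violations
print(check_journal("0000-0000", index))  # None: unreviewed or clean
```

A real integration would surface a warning like this at the point of user interaction — say, in an interlibrary loan form or a “where should I submit?” consultation — rather than requiring a separate lookup.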

Five days ago I wrote to Cabell’s to ask about inexpensive pricing for developing countries’ libraries. I also asked how to find out if INASP’s JOLs (Journals OnLine – see: http://www.inasp.info/en/work/journals-online/current-jols/) were listed or in the queue for review, since we have no access. So far there’s been no reply. (Also, I personally am interested as one who’s asked to review articles for journals, and I found Beall’s list useful as a starting point — but as an individual I have no access either.) So, we shall see.

Meanwhile, here’s a comment, following on Andy Nobes’s. At INASP, one of Andy’s areas of responsibility includes the JOL programmes, in which partners in developing countries produce just under 400 regional open access journals. INASP has a comprehensive grading scheme for these journals. Sioux Cumming, also of INASP, described to me via email the INASP Journal Publishing Practices and Standards (JPPS), as follows: “At the moment JPPS will be only for the JOL journals because we have the closest contact with them and we want to help the editors to improve the quality of their journals. The project has a dual purpose: for researchers and readers, the JPPS levels will provide assurance that the journals meet an internationally recognised set of criteria at a particular level. For the journal editors, the detailed feedback from the JPPS assessment helps them to identify ways to improve their publishing practices and standards with a view to achieving a higher level at the next assessment…. our long term plan is to use the JPPS assessment process with other organisations from the south. Our thrust is definitely educative and informative rather than just a ‘white’ or ‘black’ list.”

Have a look – it’s an impressive and useful effort, with a well-defined scope.

INASP JPPS is a wonderful resource; thanks for sharing.
I also found the INASP Editor’s Resource Pack useful.
The story behind the development of these resources must be fascinating!
I’m only aware of similar practices in the SciELO network:
“Criteria, policy and procedures for admission and permanence of scientific journals in the SciELO collection”
– 2004 version, in English: http://www.scielo.org/php/level.php?lang=en&component=42&item=2
– 2014 version, in Portuguese: http://www.scielo.br/avaliacao/20141003NovosCriterios_SciELO_Brasil.pdf

Cabell’s Blacklist Violations will be useful as a check-list. It can be of benefit for genuine journals published in developing countries. It distills what’s expected for a journal to be considered professional in a Western audience’s perspective. A lot of these journals are run on a best-effort basis by university faculty, library staff, post-graduate administrators, scholarly societies, etc., most of whom have no formal training in scholarly publishing.

Thanks for these thoughts, Felipe. They point up a very important improvement that I would like to see with Cabell’s Blacklist: a clearer distinction between journals that are not being run according to the usual standards of professionalism, and those that are actively seeking to deceive authors and readers. Those in the latter category are doing something much worse than merely failing to live up to what’s “considered professional in a Western audience’s perspective.”

Many thanks Rick for the excellent review. I would like to know if there are any subscription journals listed in the blacklist or not?

Do you think that in the future Cabell’s Whitelist will change the scholarly ecosystem in terms of considering the Impact Factor the only way to evaluate the researchers and the research?

I would like to know if there are any subscription journals listed in the blacklist or not?

I don’t know — in order to figure that out, one would have to examine every journal listed.

Do you think that in the future Cabell’s Whitelist will change the scholarly ecosystem in terms of considering the Impact Factor the only way to evaluate the researchers and the research?


The logical extension of Cabell’s database is a CV rating service, much like a credit check. A university considering hiring or promoting a scientist would send their CV to the rating service. Each claimed publication would be checked and evaluated in a report. A dodgy publications record would be quickly outed, allowing the tenure committee to claim due diligence.
A CV rating service would hit predatory publications right at the source of their revenue: authors who, for whatever reason, pay their fees.
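The screening step this proposal describes could be sketched roughly as follows. This is a hypothetical illustration of the idea, not an actual Cabell’s service; the journal names and list contents are invented, and a real service would draw on maintained blacklist/whitelist data rather than hard-coded sets.

```python
# Hypothetical sketch of the proposed CV-rating service: sort each claimed
# publication's venue into blacklisted / whitelisted / unknown buckets.
# All names and list contents here are invented for illustration.

def screen_cv(publications, blacklist, whitelist):
    """Classify each publication by venue and return a simple report."""
    report = {"blacklisted": [], "whitelisted": [], "unknown": []}
    for pub in publications:
        venue = pub["journal"].strip().lower()
        if venue in blacklist:
            report["blacklisted"].append(pub["title"])
        elif venue in whitelist:
            report["whitelisted"].append(pub["title"])
        else:
            # Grey area: regional, defunct, or non-English journals covered
            # by neither list would still need manual review.
            report["unknown"].append(pub["title"])
    return report

# Invented example CV and lists:
cv = [
    {"title": "Paper A", "journal": "Journal of Advanced Scholarly Research"},
    {"title": "Paper B", "journal": "Regional Studies Quarterly"},
]
report = screen_cv(
    cv,
    blacklist={"journal of advanced scholarly research"},
    whitelist=set(),
)
print(report["blacklisted"])  # ['Paper A']
print(report["unknown"])      # ['Paper B']
```

As the follow-up comments note, the “unknown” bucket is where the real labor lies: matching free-text venue names on real CVs to canonical journal records is exactly the part that resists full automation.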

If anyone is seriously interested in starting such a business, I’d be interested in talking to them. It seems like a real opportunity.

This is an interesting idea. However, I know from recent experience that it can be really time-consuming sifting through CVs of international researchers to assess their publications, and make an informed decision on the quality of their outputs. Sometimes you are lucky and have a tech-savvy researcher who has embedded DOI hyperlinks, or links to a fully-completed ORCID/Google Scholar profile. But often you need to do some additional googling and investigation. Similarly, it can sometimes be really easy to scan down CVs and spot researchers who have been ‘careless’ with their journal selection, but it can also be very complicated – there are many grey areas, especially around regional journals (unless Cabell’s are going to assess all current journals in existence), defunct journals, and non-English outputs. In principle I think it’s a good idea, but it might depend on the skill of the assessor (I’m assuming this can’t be fully automated), the quality of the data available to them, and the quality/uniformity of CVs they are assessing.

Thomson Reuters routinely parses citations to get the IF, so it can’t be that time-consuming. Most references to journals on Cabell’s lists, legit and otherwise, probably could be verified quickly. You’re right that a few oddball cites might take some serious time to check.

I would distrust any list that has a commercial background – as soon as someone’s aim is to make money, they will always err on the side of profit. Isn’t that why we scientists now must disclose funding sources and other potential conflicts of interest?
That point (amongst others) was the very reason that Beall’s lists were so very valuable – they were published as a not-for-profit enterprise. That is also what makes the pay-to-publish model of open access automatically suspicious and so extremely dangerous to science. A pay-to-access list is in the same category as a pay-to-publish journal.
Unless we can get away from this commercial approach to science we are reinforcing the current ‘post truth era’ that is propagating ‘fake news’, ‘alternative facts’ and unfounded statements. As scientists our primary responsibility is to come up with facts, not profit.

Comments are closed.