Pool on Herricks Burn: the deep dark pool created by the dam on the Herricks Burn. (Photo credit: Wikipedia)

It is vitally important that we bring an end to this crisis of trust before it spreads any further, that we bring back order, fair dealing, and trust. . . . Technology, market structure, and new products have evolved more quickly than our capacity to understand and control them. The result has been a series of crises over the past few years that have caused many . . . to lose confidence or to think that the whole system is a rigged game.

The quote above is not drawn from one of the recent scandals or stings in scholarly publishing, but from Scott Patterson's book "Dark Pools: The Rise of the Machine Traders and the Rigging of the U.S. Stock Market." This excellent book is even better than Michael Lewis's recent "Flash Boys," giving a broader historical perspective on the development of the technologies that are now levying hidden taxes on investors in a way that would make August "Gus" Gorman proud.

Patterson is quoting Thomas Peterffy, one of the innovators of electronic trading, who originally viewed it as solving problems and delivering fairness to the market. Over time, as the unintended consequences mounted, its true effects changed his mind: the market had become more opaque, filled with deception and theft, perverted by the very speed he thought would be its salvation. He and others began warning that the market was no longer serving its intended purpose.

This may sound familiar. For 20 years, scholarly publishing has been responding to what some term "the digital revolution." Promising an end to scarcity and delays, the technocrats, and I count myself among them up to a point, believed all the benefits of abundance and speed would only help science advance. Rapid publication practices were established, online-only journals and journal sections were pioneered, and online manuscript submission and editing became commonplace. In addition, movements like open access (OA) came into being.

Now, 20 years on, we find ourselves with an OA environment known as much for inadequate and exploitative publishing practices as for any increase in access; with a growing number of retractions; and with experiments that purposely push normal controls to the side in order to increase the throughput of the system.

Strong OA journals are undermined by this unfortunate situation. Worse, more subtle and clearly addressable limitations and problems with OA aren’t front and center because these practices are such an obvious blight on the industry. Publishers big and small have been caught up in stings showing their peer review practices are inadequate. The “pay for play” label is starting to stick to OA, and it may be hard to remove. And the industry as a whole is getting a black eye because we aren’t able to put in the kinds of controls readers and the public expect from us.

What incidents have damaged our collective reputation? Well, they range from the infamous “predatory publishers” to the more subtle lack of adequate review revealed in various stings, from the more widely publicized John Bohannon sting published in Science to a variety of individual tests of the OA journals market which have led to standalone stories of journals that publish nonsense for money.

No serious publisher — no matter the business model — has any incentive to see these practices continue. There is a risk to the overall reputation of scholarship at play, and publishers are a major public face of scholarship, as well as providing important third-party validation functions. If both are damaged — i.e., if a major outward-facing aspect of scholarship is deemed no longer trustworthy and if the third-party validation step is deemed of marginal value — irrevocable damage to scholarly communications could ensue.

This is another way of saying that the big brands caught up in the Bohannon scandal have a lot more at stake than a few hundred authors paying APCs. Meanwhile, more careful OA publishers and editors are tainted by scandals like this. Non-OA publishers see a problem in their professional ranks, and it reflects to some extent on them, as well. And for the OA movement overall, incidents like this have a chilling effect on the business model, which has incentives that encourage throughput. More broadly, our industry’s ability to handle innovation is called into question. To some, it may seem we can’t innovate to provide public access without compromising quality or allowing exploitative practices to emerge on a widespread scale.

Interestingly, a recent study suggests that the problem of trust is not one scientists see as their problem — that is, they have ways of establishing trust and filters which naturally steer them away from shady publishers and publications. So, the problem is really ours to solve, because it is the reputations of publishers, editors, and reviewers currently doing good work which are at stake.

What can our industry put in place to create incentives for publishers to avoid shady practices?

Directories of OA publishers have a place in this, but they suffer from a basic limitation, with an added loop of complexity: users need to know to consult a directory before reading a paper. And then the question becomes, which directory? There are a few, and their quality varies. Codes of conduct can help here, but again, which code? And how does compliance get carried forward (and lack of compliance communicated)?

The economics are one area to examine. Shady practices are not limited to the OA business model, as the “Elsevier fake journals” scandal showed. In that case, advertising and sponsorship dollars tempted a regional executive to skirt ethical publication practices. The shady practice was discovered, and the responsible party punished. But the temptation still exists.

Gold OA presents a true conundrum here, as it is the model many predatory publishers use, yet it can and does support good journals, as well. As I noted in a recent post, Gold OA faces a system-wide challenge from the emerging acceptance of unpaid embargoes to support public access, so the incentives for fly-by-night Gold OA publishers may begin evaporating. This may be an opportunity quality Gold OA journals want to seize, a window in which to differentiate themselves.

One alternative is simply to persist with stings and scandals, in hopes that they ultimately make the overall culture wise up. This seems a painful path with too much collateral damage. It also seems unlikely to work. To return to the analogy of the financial markets, the 2008 meltdown and the 2010 "flash crash" had slight and temporary ameliorative effects on the markets and their regulation, but the trends toward the opaque high-speed trading currently dominating the equity markets were not deflected for long. Trust eroded, but larger trends made the markets impossible to avoid for people with 401(k)s and mutual funds. Academia's escalating "publish or perish" culture seems to be exerting similarly unavoidable pressures.

It’s also important to remember that publishers aren’t the only ones pushing boundaries. Authors are doing their fair share, as well. In a handful of meetings recently, I’ve heard this discussed both publicly and privately — the increase in retractions, the increase in fraud and plagiarism occurring across disciplines, the increasing rate of problematic submissions, and so forth. Just like Wall Street and the drive for profits, academia is pushing so hard on the prestige and publication paradigm that it is warping in response. Authors and publishers alike are bending the rules to relieve the stress.

This has been a long-term issue, with Phil Davis writing in 2010:

. . . academia is a highly conservative system, largely determined by disciplinary norms and organized around external peer-review and assessment. Starting from this premise, the resultant lack of scholarly engagement in radically new forms of publishing should not be that surprising.

The finding mentioned above, that academics themselves view this as a problem they can safely ignore, underscores the point.

I wrote about this in 2012, stating:

If the goal is to change the culture of publishing, those who wish to do so will need more than technology and its superficial effects. They will need to go deep into the incentives, cultures, and habits of mind over generations.

There is a temptation to write this all off as evolutionary, a long-term shift in the market that will ultimately result in an improvement. This is to me a perilous rationalization, as it allows those of us currently running this industry to abdicate our responsibility for running it responsibly. As the financial industry has found out, capitulating to technological trends and allowing them to replace the inherent value propositions the public expects can lead to real trouble. It's subtle, as you may never notice the tens of thousands of dollars missing from your retirement accounts, skimmed away pennies at a time. The trust problem with scholarly and scientific publishing is subtle, too. But it's unmistakable, and it needs to be addressed.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.


52 Thoughts on "How Do We Address the Continuing Problem of Inadequate and Deceptive Publishing Practices?"

There is a simple solution that would largely solve this problem and that I believe should become an expectation for peer-reviewed journals, whether OA or subscription: publishing the review history for all published manuscripts. Here is an example of one of my own articles in PeerJ.


With the review history published, any question about the quality of the peer review is open for anyone to evaluate. Whether or not to make the academic editor's or reviewers' names public (as the editor and reviewers chose to do for my manuscript) is another matter that should (as is done by PeerJ) be left up to them.

Publishing the original submission, reviews, editor comments, author revisions and response for each round of revisions would clarify the quality of the review process for everyone to see.

I see very little downside to such a system, which would largely make the problem of low-quality, sleazy journals go away, or at least expose them for what they are.

The downside is that this is a laborious and expensive solution to what may not be an important problem. Publishing every reworked version of a submitted paper plus all the reviews that cause the rework is not trivial. The volume of publication would be much larger, perhaps five times what it is today, maybe more.

There is virtually no labor; the process is baked into the journal management system. All the review material is in the journal management system anyway, so when an article is published, the review material just gets published along with it. I am sure there is a way to pull reviews that are just so bad they shouldn't be made public, but that could easily be set up so the academic editor just checks a box to leave a particular review out.

As for disk space, it is trivial. A typical manuscript is about 3 megabytes, and the reviews are all straight text, so you are talking about maybe 10 megabytes of additional space on average. I think 4-terabyte drives are running about $150 these days.

It is not a question of disk space, but rather of having to read about your typos, etc. I do not see raising the sea of content as helping.

Your assumption that there is no labor is not correct. Someone would have to manually check that all copies of all reports/files are in place, assuming there were reliable methods for the files to be automatically gathered in the first place. Also, who’s to say what reviews ‘are so bad’ they shouldn’t be made public? This would lead to widely varying practices across publishers and so would be generally useless.

If one agrees that directories of practitioners of deceptive and inadequate publishing practices have a potential place in shaping the decision-making of publishing academics (here, one assumes that Beall's List is one such main directory), then practical suggestions for its improvement might be added to these discussions, as well as for counterpart lists such as the Directory of Open Access Journals (http://doaj.org). Agreed, many academics have no way of knowing such directories exist, but we might try to continue citing the major directories that do exist, for better or for worse, in excellent discussions such as this.

Kent very interesting position.

As a contrarian I have to ask is it our job?

We are a business, no different from the stock market, banking, or the automotive industry. All of the aforementioned are governed by governmental regulations, as are most businesses, because some unscrupulous people engage in business. Perhaps it is time to seek regulation of STEM publishing. The "we" here has no power to enforce regulations or create them.

Actually, there are lots of informal forms of self-regulation by communities.

Yes there are, but Kent seems to be calling for more.

Personally, I would not like to see the government involved. Much too messy. I think the unscrupulous will go the way of the dodo bird. Scientists are pretty smart, and the community is limited in number. One gets burned, and the community knows rather quickly.

“Is it our job?” is a question that, I think, is fundamentally reactionary and not designed to result in useful answers. (It’s a rephrasing of the more baldly reactionary question “Does taking up this challenge mean I would have to deal with change?”.) I think a better question would be “Does this need to be done?”. If the answer to that question is yes, then the next question is “What can I do to help bring it about?” The answers to that question will vary depending on where we are in the system, but I think this line of questioning keeps the focus on the problem, rather than on preserving our traditional practices.

Perhaps the lack of trust is the solution, not the problem. That is, APC publication of one's article may simply be devalued in the marketplace of ideas, compared to subscription publication. APC journals may be the proper venue for people who are not in a publish (well) or perish situation. For example, most authors publish only one paper. Then too, there are those with tenure. So it is not clear that additional regulation is required.

There have been a few ideas floated at conferences recently. John Bohannon and Ivan Oransky have apparently been discussing how there should be an "IRS" of publishing that audits publishers on some sort of schedule to determine who is doing good vs. evil. They discussed that at the STM meeting. At the Council of Science Editors meeting last week, there was a suggestion that COPE or CSE take over the work being done by Jeffrey Beall. COPE declined on the spot, saying that they serve publishers and not authors, and this would be an author service. CSE is not likely to maintain a blacklist; however, the Editorial Policy Committee is working on a white paper for authors about what to look for in a quality publisher and what to be wary of.

I do not think that publishers need to make peer review 100% transparent in order to “win back” trust. For many fields of research, this is a non-starter.

You ask, "How do we address the continuing problem of inadequate and deceptive publishing practices?"

I see that the OMICS Publishing Group is listed as a supporting member of SSP. See: http://www.sspnet.org/about-us/organizational-members/

In my blog, I have documented the abusive practices of this publisher, practices that have directly hurt and cheated numerous honest researchers.

Why do you allow such a publisher to be among your members? The publisher uses this membership to claim legitimacy and rip off more scholars.

In December, The Scholarly Kitchen published a mean blog post that attacked me personally. SSP attacked me for warning scholars about scholarly publishers while including one on its membership list.

So, to answer your question, I would recommend that SSP (and other scholarly publishing organizations) purge their membership lists of predatory publishers and stop attacking personally those who are trying to warn scholars of the perils of the predatory publishers.

I know nothing about OMICS, but your remark that the Kitchen attacked you personally is off base. I wrote the post you refer to, and it is mostly commendatory. Read slower.

Right, then I’d like to write a guest blog post here entitled “Parting Company with Joseph Esposito” and then falsely brand it as commendatory. Stop dissembling.

Speaking as a chemist, when journals were first created, and even now, one of their major functions was collegial communication. Often reviews required that the experiment be recreated by colleagues to validate the discovery. Today, much of that cannot be done, for obvious reasons. This forces peer review into a different dimension. One does note that over time, ideas are tested and validated or discarded based on experience.

The problem at hand is the default use of these articles for a more deadly purpose: publication, promotion, and tenure. This becomes the dominant reason for the rush to publish and puts everyone at fault for the current status of journals. It is this, perhaps more than the faults exposed by Bohannon, Sokal, and others over time, in all areas: science, the humanities, and the social sciences.

It is that latter purpose that serves as the lock for publishers regardless of payment form. And here is the bottom line. As the little story about food distribution shows, with massive storage in the cloud and the increasing requirement by publishers that authors provide "proofed" copies, with print on demand only, the entire publishing enterprise can conveniently collapse except for the lock on specific journal titles, easily bypassed as we see with OA.

Kent is right. As long as the publishers can convince academics of their value, they will persist with a patchwork of "fixes" to the trust model. The trust problem is a signal that there are cracks that cannot be papered over with temporary fixes. EBSCO's purchase of Plum and a few other recent apps such as SocialCite are fair warning for those below the dam.

Digital publishing allows us to go well beyond pre-publication assessments. Thus, ePublishing enables constant and continuing evaluation of what has been published. As we develop the metrics for such a system, we will need to be especially diligent to insulate those processes as best we can from the corrupting influences of profit and prestige (which eventually translates into remuneration).

Trust and reputation have been big topics at Research Square lately. One of our key considerations in the development of JournalGuide was to create a central location for journal information, across disciplines, around the world, to help authors and readers navigate an increasingly complicated landscape.

JournalGuide is intended to be inclusive – and to combine industry and aggregator knowledge of journal quality with researcher feedback of actual experience.

Since Research Square is author-focused, we can be objective, independent and transparent. We’re looking at the work done by organizations like OASPA and DOAJ to police their content, as well as the coverage and selection criteria for major (and minor) indexes as one type of community input – but we are also actively collecting feedback and ratings of authors who have submitted papers and who are in a unique position to aid their colleagues. Rather than select-out, or blacklist journals, we want to surface journals that are earning and maintaining the endorsement of several of the stakeholder groups.

Instead of creating new hoops for journals to jump through, we will provide something more like an audit, verifying the journal’s credentials across multiple existing indices and qualifications. Our goal is to provide researchers with an objective and comprehensive filter, and to highlight truly legitimate journals.

As far as I know, there is no trust problem for university presses. As creatures of their parent universities, they have boards consisting of faculty members that oversee their activities, and some also have external advisory boards including members from reputable publishing houses elsewhere. Any press that belongs to the AAUP has to go through a rigorous process of application where submissions are reviewed by a subcommittee of the AAUP board of directors (on which I served two terms). As a result, the products of university presses can be trusted to have all been properly vetted, and their reputations reflect the reputations of their own universities too.

Harvey Kane's post, above, speaks directly to this issue. Academics are miners of ideas, some of high quality and others lesser. The miners are under pressure to produce (publish articles). The publishers, whether university presses, foundations, or for-profits, are the factories that process the miners' outputs for sale. These publishers have an embedded infrastructure that needs to be sustained. They have a production schedule to meet and, like any publishing business, a "back office" that creates overhead. They, like academics, need throughput on a timely schedule. Both are caught in an expansion death spiral, needing to constantly feed the system.

I do not believe that this was the intent of the original correspondents and founders of journals. If one follows the foresight literature, especially in the arena of "big data," there is absolutely no need for this ever-speeding treadmill to continue in its history-encrusted, hand-crafted form, particularly for publishers and, for other reasons, for academics.

There are lessons to be learned from the writings on disruptive innovation. Being “best of class” may not guarantee survival.

The crisis is not because there is too much gold being mined. The problem is that there is too much pyrite being passed off as gold, etc. Publishers, as the storefronts, should do a better job of labeling which is which.

I am not sure why you think university presses are subject to this dynamic of ever-expanding output. Most presses, in fact, have not grown significantly larger year over year. In 20 years heading Penn State University Press, our number of journals increased from only eight to just a dozen, and the frequency and size of the issues did not increase at all.

I think the university presses publish what they can do well and do not push the envelope. Of course they are not beholden to investors who want ever more.

It is vitally important that we bring an end to this crisis of trust. Technology, market structure, and new products have evolved more quickly nowadays.

I am not too sure of who doesn’t trust what and whom.

I agree with Kent: publishers have to do a better job of promoting(?) their brand(s). ACS has always promoted the fact that they publish about 10% of what they receive, and their membership is informed of this fact. Thus, one thinks twice before sending an article to ACS, and their journals are considered among the finest published. It is interesting that those journals with sterling reputations never have a problem meeting their publication schedules, while others often struggle or publish a lot of pyrite.

ACS, Springer, Elsevier, and particularly Nature, Science, and most journals with a high "impact factor" are corrupting science and research.
Given that the impact factor is a flawed measure, their corruption is proportional to their impact factors. In other words, the higher the impact factor, the more corrupted the journal/publisher.

In my humble opinion, scientific publication is getting corrupted, particularly by the so-called "leading journals" and "prestigious" scientists.
They think that "good science" is only that done or published by them.
Open reviews would do more harm than good because they would make the comments/evaluation artificial.
An ideal evaluation system may be: known reviewers/unknown authors as it is described here:

I am an editor at a “leading journal”. I do not think that the only “good science” is that published by my journal or others with equally high or higher reputations/impact factors. Please refrain from attacking people that you clearly don’t know.

I forgot to add to the list of those corrupting science the following publishers: Oxford Journals, Wiley, BMC, The Lancet, JAMA, BMJ, Annual Reviews, RSC, Trends, and many others….
These are really the black spots in science.

You also forgot PLOS and their glamorous journals, with PLOS Medicine clocking in with a 15.253 Impact Factor and PLOS Biology coming in at 12.690. I assume you are calling PLOS “corrupting” as well, otherwise you’d look like a complete hypocrite.

Mr. Rorer:

As soon as one publishes and the paper is put before the public it is open to review by whomever. In fact, articles have been retracted because of the scrutiny. Thus, I wonder if you are just tilting at windmills.

It is hard to compete with the best and the brightest. For this reason many leave the competitive arena of “big science” and decide to teach or go with industry. I have known both kinds of scientists and both are dedicated to their art and both are realists.

However, I have known none who corrupt science nor endorse its corruption.

Dear Kane,
By "open review" I meant the reviews (the reports) made by the peer reviewers who decide whether an article should be published or not.
I was not talking about the public comments/reviews made post-publication, when an article is put before the public, but the reviews prior to that.

Sorry, David, you are right! PLOS is of course in my list of “corrupted/corrupting” science!
All arrogant journals/publishers who think that with their flawed high “impact factors” they are the leading science are in my list, including PLOS, Current Opinions, Annual Reviews, Blackwell, Oxford Press!
The list could be long, indeed.
Look here, how humble and modest they are:
They call themselves the "Golden club," but they should be called the "private" or "mafia club."
Mr. Jeffrey Beall should make a list of those arrogant publishers instead of his so-called "predatory publishers."
The true predatory publishers are the big publishers, such as the above, but Mr. Beall seems to have a semantic problem with the term "predator." He describes small, poor, and miserable publishers as "predators," while the real predators are those greedy and wild publishers who have preyed for a long time, until they became monstrous.

What you’re doing here is what physicians call “treating the symptoms rather than treating the disease”. You’re shouting at publishers for providing the very services that the research community has asked us to provide. The problem you seek to correct stems from academia, not from the publishing industry. If Nature stopped having an Impact Factor tomorrow, the problem would still exist. If every academic institution on earth stopped caring about the Impact Factor, no journal would bother with it and your perceived problem would be solved.

Perhaps instead of directing your anger at the service providers, you should instead consider pointing your finger at those demanding the service.

My deprecating of the publishers does not absolve academia/researchers of their responsibility/corrupt practices.
I have no doubt that academia, funding agencies, evaluators, etc. are complicit with publishers.
The impact factor, in my view, has become a "publishing disease." The problem will persist for a while because many researchers are naive! I know some who are attached to the impact factor in a pathetic way! One of them is not able to see the numerous flaws and defects of the impact factor!

You might try talking to those researchers and asking them why they are attached to the impact factor. I suspect that very few would tell you that they care about it because they want to make publishers happy. Instead, they will tell you that they care about it because it is required for funding and career advancement. Publishers are not the ones making decisions on funding or tenure/hiring.

It’s very easy to find someone else to blame when you are the cause of your own problems. Blaming a scapegoat is much easier than taking a long hard look in the mirror. Blaming publishers for following the instructions of academia is not only misguided, it’s pointless. This is a problem within academia, and not a problem that publishers can solve for you.

Kent's metaphor re: "pyrite" meshes well with David's "symptom/disease" metaphor. The best example from medicine is a sore throat in children. If a simple test for strep is not made and strep is present, the problem can lead to serious heart problems and even death. The self-policing via "peer review" is a flawed test, for many of the reasons cited above. As Harvey Kane has pointed out, and as has actually been verified via an international survey, many academics would rather teach than do research in the first place (the rise of adjuncts is an interesting phenomenon in the US).

I would also refer to the "invisible college," which basically points out that much of what is in print has been known by those "in the loop" prior to publication, and that publication is for "public" consumption. May I note Ann Schaffner's simply put article, "The Future of Scientific Journals: Lessons from the Past."

The above being said, publishers, as "factories," need production and are thus dependent on raw materials (articles) to meet production schedules and all the other business elements, whether for-profit or not. Many geared up like US universities after WWII and the Korean War, when massive expansion with no end in sight occurred. And now publishers have defaulted to accepting a self-serving peer review test to basically absolve themselves of, or mitigate, responsibility for the quality of their product.

As with undetected strep and rheumatic heart disease, there are consequences in academia, and there will be in publishing. For those thinking of using post-publication apps to drive the "lay" public to journals, consider changes such as Wikipedia and social media. Post-publication apps that measure "impact" in various forms are palliatives, like calamine lotion on chicken pox.

To push our thinking further: we know that Amazon sells published materials but also materials that it produces. Amazon also maintains one of the largest "big data" server operations and sells analytical services to the US government and others. IBM's Watson and lesser AI systems are now working those databases and making critical evaluations, even in medicine, and maybe in "killer robots." There be dragons here.

I guess the arrogant publishers are the ones who have not published your works.

You seem very bitter. Why?

As a publisher, we simply publish that which is accepted by the editor after review. I guess all the reviewers are involved in a conspiracy. Kind of like the President really being Kenyan and not a legitimate holder of office.

I am sure you have solid evidence of this vast conspiracy which involves the scientific community, the journal publishers, the federal government and other providers of grants. I am surprised it has not made the news, but I am sure you are writing the expose.

Sorry, but this has nothing to do with what I said.
I said “complicit” in the sense they attach a high importance to flawed and biased measures, not in the meaning you understood.

I stand corrected, and hope you forgive my statement. I did not use the word arrogant in my post.

Hi Rorer,

The email address you have used for your comments is invalid, so I cannot respond to the question asked in your latest comment. Please send along a working email address (I’ll keep it private) if you wish to discuss further.

With time, it is inevitable that things change. Publishing, too, has undergone dynamic change, so it is understandable that a suspicion of corruption is shared by many of those seeking to get their academic documents published. However, it may not be true for all. A ray of hope and faith should always exist somewhere within our pessimistic thinking. On the other hand, open access can undermine the real value and content of a document, but it is better to build up optimism and trust, as one may find the right option somewhere.

I am curious as to what "the real value and content" is. As with the produce metaphor, the distributors justified the value of their service by validating "quality" once their distribution services were no longer needed or were significantly reduced. Harvey, in his remarks above, said that they just publish what has been approved via peer review.

And therein lies the flaw in the system. If the journal just publishes what has been approved via peer review, then retractions, for example, need to be made by the editorial board and signed by its members. They must also accept any kudos and be promoted to public acclaim by the publisher. Neither party can escape via anonymity, nor enjoy public promotion, without the other. As I note above, peer review in the current world of academic publishing is, to extend the earlier medical analogies, a test with an increasing rate of false positives. This will show up even more as social media picks it up in post-publication reviews and analysis. And it matters little whether the model is green, gold, white, or blue open access, or traditional.

It is interesting that Plum Analytics tracks a large number of social media outlets and previously dismissed venues where comments and analysis appear. And increasingly, publishers' roles are shifting: the tradition-encrusted process of actual publishing is diminishing, while the role of marketing and promotion grows.

"Big data" and intelligent "bots" able to interpret and argue present a disruptive force. Now academics must argue not only the merits of e-learning systems, including MOOCs, but also face the possibility of their promotion and tenure being placed in the hands of peer review by a committee of bots. And just as this will leave university committees unable to default to journals as a convenient path, it will challenge the business model (for-profit or non-profit) that also depends on this path as a sinecure for its survival.

Let’s stop putting lotion on the surface itch and respond to the etiology. As with strep and other such medical issues, it’s too late to restore the past.

Harvey Kane said: "It is hard to compete with the best and the brightest. For this reason many leave the competitive arena of “big science” and decide to teach or go with industry"…..
You forgot… or they become editors or journal publishers.
For you, is "big science" research only?
Is industry, which produces the real impact and translates research outputs into concrete applications, "small" science?
For you, is teaching and conveying knowledge "small" science, while working in a lab, sometimes for "fun," is big science?
If we are a little bit lucky and get "big results" (to use your term), then it is big science! If not, it is small science! Irrational reasoning, of course. There is no big or small science, dear Kane.
Discovering a tiny or very small aspect of a thing would be as big a science as the biggest science!
For me, on the other hand, dustmen and researchers have exactly the same human value.
A researcher, however brilliant, is not more valuable than a dustman.

Teaching is a wonderful profession, and going into industry is too; however, they are not geared to the scramble of grantsmanship. Industry is geared to an end product; there is little primary research in industry these days. The pressure is to produce a product within a given time frame.

I in no way meant to degrade the career paths of those who do not want nor found it suitable to pursue a career in academia.

Whatever you do, do it well. I was never a scientist but found myself publishing science.

Every person is important and what they do is important. We would be living in a world of dust without dustmen!

However, your railing against those who publish and publishers with the implication that in some way there is collusion among them to prevent others from publishing struck a nerve.

Should one choose to enter the world of competitive academia, one knows the risks, the rewards, and what it takes to achieve in that world. One also knows the importance of the IF to one's career. If your scientific group does not produce a product or improve on one within a given time frame, the group will be disbanded; some will stay and others will be shown the door. The same goes for other career paths: miss your budget numbers and you will be fired. Don't dust and you will no longer be a dustman!
