Major scholarly publishers have made substantial investments in preprints in recent years, integrating preprint deposit into manuscript submission workflows.
Through a new partnership with F1000, Wellcome embraces sketchy peer-review standards, deep conflicts of interest, and financial support of a private, commercial enterprise. Worse, the entire arrangement seems redundant, avoidable, and unnecessary.
The design and construction of article performance measures can reveal deeply held biases.
Articles are published before they’re reviewed; doubts about a paper count as a mark of status; papers need only contain “science”; review and revision can continue indefinitely; and PubMed Central serves as the certifying entity. Welcome to the world of F1000 Research.
Comparing the length of post-publication peer reviews in F1000 Research with pre-publication reviews at four major medical journals shows that authors are less likely to receive constructive or substantial criticism from F1000 Research reviews, despite a highly credentialed reviewer pool.
Expert ratings have poorer predictive power than journal citation metrics, study reveals.
With the creation of Rubriq, co-founders Shashi Mudunuri and Keith Collier have broken new ground: Rubriq is an attempt to provide peer review independent of journals.
Rankings of journals based on F1000 scores reveal a strong bias against larger journals and those with little disciplinary overlap with the biosciences.
Does the release of a journal ranking metric signal a change in vision for post-publication peer review?
Post-publication review is spotty, unreliable, and may suffer from cronyism, several studies reveal.