Overlooking the need for paid Editorial Office staff hobbles many attempts to reform peer review.
The beginning of the holiday season means it’s time for our annual list of favorite books read during the year. Part 1 today, Part 2 tomorrow.
Open data is gaining ground, but is there a revenue stream that would help journals recover the costs of gathering, reviewing, and publishing data?
A comparison of the length of post-publication peer reviews in F1000 Research with pre-publication reviews at four major medical journals shows that authors are less likely to receive constructive or substantial criticism from F1000 Research reviews, despite a highly academic reviewer pool.
Data archiving is becoming the new normal for scientific publishing, but a recent study shows that journals need to do more than just ask for it.
The power and identity of Reviewer 3 springs from the shadows to ensnare the unwanted paper. But is it really a powerful spirit? Or just Dad in a mask?
Conventional wisdom holds that well-known researchers receive more and more requests for reviews, leading some to suggest the system is broken and about to implode. Yet when real-world data are analyzed, some surprises emerge.
When authors think peer review is about their chances of acceptance rather than the quality of their paper, it can lead to the wrong expectations and unproductive behaviors.
Is plagiarism of fiction less of a problem for publishers? Another tale of pilfered prose seems to indicate that checking for plagiarism isn’t something book publishers care about . . . yet.
Allowing authors access to anti-plagiarism software makes pragmatic sense when you consider the demands scientific journals place on authors for perfect English, the pressures of group authorship, and the incrementalism of most papers. Perhaps it could even do more.