Artificial intelligence outperformed human editors in selecting high-impact papers, a Canadian software company claims. Really? Then show me the paper!
A recent study finds that academic press offices exaggerate claims in their press releases about published research. Worse, the vast majority of these exaggerations find their way into subsequent news reporting.
Revisiting an attempt to list the things journal publishers do.
Authors should not be surprised when their open access articles show up in surprising places. Is it possible to embrace open access with some restrictions?
By narrowing the definition of peer review to validation alone, we may be exposing peer review in its least flattering light, while ignoring the more reliable and powerful ways in which it serves science.
An attempt to list a bunch of things journal publishers do. It’s not perfect, but it’s better than nothing.
There’s much more to making “post-publication peer review” work, let alone making it a valid form of peer review. Rebranding comments and letters isn’t sufficient. Maybe it’s time to recognize the over-reach.
All primary data should be made openly available, a UK government report recommends.
The expenses publishers incur rejecting papers and book proposals are about more than filtering.
National Academy of Sciences members contribute the very best (and very worst) articles in PNAS, a recent analysis suggests. Is diversity a better indicator of success than consistency in science publishing?
Are older reviewers more cursory in their reviews? A study by the editor of the Annals of Emergency Medicine suggests as much.
Providing incentives to reviewers may be key to improving the peer review process.
“I have seen the future, and it doesn’t work.” — John Senders, pioneer of the electronic journal
A new study reports conflicting results on whether scholars are citing fewer papers. Is science becoming more elite, or more democratic?