In today’s post, Alice Meadows interviews Jodi Schneider of the University of Illinois Urbana-Champaign about the work she’s leading to reduce the inadvertent spread of retracted research.
If recent examples are any indication of what this trend could do to science, the idea of “reanalysis” needs to be rethought.
A recent “Slate” article shows what can go wrong when we talk about “peer review” as if we all share a common definition about an unchanging phenomenon.
A ruse to self-review and self-recommend papers for publication leads to 60 retractions. Can we find a way to prevent this kind of identity fraud and its consequences?
At a time when more research articles are more readily available to more readers globally than ever before, it’s crucial that we can be confident those papers meet the highest standards and that, on those occasions where they don’t, there is a sound system in place to revise or retract them. So what can we do to make the publishing process more sound?
Retracting a paper identifying a link between climate change deniers and conspiracy theorists provokes more conspiracy theories, but it turns out the real impetus for retraction is disappointingly parochial and explicable.
They Might Be Giants offer a musical retraction notice.
The digital world increases the need to distinguish good information from bad, and despite multiple efforts, we still have a patchwork approach — but more attention is being paid.
In a story of the modern age of fraud, spoofers find their way onto peer-review rosters, reviewing their own papers or those of their friends and competitors.
An interview with Ivan Oransky — physician, journalist, and co-founder of “Retraction Watch.”