At this year's annual STM Week in London, there was a strong focus on collaboration and shared infrastructure. I bunked off one of the days to check out the All Things Coko meeting. Is this the start of a new way to look at scholarly publishing technology?
At the end of February, Nancy Roberts of Business Inclusivity and I co-organized a workshop on diversity for the Researcher to Reader conference. In this post I explore my motivations for doing so and talk about why I think so few men seem comfortable participating in these discussions.
At the Researcher to Reader conference, a volunteer project was launched to define a new suite of indicators to help researchers judge publishers, rather than the other way around.
Hindawi recently announced they would no longer be members of the STM association, citing the trade association’s ‘overwhelming focus on protecting business models of the past’. What does this mean for Hindawi and for the industry?
Of all the gin joints in all the world, a smoky little dive bar in Frankfurt became the focal point of the STM publishing social scene. How on earth did that happen? More importantly, is there a wider significance to its story?
Last Monday, the British public defied expectations and narrowly voted to leave the European Union. What will be the consequences for the academic community and scholarly communication? Here we look at just two of the potential issues.
It was a little while back now that a controversial blogger attacked one or more of the authors of the Scholarly Kitchen for being former academics, questioning whether such people should be working in publishing. In today’s post, Phill Jones argues that such rhetoric contributes to a stigma that is damaging to the health of academia.
Whether or not you attended this year's APE (Academic Publishing in Europe) conference yourself, find out what three of the Scholarly Kitchen chefs thought of the meeting – our overall impressions and key take-home messages.
It’s easy to think that scientific ethics are straightforward and that results that aren’t robust end up in the literature because some people give in to the temptation to cheat. The reality is more complex. If you were in this situation, what would you do?
Criticisms of altmetrics often seem to be equally applicable to other forms of research assessment, like the Impact Factor. Phill Jones suggests that this is not because of a fundamental opposition to altmetrics, but a fear that they will suffer the same pitfalls. The solution is to engage more with a somewhat neglected set of stakeholders: informaticians.