When three of us Scholarly Kitchen Chefs found ourselves attending the APE 2016 meeting in Berlin recently, it was somewhat inevitable that we would feel the need to blog about our experiences – what we learned, memorable sessions and speakers, our overall impressions. This isn’t intended as a complete recap, but we hope that – whether you attended or not – it will give you a good sense of what we all agreed was an interesting and useful meeting.
First up: Charlie Rapple…
Some people think conference tweeting is at best a distraction and at worst a discourtesy to the speaker, akin to passing notes or talking during class. Those people probably rejoiced in Twitter’s outage during the APE conference last week. For my part, reverting to other forms of note-taking served only to highlight the useful role that Twitter plays.
- Better notes. The need to edit for length, in real time, means I focus more intently on the precise points worth taking away, cutting through the noise and leaving me with fewer notes, each more concise. This immediately makes them more memorable, and also makes them easier to revisit later than the streams of consciousness that I might otherwise capture.
- Shared notes, and clarifications. I don’t always hear or understand everything that is said. The way other tweeters summarize talks, questions and comments can be invaluable in helping clarify or capture points I’d otherwise have missed.
- Augmented discussion. Yes, the backchannel is effectively digital note-passing, but it’s usually made up of smart people adding their own experiences of the topic, often with facts, data and links that are useful to follow up later; whether supporting or challenging the speaker’s case, this is often a useful enrichment of whatever discussion is taking place.
- Connections. Particularly if I’m at a conference for the first time (as, indeed, I was at APE), the Twitter channel is a useful place to “meet” people – even if it’s not used to organize a “tweet-up”, it’s still a useful forum in which to learn some names, start conversations, and provide an ice breaker when you spot those people in real life during the breaks.
Twitter’s sustained downtime robbed me of opportunities for shared notes, augmented discussion and virtual connections, but I did at least attempt to retain my discipline as far as note-taking was concerned. Here then are my tweet-length highlights from APE:
- Workflow and infrastructure (or “research mechanics” in Daniel Hook’s phrase) seemed the topics to which speakers most commonly returned
- “We should seek to move to a much broader evaluation of a researcher’s expertise” (Stuart Taylor) — how do we monitor and measure talks, mentoring, reviewing, etc.?
- On the importance of publishing research data: “We should be publishing supplementary articles, not supplementary data” (Barend Mons)
- “Ingenuity” is required to flip from subscription to open access models, but must be based on a fuller understanding of institutions’ current expenditures than we currently have (Ralf Schimmer)
- Publishers have focused on the technical aspects of delivering digital content, rather than the bigger picture of how digital (social) has changed researcher behavior / needs (Hannfried von Hindenburg)
- “Do we as publishers want to support the full interaction around the knowledge, or do we want that happening somewhere else?” (John Sack)
- Individual publishers can’t create reputation platforms but a shared cross-publisher service could help us meet researchers’ needs and compete with those otherwise intent on disrupting publishing (me – Charlie Rapple)
Next, Phill Jones on his takeaways:
The opening keynote came from Prof. Barend Mons of Leiden University, Chair of the European Commission’s High Level Expert Group on the European Open Science Cloud. The focus of his talk was essentially data publishing, but presented through the lens of science as a social, interconnected endeavor. Mons pointed out that science has become increasingly collaborative, with large amounts of data to be curated and shared. The answer, Mons suggested, is data-led scientific communication, in which each research output follows highly ordered structures, creating a massive relational database — the explicitome. Data should be FAIR (Findable, Accessible, Interoperable, and Reusable), with written narrative and qualitative interpretation coming later — the data as publication, and the article as supplement.
The talk was an excellent distillation of the arguments put forth by the open science and data science communities on both why data sharing is important and how it might work in principle. When the videos of the APE presentations are released, I highly recommend giving it a watch if you get the chance. There were questions, of course. Perhaps most pointed was a comment on the fact that the European Open Science Cloud is, well, European and not global. If we really are moving towards an international, interconnected web of researchers and computers, don’t we need a truly global approach? I asked whether a completely structured approach is feasible, given the exploratory nature of science. I wonder whether we risk restricting truly novel science if we don’t build in adequate flexibility.
On the subject of transformative changes, Ralf Schimmer of the Max Planck Digital Library reported on the Berlin 12 conference, this year’s annual check-up on progress towards implementing the Berlin Declaration on Open Access. Schimmer minced no words in setting out the aim of converting the publishing industry to open access. He went on to describe the work his group had done to explore the economic feasibility of such a transition.
There have been questions about the economic feasibility of OA since the beginning of the movement, including from groups very supportive of OA, such as the authors of the Finch Report. Schimmer acknowledged these concerns but dismissed some analyses of the scale of the issue as ‘naive’. He went on to present a somewhat oversimplified economic analysis of the business of publishing, which led some in the room to wonder why publishers weren’t involved in the discussion. Whether or not it was a good idea to start the Berlin conferences without publishers, I think a strong argument can be made that now is the time for a more collaborative approach to understanding the degree to which a transition is feasible, and to mapping out a practical pathway.
There was a lot of talk about both collaboration and infrastructure at APE this year. Alice Meadows chaired a session on publisher efforts in this area. Geoffrey Bilder of Crossref was on the panel, standing in for Ginny Hendricks. Bilder was, in many ways, speaking from the perspective of the open science community. He helpfully made the point that much of the work done to create the infrastructure that enables scholarly collaboration has been undertaken and funded by publishers. Bilder helped set a positive, collaborative tone for the session and the rest of the conference.
Notably, the wake-up session on the second day was “Can the ‘Academic Sharing Economy’ add Value to the Scholarly Ecosystem?” The panel built on many of the themes of the previous day, and there emerged a general sense of agreement that publishers should look for ways to support scholarly collaboration as much as possible. The panel was asked if publishers had ‘missed a trick’ in concentrating on publishing rather than ‘scholarly communication’. The answer was a resounding ‘perhaps’. Grace Baynes of Springer Nature thought that they might have, given that there is demand for these services. On the other hand, Kent Anderson said it was telling that scholarly sharing had come from outside the industry, from ‘insurgent’ players. He went on to warn that for publishers and librarians to continue to be able to demonstrate the value they bring, scholarly sharing has to be done in a trackable way.
The highlight of the conference for me came near the end: the discussion on research ethics. Although the session was quite long, it really explored many facets of this emerging area of concern. There are two reasons why a researcher might publish work that is in error. The one we always think about is fraud, which Chris Graf of COPE discussed, but there is also a more insidious effect that is less well understood. Bernd Pulverer, Chief Editor of The EMBO Journal, talked about the sort of sub-fraud corner-cutting that seems to be on the rise. Issues like selective data reporting, a focus on ‘storytelling’ in articles, image manipulation, and poor use of statistics all blur the lines between fraud, mistakes, and ‘beautification’. As Pulverer said, ‘We can’t solve pathological fraud, but we can help reduce the slippery slope’ through carefully monitoring and raising editorial standards.
Finally, Alice Meadows’ thoughts:
Like Charlie, I rely heavily on live tweeting at conferences in lieu of note-taking, so the Twitter outage on day 1 left me more reliant on my memory than I’d like. But, in addition to what Charlie and Phill have already mentioned, some other highlights from the meeting that have stayed with me even when Twitter-less include:
- The changing face – and increasing importance – of social networks, including scholarly social networks (SSNs), in building scholarly reputations. David Nicholas of CIBER Research gave a fascinating presentation based on the results of their recent survey, starting with a quote from Becher: “The main currency for the scholar is not power as for a politician, nor wealth as for a businessman, but reputation.” At present, for scholars, that reputation is built overwhelmingly on just one activity — research — and, in fact, on just one output of that activity (peer-reviewed papers in high impact factor journals) and just one measurement of that output (citations). It’s a system that, as Nicholas noted, benefits publishers, but is also the basis for most (or all?) of the current SSNs (aka reputation platforms). You can see his slides here – lots of food for thought…
- Repeated acknowledgement of the importance of a sound research infrastructure and the need for all players to feel ownership of it. For example, Chris Graf, speaking in the ethics session, highlighted the fact that COPE (of which he is Co-Vice Chair) is currently missing representation from research institutions — needed in order to make real progress on reducing/eliminating fraud. Similarly, Geoffrey Bilder flagged the need for funders and research institutions to invest more in the research infrastructure, noting that publishers, in particular, have invested heavily in it.
- Following on from this, the importance of all of us taking more collective ownership of scholarly communications more generally. One of my favorite quotes from the meeting was from Ginny Barbour, Chair of COPE (courtesy of Chris Graf again): “We need a culture of responsibility for the integrity of the literature…It’s not just the job of editors.” And I would personally stress that a culture of responsibility is not the same as a culture of blame. While there are clearly individuals and organizations who are out to game the system, there is also still a lack of training and education for early career researchers, as well as cultural differences at play here. Like Phill, I was particularly struck by Bernd Pulverer’s comments about our collective responsibility to tackle the slippery slope.
Overall I was very impressed by the quality of speakers and sessions; this was my third APE and by far the best yet. However, I do have to note that there is still a dearth of women speakers at this meeting (more so than at other industry conferences). In particular, there were just three women speakers on the whole of the first day — one keynote, and one speaker plus me on my own panel (unfortunately two other women dropped out). There were no fewer than five keynotes on day one — all men — and the final session that day was an all-male panel. The gender balance on the second day was better (eight women, 13 men by my count), but it was still far from representative of the female/male ratio in scholarly publishing. Still, it’s certainly an improvement on the first APE meeting I attended several years ago, which had just one woman speaker, so here’s hoping that next year the organizers will do more to ensure parity.
Discussion
15 Thoughts on "'Research Mechanics', OA, Ethics, and More: Three Chefs' Musings on APE 2016"
Regarding the slippery ethical slope, I recently published a taxonomy of 15 types of what I call “funding-induced biases” in science. It includes a snapshot of the present research on each type. The types range from funding to publication and publicity. This may be helpful in distinguishing different types of bias as there is a fair amount of confusion in the present discussions.
See http://f1000research.com/articles/4-886/v1
David, as an aside, maybe you could discuss your experience with this F1000 Research article. You submitted your manuscript and lined up reviewers, who returned unfavorable reviews. You then wrote a rebuttal, concluding “We therefore find no reason to revise our article.” And there it remains forevermore, published as is, but with red blemishes? The reviews and approvals (or not) appear to have been handled directly between the authors and the reviewers? The journal instructions describe the role of the “editorial office”, but that’s not the same as an editor who referees reviews and makes publication decisions.
In your publication, you apparently decided that working through tedious revisions in response to the reviewers’ criticisms wasn’t a good use of time? Published, albeit with two red “x” boxes indicating lack of approval by the reviewers, is still published? Presumably you made a call that the uncertain time needed to mollify the reviewers wasn’t justified to remove the red “x” blemishes? (I initially thought the red “x” boxes just indicated graphics that didn’t load in my browser.) Authors submitting manuscripts to more conventional reviews face the same decisions following unfavorable reviews, but if they choose not to revise, that version never sees the light of day. But at F1000 Research, submitted = published, along with positive, indifferent, or negative reviews?
There’s room for improvement in conventional scholarly publishing, and the F1000 approach certainly dispenses with many conventions. What do you think of the F1000 Research approach?
Actually, I am working on a revision in response to the second review. The first review I judged to be simply an attack by two journalists, which did not address our findings, so there was nothing to revise. This is explained in my response. I like the concept of publish then publicly review. It is very Web 2.0 in structure; that is, blog-like.
I can’t resist pointing out to Phill an example of a publishing model that puts data first and interpretive writing later in the order and structure of publishing. It is the model pioneered by historian Robert Darnton, whose whole career as a historian of the book has been built on the rich archive of a Swiss printer in Neuchâtel. His lifelong project has been to digitize and make the archive available, and then build upon it layers of interpretive writing, from very specialized articles to methodological essays to monographs and even to popular trade books. This was the vision of a new kind of multilayered “book” that he elaborated in his classic “The New Age of the Book” (New York Review of Books, March 1999), which gave rise to both the Gutenberg-e project at Columbia University Press and the multi-press ACLS Humanities E-Book project in the early 2000s.
I absolutely agree that we are moving to a landscape where academics will produce a very broad range of digital objects that will be linked to some kind of narrative. That narrative may be something that’s written up front and then decorated with data and other objects, or something written later. For reasons of history and inertia, the second approach may take a little longer to catch on.
My concern, which I admit I didn’t write out at length due to space constraints, is that if we focus too heavily on structuring data, and on insisting that data must follow certain formats, we run the risk of marginalising those who do truly novel work.
Is no one concerned about the drain on researchers’ time that all this multi-mode publishing takes?
In Darnton’s case, it is all part of his larger vision and lifelong strategy. But I agree that this may not be suitable for most scholars in the system we have now. Darnton has reaped huge rewards from the strategy, but it may work for only a handful of individuals at present.
Quite the opposite, David.
There are two answers to that question. The first is that the traditional system of scholarly publishing is itself very time-consuming for researchers. Pre-formatting a submission to a journal, lengthy and overly onerous submission systems built on outdated technology, and multiple rounds of rejection and resubmission are a massive drain on researchers’ time. Much of the innovation in the space is intended to make communication of ideas faster and more streamlined.
The other aspect is the extra time it takes to format and prepare data for sharing and reinterpretation. Again, it’s a problem that people are aware of, and if you happen to subscribe to Against the Grain, you can read Mark Hahnel’s and my thoughts on that question in depth in the next issue, but the basic sense of it is that these systems must be designed to be rapid and intuitive. In my experience, some solutions are better than others at that.
I am not encouraged by your response, Phill. Your first step seems to be doing away with the present system of scholarly publishing. That is utopian at best, more likely impossible. As for data, the effort required to make it usable by others is not a technology issue; it is human labor.
But I was really referring to your idea of researchers producing a “very broad range of digital objects” which presumably go well beyond simple journal articles and data. Going from producing two things to producing a very broad range of things sounds like a lot of new work. You have not addressed this.
I would never say that we should do away with the present system of scholarly publishing. I’m sorry if I gave you that impression, but that really isn’t my opinion at all.
I’m sorry if I wasn’t clear. Researchers do a lot more than write articles and take data. They write reports, produce presentations, serve governments and companies as advisors, write reviews, write computer code, design experimental protocols, teach students, make videos, write blogs, give interviews to the media, design schematics for 3D printable dinosaurs, the list goes on. The question is, which of the many things that a researcher does with their time should be counted as an output and therefore rewarded. It’s not about creating more work, it’s about rewarding people for their contributions.
Okay, Phill, so now you are talking about changing the reward structure of the academic community, which is several orders of magnitude larger than the scholarly publishing community, regarding peripheral publications. This seems to be a fairly common theme here at TSK: if only they would value this, that and the other, which they do not. In my view publishers are in no position to create such changes. The system is solid as is.
There are several canards and non-sequiturs in your comments, but overall I think researchers will come to regret opening the door to granular, continuous tracking of every step of their activities by funders. The multi-mode and continuous “brave new world” will increase the burden on researchers and will allow much more accountability. That may be a good thing, it may be a bad thing, but if you’re a researcher it will certainly be more onerous and invasive than the current regime of periodic publication however imperfect.
I really don’t know what you mean by canards and non-sequiturs.
I certainly understand the concern about imposing overly onerous burdens on researchers. That’s why we have to make sure we design things to be easy and efficient. As for the concern about being invasive, I have to disagree. I’d say that the current questions over reproducibility are evidence that academia might benefit from more accountability. There were a number of talks at APE about research ethics, and there is a growing feeling that an erosion of research and reporting standards is contributing to research being less reproducible than it could be.
Your earlier post was focused on “convenience for researchers”. Now you’re (appropriately) using “quality of research” as the benchmark. In a magical world both would be possible at the same time, but in reality these objectives are frequently in irreconcilable conflict.
I find it interesting that publishers are often blamed for running burdensome processes and, in almost the same breath, excoriated for failing to have sufficient process to catch ethical and other problems.
Of course the plumbing should be as efficient as possible, but no magical modern technology or open source infrastructure is going to change the dynamic that what is good for research publishing is sometimes burdensome for individual researchers, and that they will gripe about it.