#PeerRevWk16

Almost exactly three years ago, The Scholarly Kitchen posted a podcast with Peter Brantley about the then relatively new start-up, Hypothes.is. Find out what the organization is up to now, and why they believe in the power of annotation as a form of peer review, in this Peer Review Week interview with Maryann Martone, Director of Biosciences and Scholarly Communications.

Please tell us about Hypothes.is — what do you do and why do you do it?

Hypothes.is is a nonprofit technology company that is bringing an open, interoperable annotation layer to the web. We believe this new layer should be able to be brought to any document, in any format, within any environment, and that anyone should be able to be a provider or consumer of annotations.

Publishers tend to associate peer review with journal articles and books, but of course annotation is also an important form of review. When and why do you think it’s most valuable?

Annotation is a means of providing feedback at all stages of the publication process. We are already familiar with annotations and commenting during the writing process, and we do something similar during peer review. Many of us are familiar with writing and reading long-form reviews for journal articles, but digital annotations have characteristics that make them superior for peer review, both during and after publication.

  • Annotations are precise. Because annotations sit at the location they describe, it’s always clear what an annotation is talking about. Of course, overall long-form responses that refer to a whole article can also be made with annotations, so annotations represent a superset of the two paradigms.
  • It’s quicker to annotate. Because you don’t have to cut and paste the context of the article into your long-form review, or cite page numbers and paragraphs, annotations have an easy immediacy to them.
  • They lend themselves to a wider range of observations. Reviewers may be more likely to remark on smaller things, like copyedits and other suggestions that can benefit the paper.
  • The interoperable nature of annotations, and their coming adoption in a wide range of workflows, means that an annotated review is much more likely to be associated with an account where you can continue to access and refer back to notes that you’ve made, use the same tags that you do elsewhere, and search through them as part of a complete corpus of your scholarly notes.

Annotations can easily facilitate a variety of review paradigms, like open reviews, and can enable new kinds of workflows where reviewers, authors and editors can participate in conversations with each other right on the document itself – potentially even subsequently exposing those conversations publicly as a permanent record post-publication.

How and why did Hypothes.is get involved in the community working group led by CASRAI, F1000 Research, and ORCID to create a standard set of terms to enable peer review citation?

Scholarly work takes many forms and scholars themselves perform many roles. Currently, we give credit to only one form: papers, and only one role: author. The other activities are lumped into service: we have to do it, but we get no formal recognition for it. But those of us active in advancing scholarly communications all believe that the full range of scholarly activity, e.g., peer review, editing, curation, commenting, and annotating should be credited. The community effort to define and expand the roles that are credited is an essential piece of the puzzle.

The group was established in part to enable better recognition for all forms of peer review – do you think it will help annotation to be more widely recognized?

Yes. Annotation is a form of peer review. But annotation is also a means to add value to an existing work, e.g., by adding additional knowledge. Annotation is also an expression of interest in a work, so we think that annotations should be counted as an altmetric.

What other barriers are there to ensuring that researchers get recognized for annotation and how can they be overcome?

There are a few things that can help annotations become more recognized.

  • More annotations. The more annotations are used for the full range of scholarly activities, the more people will become accustomed to them and recognize the importance of their role.
  • Being able to get DOIs for annotations. If annotations can be cited as scholarly objects, then people will be more comfortable creating them, because they’ll know they can cite them.
  • And, of course, making sure that annotations are tied to ORCIDs and other identifiers for researchers, so that they are discoverable and relevant to scholars as a permanent part of their record.
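As a concrete sketch of that last point: because annotations live behind an open API, a researcher’s annotation record can be queried programmatically by document or by account. The snippet below targets the public Hypothes.is search endpoint; the endpoint URL is real, but the response shape and the sample record are simplified assumptions for illustration, not the full API schema.

```python
import json
from urllib.parse import urlencode

# Public Hypothes.is search endpoint (real URL; the response fields
# handled below are a simplified assumption for illustration).
API_BASE = "https://api.hypothes.is/api/search"

def build_search_url(document_uri, user=None, limit=20):
    """Build a search query for annotations anchored to one document."""
    params = {"uri": document_uri, "limit": limit}
    if user is not None:
        # Hypothes.is user identifiers take the form acct:name@hypothes.is
        params["user"] = user
    return API_BASE + "?" + urlencode(params)

def summarize(annotation):
    """Pull out the fields a scholarly record would care about:
    who annotated, what text they anchored to, and their tags."""
    exact = ""
    for target in annotation.get("target", []):
        for selector in target.get("selector", []):
            if selector.get("type") == "TextQuoteSelector":
                exact = selector.get("exact", "")
    return {
        "user": annotation.get("user", ""),
        "anchor_text": exact,
        "tags": annotation.get("tags", []),
    }

# A minimal mock of one annotation record, shaped like the API's JSON output.
sample = {
    "user": "acct:researcher@hypothes.is",
    "tags": ["peer-review", "methods"],
    "target": [{"selector": [{"type": "TextQuoteSelector",
                              "exact": "n = 12 mice per group"}]}],
}

url = build_search_url("https://example.org/article/10.1234/demo")
print(url)
print(json.dumps(summarize(sample), indent=2))
```

Tying each record to an ORCID would simply mean carrying the researcher identifier alongside the `user` field, so the annotation becomes a citable, discoverable part of the scholarly profile.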

What’s on the horizon for Hypothes.is – where do you want to be five years from now?

Our vision is to jump-start conversations across many sectors, from scholars and researchers to educators, civil servants, journalists, and others. We think this flexible, open, standard layer can serve a wide spectrum of use cases. In the next five years, we hope we can make good progress in these different domains. Within the scholarly community, a goal is that all scholarly works will be “shipped” with an open, interoperable, web-based annotation capability. This is the objective of the Annotating All Knowledge Coalition. But it’s the transformational potential of this capability in facilitating new kinds of review, collaborative research and note-taking, journal clubs, personal notes, as well as machine-generated annotations, that is the higher prize. Ultimately, we believe this can help accelerate the scientific process, address issues such as reproducibility, and create a durable, permanent record of insights, critique, review, citation, and more.

Alice Meadows

Alice is Director of Community Engagement & Support for ORCID, responsible for communicating the why, what, and how of ORCID for researchers and their organizations. Alice is on the Board of Directors for the Society for Scholarly Publishing and received the 2016 ALPSP Award for Contribution to Scholarly Publishing.

Discussion

13 Thoughts on "Annotations as Peer Review: An Interview with Maryann Martone of Hypothes.is"

I’ll ask the obvious questions (as per Angela’s post on Monday):
What sort of credit should one receive for leaving a note on someone else’s paper? Who would grant that credit, and what sort of reward would that credit lead to? For whom is this sort of behavior enough of a priority that they would grant research funding or career advancement for doing it?

I distinguish between credit and reward. If I annotate a paper with my ORCID, it would become part of my scholarly profile. Who rewards or cares about that? Who rewards peer review? Who rewards conversations I have with my colleagues that help them work through a problem? It’s part of the service I do as a researcher. The number of times a simple comment by a colleague helped me in a particular moment to move my research forward is significant; it’s why we talk with people. Having these activities credited and part of a profile means if that activity is important in a particular context, there is a record of it.

What is the value though, of that record? Who will look at it, and what will they do with it?

The obvious analogy is Amazon reviews, no? You don’t get any credit for writing any individual review, but particularly helpful reviewers (Top 1000, Top 100 on Amazon) get both community recognition and, in many cases, material rewards (free products to review). The analogy isn’t perfect, of course (e.g., the Amazon system is obviously tightly and centrally controlled), but I don’t think there’s any reason a large history of “micro” helpful services can’t be captured or recognized.
E.g., I find it entirely plausible for a committee to look at a scholar’s ORCID page and recognize their engagement with peers and contributions to the field via review-type comments without reading all of them. Of course it’s impossible to predict how this will play out in detail: will 10 high-quality comments be at about the level of a publication? 100? To what degree will annotations be cited (see Crotty 2016, but note the rebuttal in Martone 2016)?
But we don’t really know this for data or software either, do we? Nor is it clear say, what the “value” of a monograph or chapter is versus a journal article.

In the recent Vox survey results (270 scientists), Peter Gower argues that researchers should get more credit for informal idea sharing as a way to fix some of our problems with peer review. An annotation layer is one way to do that effectively through the web. You can even anchor directly to the evidence (Hypothes.is allows direct linking: https://hyp.is/xjKz8oDtEea9potjUSxEZw/www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process).

It’s a reasonable argument — being a contributing member of the community is a positive thing and requires work. But I’ve yet to hear details on what exactly that “credit” is supposed to be, who is supposed to grant it, and what rewards are supposed to be offered for collecting that credit.

Let’s say I leave 100 really incisive comments on papers published by other researchers. Who is going to reward me for that, and what reward do I receive?

My question then, is who is the Amazon in this scenario? Amazon has a clear interest in having those reviews written (selling more products and making more money). Who has that same interest in having researchers make comments on the work of another researcher and what sort of rewards are they likely to offer?

If we’re talking about research funding or tenure, then high-quality comments will never amount to the level of an original piece of research. The American Diabetes Association is unlikely to give a research grant to someone for critiquing the original research of others; they want to fund new, original research. No one is going to be granted tenure solely because they’re a really good peer reviewer.

How granular is the value of engagement to a committee? If I’m a funder or on a tenure/hiring committee, yes, I do want to see evidence that the candidate is a participating member of the community. I do want them to be performing peer review and contributing to the overall conversation. But that’s probably about as deep as I want to go. It’s hard enough to get people to read the actual papers. Asking them to read a candidate’s peer reviews and annotations is a non-starter. Do I need that level of detail, or is a checkbox (this person did some peer reviews, did some annotations) enough?

And realistically, if you want to know if a candidate made significant contributions like this, wouldn’t letters of recommendation be more meaningful than trying to puzzle it out from comments left on someone else’s paper?

The interest seems not just in having researchers comment on others’ work, but in organizing how that happens. This commenting already happens, but it happens unsystematically: in emails, in hallway conversations, and so on. A lot of great ideas are shared and, unless someone decides they’re important enough to put into a letter to the editor, they probably don’t go anywhere else.

Who has an interest in organizing these comments? People who want to make the generation and dissemination of knowledge more efficient. People who want those hallway ideas recorded and more widely shared instead of lost. People who know that at some point in the future, someone is going to want to know how to cite a comment or annotation, and have decided we might as well figure that out now. The same type of people who put together PubPeer, the CRediT taxonomy, etc. And in this case, specifically, the funders listed on the Hypothes.is “About” page.

My sense is that the record of an individual’s compiled annotations is just a side benefit of the main goal, which is the systemization and organization of these annotations. First, note that Hypothes.is seems to aim to have pseudonymous commenters: they’re not identifiable, but their contributions are linked together. So, in theory, someone who consistently posts things the community votes as good might have their comments displayed first, for example, or it might be possible for you to read the rest of their comments on other works if you like what they said about this one. Trolls, or people who consistently post rudely or poorly, would be downvoted or banned (at least I assume so).

Yes, tenure will likely not be granted based on annotations (especially since, at least in the case of Hypothes.is, people won’t be able to associate your annotation record with your identity). It may make people feel good to look back on the annotations they’ve shared and the feedback they’ve gotten, and that in itself is good, but that just seems like a side effect of the larger movement to organize and optimize something that’s already happening unorganized.

I agree completely–there would be great value in improved annotation tools, no doubt. And that’s enough reason to continue to pursue them.

I’m just less confident that a lot of the demands out there for “credit” for various parts of academic life are going to amount to much.

I’m unsure about what is new here. Some reviewers of journal articles and book manuscripts have long provided annotations on the works they are reviewing, earlier in the form of written comments on printed articles and book manuscripts, and later using the annotation tools provided in Microsoft Word and other editing systems. This is just another way of carrying out a peer review. Like David C., I have doubts about the value of giving any kind of formal credit for annotations that are independent of normal peer review. I comment all the time on articles published in The Chronicle of Higher Education and InsideHigherEd. Why should I expect some kind of reward or recognition for such comments? Offering such comments/annotations is all part of what it means to be a member of a community. Do we need to be rewarded/recognized for everything we contribute to a community?
