Readers and researchers were annotating texts long before the invention of the printing press. While annotating paper texts has been easy for centuries thanks to their margins, annotating digital items remains difficult. This is an odd quirk of digital content distribution, since the potential for capturing and sharing annotations in a digital environment makes them potentially far more valuable.

Thinking back to the foundation of the World Wide Web, annotation was actually a critical component of what Sir Tim Berners-Lee conceived of as an interconnected store of research documents for CERN. In fact, one of the examples in Berners-Lee’s 1989 paper describing the World Wide Web — “Mesh” as he termed it at the time; the term “Web” wouldn’t come until a few years later — was an annotation about a comment related to a paper by Doug Thompson. In his chart of how documents relate, Berners-Lee described objects that refer to other objects, some of which are comments about other documents. Later in the paper, one of the “clear practical requirements” of the new system would be that, “[o]ne must also be able to annotate links, as well as nodes, privately.”

Annotation nearly became a core feature of the first widely distributed web browser, Mosaic, developed by Marc Andreessen and Eric Bina, who at the time worked at the University of Illinois’ National Center for Supercomputing Applications (NCSA). Andreessen notes this in a blog post about his investment in Rap Genius, an annotation service for rap lyrics (which has much larger ambitions). In a 1993 email to the www-talk list, Andreessen asked if anyone was willing to alpha test the annotation features of Mosaic v1.1; the Web-browsing client had support for “very simple” annotation. In one of the annotations on the Rap Genius post, Andreessen noted that the challenge at the time was scaling the server support to handle the annotations. Andreessen and Bina approached the National Science Foundation (NSF) to seek further development of Mosaic and its annotation features, but, according to Andreessen, NSF “decided the project had no justifiable technical merit.” When Andreessen and Jim Clark took Mosaic and commercialized it as Netscape, annotation fell off the development roadmap and was relegated to the back-burner of nice-to-have Web services.

A few start-ups tried to push annotation services forward, notably Third Voice, which failed in 2001 after criticism that it was “defacing websites.” Another notable service for posting and sharing notes on websites was Fleck.com, which, despite substantial angel funding, a patent, and some positive publicity, closed up shop in 2008, with its domain sold in 2010. But work and tools continued to be developed. Right now, some 17 web annotation services are noted on Wikipedia, but several are missing, and the list includes a few services that, strictly speaking, aren’t online annotation systems but rather bookmarking services.

A few weeks ago, more than 100 technologists interested in digital annotation gathered in San Francisco for the iAnnotate meeting, organized by Dan Whaley at hypothes.is with support from the Andrew W. Mellon Foundation. The meeting provided an opportunity for those interested in digital annotation to discuss technical interoperability and how various services are working, see demos of new services, and explore how annotation can be used in different contexts. One topic of particular interest was the W3C Open Annotation Collaboration work. That group has produced a variety of interesting pilots, an Open Annotation Data Model, many publications, and several other advances in annotation systems. Many of the new service providers talked about their services, including Domeo, Maphub, Pelagios, Authorea, dotdotdot, and the iAnnotate host hypothes.is, which recently launched an alpha version of its software. Several other speakers touched on annotation of data sets and annotations in scholarly papers. The meeting also included break-out sessions and discussion groups on a range of topics. Much of this was captured in notes and videos, which are (or will be) posted to the iAnnotate agenda page.
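
The Open Annotation Data Model mentioned above separates an annotation into a body (the comment itself) and a target (the resource being commented on), with selectors that pin the target down to a specific passage. Below is a rough sketch in Python of what such an annotation might look like when serialized; the URL and comment text are invented for illustration, and the property names follow my reading of the community draft, so treat this as an approximation rather than a normative example.

```python
import json

# A minimal annotation following the Open Annotation body/target pattern:
# the body carries the comment, the target names the annotated resource,
# and a selector pins down the exact passage within it.
annotation = {
    "@type": "oa:Annotation",
    "hasBody": {
        "@type": "cnt:ContentAsText",
        "chars": "This claim needs a citation.",
    },
    "hasTarget": {
        "@type": "oa:SpecificResource",
        "hasSource": "http://example.org/article/123",  # hypothetical article URL
        "hasSelector": {
            "@type": "oa:TextQuoteSelector",
            "exact": "the quoted passage",
            "prefix": "text just before ",
            "suffix": " text just after",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because body and target are separate resources, the same model can express an annotation whose target lives on one service and whose body lives on another, which is what makes cross-service interoperability plausible.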

There are significant challenges to digital annotation systems, particularly shared annotation systems. While server space was an issue with Mosaic in the 1990s, this is no longer the critical problem, although at scale an annotation system does require significant hardware support. The real problem with public sharing of annotations is getting the model to work across different devices and systems. Locating a reference point is a particular challenge when working with reflowable text. In such a context, referring to page 164 doesn’t mean anything, because page numbering often isn’t used and, even if it were, one could size up the font to such a large scale that only three words might appear on a “page”. Similarly, one can’t rely on a specific character count within the file, since that could change with minor editorial corrections. Creating a hash string of the text characters before and after a reference point also has problems, since text can repeat (as in song lyrics, or a recurring dream sequence). Matching a point in a text between, say, the first edition of a book and its annotated fourth edition creates tremendous problems, since there isn’t a widely adopted work identifier to tie together the various manifestations of a work. The problem expands as one moves from annotating text to comparing different media expressions. Those who remember showing up at a literature class with a different edition of a book than the teacher was using can relate to these challenges. As a jumping-off point for work on this topic, NISO has a working group exploring standards to address the location problem, which we hope will feed into the IDPF EPUB3 specification.

Other non-technical problems also need to be addressed, such as copyright concerns and privacy. If an annotation system allows the capture of a selection of the referenced text, one might be able to collect all of those disparate snippets and recreate the work in its entirety. Realistically, this is among the least likely piracy scenarios, but some publishers who have engaged in these discussions have noted it with concern.

The question of sharing is also fraught with complexity. How can one choose to share annotations selectively — only for some works but not others, only with some people and not others, or only in certain situations? A book club may want to share annotations of the work its members are discussing only with each other. Similarly, users of a system need to trust that their annotations won’t disappear if a service is sold or goes out of business; there currently isn’t a standard format for exporting and importing annotations that could be used in such a scenario. As with all online services, the traditional problems of user identification and ID management are another challenge, but one that, like the rest of the internet, is waiting for a better ID management service.

For all the potential problems, the opportunity that exists within this one aspect of scholarly publishing to advance understanding of science is vast. John Perry Barlow, who introduced the second day of the iAnnotate meeting (he begins speaking about 2:40 into the video), said it well, describing annotations as a critical element of the process by which “we grow, adjust, and expand the paradigm of what is known and which helps propel science forward.”

The scholarly publishing community needs to focus more attention on the new annotation services and models being developed, since it is the scholars who are the most likely users of — and the ones likely to obtain the most value from — digital annotation services. In some ways, this is part of the functionality that Mendeley provides and one element that makes the service valuable for its members — and valuable to Elsevier, which recently acquired Mendeley. Quality services along these lines — which replicate the traditional tools built up around working with print on paper — are something users are longing for in this new digital environment. Such functionality may finally be on the horizon for the growing community of readers of digital text.

Todd A Carpenter


Todd Carpenter is Executive Director of the National Information Standards Organization (NISO). He additionally serves in a number of leadership roles of a variety of organizations, including as Chair of the ISO Technical Subcommittee on Identification & Description (ISO TC46/SC9), founding partner of the Coalition for Seamless Access, Past President of FORCE11, Treasurer of the Book Industry Study Group (BISG), and a Director of the Foundation of the Baltimore County Public Library. He also previously served as Treasurer of SSP.

Discussion

17 Thoughts on "iAnnotate — Whatever Happened to the Web as an Annotation System?"

Blog commenting is a form of annotation that has been very successful. I can quote or otherwise refer to the text or feature I am commenting on. People commenting on my comment can do likewise and so on. The result is basically a scrollable string of comments, perhaps with a little nesting as here in the Kitchen.

The problem arises when we want an annotation system that puts the comments visually close to the text being commented on. Anchoring is a technical problem but the real problem is the complex structure (or topology) of the reference system itself. This is something I have studied for many years. At its simplest it is a tree structure which I call the issue tree. Sentences referring to sentences referring to sentences in a branching array that grows exponentially.

But it gets much more complicated because often we are not commenting on a specific sentence but rather on some aspect or implication of a body of the text. This body may be small, such as a paragraph, or even the whole text. And this is true not merely of the original text but of the comments on the comments.

It is worth looking at blog comments just to see their referential structure. I sometimes think the only way to properly visualize this structure is with 3D fly-through navigation. It is after all a three dimensional structure so any 2D projection will be very messy and confusing. (A structure is 3D if you cannot draw a 2D version without a lot of crossing lines.) Is anyone working on 3D annotation?

In any case the basic challenge is the complexity of human reasoning. Lines of thought are not simple. That complex structure is what annotation is trying to capture.

While yes, blog commenting is a form of annotation, it misses a few elements. As you note, the specificity of the thing one is referring to is one aspect of this. For example, this comment is a response to yours, but I might also want to refer to something that Rob Virkar-Yates says below; because of the nature of the comment/response system, my response is strictly tied to your comment, not both. A general comment might appear to refer back only to the entire post.

A bigger problem is that these annotations are only connecting those annotations made here on this site, not things that people might say using Mendeley, hypothes.is, or other services. There’s no way, at the moment, to aggregate them. Interestingly, several of the services are building in APIs to make possible annotation sharing across open systems. These only work, of course, where there is an open exchange system. Many annotation systems, notably the one built into Amazon Kindle, are not open.

Some of the services I note have good visualization tools, which avoid the “messy, page defacing” appearance of some systems. I encourage you to take a look at the hypothes.is alpha as one example of something that might fit what you’re describing.

The point is that integrating comments or annotations involves a complex, albeit specific, 3D network. Any approach that fails to recognize this topology is unlikely to be useful, including standards. I do not know what your standards are looking for but they need to be based on the underlying logic.

The same is true of citation networks by the way, because citations are annotations. If my article cites yours then I am saying there is a connection which is a social annotation to your article. This is also a 3D network.

If any of these annotation systems includes 3D visualization perhaps you can point to a URL. Simple visualization of links is not sufficient.

I went to the hypothes.is website and found nothing on an alpha version nor on visualization. I did find this intro which seems hyperbolic, to say the least:

“If wherever we encountered new information, sentence by sentence, frame by frame, we could easily know the best thinking on it.

If we had confidence that this represented the combined wisdom of the most informed people—not as anointed by editors, but as weighed over time by our peers, objectively, statistically and transparently.

If this created a powerful incentive for people to ensure that their works met a higher standard, and made it perceptibly harder to spread information that didn’t meet that standard.

These goals are possible with today’s technologies.

They are the objectives of Hypothes.is.”

Best thinking (easily)? Combined wisdom? Higher standard? Not likely.

Here is the link to the alpha: http://hypothes.is/alpha — the current version is a Chrome plug-in.

As for visualization, the annotation tool’s UI uses a heat-map scroll bar.

Open Annotation is “3D annotation” in the sense I think you are using, in that it is “Web” annotation (meaning it uses hyperlinks) with annotations having one *or more* resources as targets. Multiplicity of bodies is even supported (an annotation can say that these X things are about these Y other things).

One severe issue with tools is that the most natural ways for humans to interact with a text might not provide enough information for the annotation system to deduce an unambiguous reference. As you point out, one might not exist at all in the case that the comment is about “some aspect or implication of a body of the text”.

At Hypothesis, we take this into account by capturing multiple references at different granularities whenever comments are made. In our case, these references are currently properly nested (page level, paragraph level, content level), but we still have many open issues around multiple references, such as expanding annotations to target multiple, disparate things.

So, I hope I understood you correctly. It would indeed be tragic for any Web annotation system to ignore the fact that annotation often needs multiple references. But the interface may actually be a harder problem than the model.

If you would like to visualize our annotations using some weighted graph layout in 3D space that would be nifty. Currently, it would only show that replies also reference the page on which the original comment was made.

Very interesting Randall. Comments on comments need to be linked to the comments they refer to, but I am not sure your system does that. Go to Google Images and search on citation network visualization. As I said the mark of 3D is many lines crossing. It is very hard to see the underlying structure in 2D.

It does, but only to the whole comment and only when the interaction started from a “reply” button. My point was to clarify that the model is 3D even if the *current* tool only lets you add 2D structure (a digraph oriented backward in time) to the graph.

Mapping the 3D structure of human reasoning still lies before us. I do not expect to see it in my lifetime but it is nice to think about how much we still do not understand.

The lack of really good annotation tools is likely one of the main reasons for the persistence of the PDF. I know a great number of researchers who still print out the paper, scribble in the margins, and put that annotated printed version into a file cabinet.

Curious to know if MacMillan/NPG’s ReadCube (http://www.readcube.com) was discussed at all at the meeting as it seems to have some interesting annotation features.

If the PDF is not locked, they can easily do this in Acrobat (not the free Acrobat Reader).

Good journals have one big advantage for an annotation system: the copy of record, which never changes. Unlike a newspaper, there are no editions of articles, only errata. Once an article is published, for example, paragraph 7 will be paragraph 7 forever. Devise a proper system for defining anchors, and journals will be set for annotation.

We need to distinguish personal annotation from social annotation. Much of Todd’s article seems to refer to the latter (note that this sentence refers to the entire article). Social annotation requires far more than placing anchors.

The case I worked on was integrating public comments into a proposed regulation. As things stand one has a hundred page proposal and a stack of comments. Placing anchors is no problem. The problem is many comments do not relate to a specific place in the proposal text so there is no place to put the anchor. This is not a computer problem. It has to do with the nature of human reasoning.

Access control is perhaps another issue that comes into play for any system. As journal commenting systems have proven, most researchers are hesitant, if not downright hostile, to the idea of publicly commenting on the works of others. Most researchers do discuss works in smaller, trusted, private groups like within a lab or at a journal club. Useful annotation systems may need to offer the ability to make comments completely personal, privately shared or public.

Your point is exactly the one identified by the two groups that NISO brought together to discuss potential standards for annotation. Those two Mellon Foundation-funded meetings led to the current NISO work project on annotation, which seeks to develop a means to do exactly this. Here is a link to more information about that project: http://www.niso.org/workrooms/annotation

Content platforms that serve up scholarly content can also have annotation functionality built into them. Our content platform, in its iteration for the McGraw-Hill Access Engineering Library (http://accessengineeringlibrary.com/), enables users to highlight portions of the text and make notes related to those portions, annotate graphs using pins with associated notes, and export annotations to a CSV file. We are currently working with other clients on content-specific variants of this functionality.

It’s worth noting that platform-specific annotation tools are, in my experience, popular with publishers and librarians but often record low levels of actual usage.

Comments are closed.