Project COUNTER’s new Code of Practice was published last month. Publishers who wish to remain compliant will need to adopt the revised standards by the end of August 2009 or lose their compliant status.
Like most standards, the 3rd release is longer and more detailed than its predecessor. It is a response to the changes in networked resources since the 2nd release and to the nuanced differences between online products. It is COUNTER’s third attempt to catch up with the status quo.
Changing standards is not an easy feat. As a former board member of COUNTER, I know that getting things done in a group representing diverse (and sometimes entrenched) interests requires concerted effort and consensus building. That these groups of publishers, vendors, and librarians don’t throw up their hands in exasperation and go off in their own directions is a real testament to how the group is run.
The 3rd release attempts to address ongoing issues related to how one measures something as simple as an article download. Federated searches — that is, a single user-generated search blasted out to several related databases simultaneously — will need to be identified separately in the reports. If your library runs a LOCKSS machine that routinely downloads and archives every single published article, those downloads will need to be omitted. Automated robots and crawlers need to be ignored as well. What we are seeing is the development of an accounting system that addresses the intentions and values we put on each act of downloading. This puts the onus on COUNTER to keep its list of known federated search engines and Internet robots current. Given how quickly (or slowly) COUNTER can react, it may also create a false sense of trust on the part of the customer.
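The bookkeeping this implies can be sketched in a few lines. The sketch below is illustrative only — the agent names and list contents are hypothetical stand-ins for the lists COUNTER would have to maintain, not the actual registry:

```python
# Hypothetical stand-ins for COUNTER-maintained lists of known
# robots/crawlers and known federated search engines.
KNOWN_ROBOTS = {"Googlebot", "Slurp", "LOCKSS-harvester"}
KNOWN_FEDERATED_AGENTS = {"MetaLib", "360 Search"}

def classify_download(user_agent):
    """Decide how a single download event should be counted."""
    if user_agent in KNOWN_ROBOTS:
        return "exclude"    # robots, crawlers, LOCKSS archiving: omitted
    if user_agent in KNOWN_FEDERATED_AGENTS:
        return "federated"  # counted, but identified separately
    return "regular"        # a genuine user-driven download

def tally(user_agents):
    """Tally a stream of download events into report categories."""
    counts = {"regular": 0, "federated": 0, "exclude": 0}
    for agent in user_agents:
        counts[classify_download(agent)] += 1
    return counts

events = ["Mozilla/5.0", "Googlebot", "MetaLib", "Mozilla/5.0"]
print(tally(events))  # {'regular': 2, 'federated': 1, 'exclude': 1}
```

The fragility is visible even in the toy version: the numbers are only as trustworthy as the lists, and any agent missing from them is silently counted as a regular download.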
Usage reports will also need to be produced in tagged XML to support another standard (SUSHI), which allows reports to be harvested by library management systems and the like. Usage data could then be more easily incorporated into the kinds of systems librarians use to manage their journals and databases, and would permit calculations (like cost per article download) to be generated on the fly. Archival products (like a journal backfile) will need to be separated from current subscription products, permitting a more transparent view of which files are generating use.
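The “on the fly” calculation is the easy part once the data are machine-readable. A minimal sketch, with the caveat that the XML fragment below is a simplified stand-in and not the actual COUNTER/SUSHI schema — the element and attribute names are illustrative only:

```python
import xml.etree.ElementTree as ET

# Illustrative report fragment; real SUSHI payloads are far more detailed.
report = """
<Report>
  <Journal title="Journal of Examples">
    <Month date="2008-01" downloads="120"/>
    <Month date="2008-02" downloads="95"/>
    <Month date="2008-03" downloads="110"/>
  </Journal>
</Report>
"""

def cost_per_download(xml_text, subscription_price):
    """Divide the subscription price by total reported downloads."""
    root = ET.fromstring(xml_text)
    total = sum(int(m.get("downloads")) for m in root.iter("Month"))
    return subscription_price / total

print(round(cost_per_download(report, 1300.0), 2))  # 4.0
```

A library system that harvests such reports automatically could recompute this figure every month, which is exactly the kind of routine analysis that hand-keyed spreadsheets make impractical.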
COUNTER provides a set of international, extendible Codes of Practice that allow the usage of online information products and services to be measured in a credible, consistent and compatible way using vendor-generated data.
The development of a set of standards “that allow the usage of online information products and services to be measured in a credible, consistent and compatible way” would be much easier if all online products were identical. Creating a set of standards that deals with a diversity of products, running on a panoply of systems, and fulfilling different user needs sets up an impossible challenge for this group if customers believe that their COUNTER usage data are indeed “credible, consistent and compatible.” Unless publisher mergers leave us with a single database product (imagine an über ScienceDirect), this ideal simply cannot be, although I do not want to dismiss the work of this group out of hand. Developing usage products that are pretty credible, mostly consistent, and somewhat but not always compatible should be good enough. There is no need for a non-profit organization to promote an image of absolutes.
Statistics always carry the allure of precision, even when the meaning of the underlying data is somewhat messy.