Source: Chucky Says "Cheese!"

In yesterday’s posting I discussed the enduring ambiguity around the concept of Open Access “mandates,” many of which have nothing mandatory about them. One prominent example of how this ambiguity plays out and creates confusion is the ROARMAP registry. ROARMAP stands for Registry of Open Access Repositories Mandatory Archiving Policies — this despite the fact that a very large percentage of the policies it collects are not, in any meaningful sense of the term, mandates. In fact, a considerable number of them are not even policies — some constitute mere guidelines and others are only proposals (some of which are identified as such in the registry and some not).

In this posting I’ll provide examples of the pervasive errors and misrepresentations in ROARMAP, and then briefly discuss why I think they matter.

ROARMAP provides a list of several hundred policies or proposals of various kinds; at the top of the site’s main page these are broken down by rough category, all of which are characterized as “mandates”: “Institutional Mandates,” “Thesis Mandates,” “Proposed Sub-Institutional Mandates,” etc. The list itself is organized alphabetically by country, and for each entry links are provided to the relevant repository and to a page that offers “policy details.”

Click through to the “Policy Details” page for a few of these entries, however, and one of the problems with the registry quickly becomes apparent. Those who maintain it have not made fact-checking easy; in the case of a great many entries there is in fact no link at all to the text of the policy summarized on the page, and in many cases where a link is provided, the link is incorrect or dead. Some of these mistakes are certainly unintentional errors, and are neither surprising nor particularly troubling; the database appears to be a labor of love carried out by busy people with demanding jobs. Inadvertent errors and out-of-date content are both to be expected and can easily be fixed.

But ROARMAP is characterized by another form of pervasive error, and this one offers a bit more cause for concern, since it always has the same effect: to exaggerate the “mandatory” nature of the policy in question. This error consists in the mischaracterization of institutional OA statements by selective quotation.

For example, consider the case of Oregon State University. The ROARMAP entry quotes its OA “mandate” as follows:

An Open Access policy was unanimously approved by the Oregon State University Faculty Senate at the June 2013 meeting. The policy grants Oregon State University a non-exclusive license to exercise any and all rights under copyright relating to its faculty’s scholarly articles, in any medium, and to authorize others to do the same, provided that the articles are not sold for a profit. The policy further directs faculty to submit an electronic copy of the accepted (post-peer review, pre-typeset) manuscript of their articles to OSU Libraries for dissemination via its institutional repository.

That is the end of the extract provided by ROARMAP. But the policy itself goes on to say that “at the request of a Faculty member… (OSU) will waive application of the license for a particular article” — a very important qualifier that is left out of the ROARMAP summary. In practice, this means that an Oregon State author can opt out of the policy’s requirements at his or her discretion. In other words, the “mandate” is effectively optional, though one would not know that from reading the ROARMAP summary. A similar disconnect between the ROARMAP summary and the institution’s actual policy can be seen when comparing summaries with policy texts in the cases of the University of Hong Kong (summary vs. text), Connecticut College (summary vs. text), Trinity University (summary vs. text), Rollins College (summary vs. text), and the California Institute of Technology (summary vs. text). These are only a few of many such examples.

Interestingly, the most egregious disconnects between ROARMAP summaries and policy texts seem to occur when the summary was created by someone from ROARMAP; where the summary language or extract is credited to a representative of the institution in question, the text generally (though not always) includes either the entirety of the policy statement or a more accurate abstract of it.

To be sure, there are some entries in ROARMAP that do accurately reflect the non-mandatory nature of the policy in question. But this brings us back to the nagging question: why is the systematic exaggeration necessary? Where the policies (or guidelines) make participation in the program optional, why the insistence on referring to them as “mandates”?

I posed that question last week, by email, to Tim Brody at the University of Southampton, the contact person given on the ROARMAP “About” page. As of this writing, I’ve received no response — if an answer does eventually come, I’ll update this posting. (And I have no doubt that discussion of this question will come up in the comments.)

It does seem to be in the nature of OA advocacy to emphasize constantly the movement’s “dramatic growth” and its “inevitable” dominance of the scholarly communication sphere. Some go so far as to say that we should no longer discuss OA as something that may eventually “become the norm,” but rather should “begin acting in ways that acknowledge that Open Access is the norm.” ROARMAP is a product of the OA advocacy community, so it’s not especially surprising that its language would be constructed in such a way as to shape the discourse along similar lines: portraying guidelines and optional OA programs as “mandates” helps to build the impression of pervasiveness and inevitability that typifies OA advocacy rhetoric. It is less obvious, however, that this approach contributes to the scholarly community’s ability to discuss OA in a rational way.

Rick Anderson

Rick Anderson is University Librarian at Brigham Young University. He has worked previously as a bibliographer for YBP, Inc., as Head Acquisitions Librarian for the University of North Carolina, Greensboro, as Director of Resource Acquisition at the University of Nevada, Reno, and as Associate Dean for Collections & Scholarly Communication at the University of Utah.


54 Thoughts on "Errors and Misinformation in the ROARMAP Open Access Registry"

This is a shame, because if accurate, something like this would be a tremendously useful resource for journals. We all want our authors to be in compliance with their institutions and funders. Having a comprehensive resource where we could both know exactly what they need, and where we could send them so they could be sure they know what they need would be really helpful.

Perhaps this would be a useful project for a publishing society to take on.

Honestly, I don’t think it’s at all too late for ROARMAP to serve this function; I think its bones are good, as the real-estate people say. But getting it shaped up will cost time and money. The site’s owners will need staff who can go through the entries systematically, fixing the misleadingly-redacted policy summaries, adding new summaries where none currently exist, checking and updating links, and adding links where they don’t currently exist (which, I found, will in some cases mean spending a fair amount of time searching university web spaces for policy documents). Then, of course, they’ll need one or more people to monitor the site on a regular basis to watch for link rot, and to solicit and manage the submission of new policies that will need pages of their own. If the site grows, the cost of maintaining it will grow as well. It’s not clear where this money will come from. Maybe ROARMAP could start charging a small fee to institutions that want to use the site as a compliance tool…

I wonder if it’s something that the new ALPSP/CCC OA Resource Center could include in future?

A fine investigation, Rick. I, like many no doubt, had heard of the so-called hundreds of mandates, many of which, as you point out, are not mandates at all. On the other hand, requiring a specific opt-out, especially at the article level, facilitates political pressure to conform. So these are not exactly voluntary programs. They lie in a murky middle ground.

The interesting fact is that at many institutions with mandates that the faculty senate may have passed, the overriding nature of faculty members is to be independent, and they simply ignore the mandate. On the other hand, even though they have signed the publisher’s copyright agreement, they ignore that document as well and put their articles up anywhere they want. I don’t think anyone has mastered the nature of faculty independence. Mandates have little impact on faculty behavior.

“Mandate” is without question the wrong word to use. That has become the normative term, for what reason I do not know. It is inaccurate: no universities require open access without qualification. Ergo, these policies are not mandates. And many in the OA movement also oppose the use of the term. Examples abound:

> Suber, “Open Access” “Unfortunately, we don’t have a good vocabulary for policies that use mandatory language while deferring to third-person dissents or offering first-person opt-outs. Nor do we have a good vocabulary for policies that use mandatory language and replace enforcement with compliance-building through expectations, education, incentives, and assistance. The word “mandate” is not a very good fit for policies like this, but neither is any other English word.” (sec 4.2)

> Emmett et al., “It Takes a Village” “It was quickly learned that words such as “mandate” and “compliance” are so unpleasant to faculty that faculty working on the policy dropped them from use” (p. 11)

Your point about ROARMAP is accurate: it should include the entirety of the policy. In fact, it is harmful to the OA movement to neglect to point out the opt-out waiver, as the waiver is evidence that these policies are respectful of the right of authors to do with their copyright what they will.

As to your inference that the opt-out clauses are omitted for some rhetorical purpose, I have no comment, avoiding attribution of intent without certain knowledge.

As we learn to navigate these open access waters, editorial choices that seek to summarise – but end up misleading – will be common. I refer to Angela Cochran’s post on this blog from a few days ago:

“Universities are exerting copyright authority over manuscripts that are to be submitted to journals.”

I followed her link to find out which institutions were seeking to control author copyright and found myself reading the UC system policy, which seems very similar to the OSU policy you mention above.

What I read is that the UC system only seeks a non-exclusive license to copyright and not control of copyright. [I imagine that Angela may have used correct legal terminology to express exactly this, but it was not how I interpreted the words as a lay reader]. The key point is that this policy too has an opt-out clause. Should Angela be criticised alongside ROARMAP for editorialising to emphasise the intent of the policy rather than explaining the full detail..?

Should Angela be criticised alongside ROARMAP for editorialising to emphasise the intent of the policy rather than explaining the full detail..?

To the degree that Angela’s summary was incorrect or misleading, then yes, absolutely — the problem should be brought to her attention and she should correct it.

Apart from the principle involved (an author’s responsibility to present information accurately and honestly), however, there is another dimension at play here, and that’s the dimension of impact. One arguably inaccurate sentence in Angela’s posting does not have the same impact as a public registry that presents itself as an authoritative source of information about hundreds of institutional policies but systematically misrepresents them in multiple ways. So I think it’s fair to say that while Angela (or I, or you, or anyone) should be held accountable for whatever we say in a public forum and should be expected to correct our errors or misrepresentations, I think it’s also fair to say that the problems I identified with ROARMAP represent a bigger problem that should be addressed with a somewhat greater degree of urgency.

🙂 I’m not trying to challenge the points you are making, but I wonder what the readership and perceived authority of ROARMAP might be compared to this blog which carries the branding of SSP (as well as a disclaimer). ROARMAP doesn’t appear to me to be any more authoritative than any other wiki and less so than Wikipedia (which has independent editors). It looks to me like a community effort to share information…

There’s a big difference between SK and ROARMAP, though, and that is that ROARMAP presents itself as a registry of facts. SK is an op-ed publication. That said, SK’s writers do have an obligation to be accurate in what we say, and it sounds like we agree on the important point here, which is that all of us should be held accountable for the accuracy of our public representations. That applies regardless of how authoritative those representations are generally expected to be.

What ever happened to my mother’s axiom, “Anything worth doing is worth doing well”? Copying a link accurately to a database is not exactly rocket science, and checking for dead links should be routine maintenance. Missing, incorrect, or dead links are a mark of sloppiness. The examples you’ve shown also are indicative of a superficial reading of institutional policies. If ROARMAP were a literature review submitted to my journal it would be roundly rejected.

An unreliable database can be worse than none at all. As you point out, the incompleteness results in a biased view of OA “mandates.” Where are all the big supporters of OA? Why is nobody stepping forward to support a database that does it right?

I’ve read the piece again, as you asked me to do on Twitter, and I am still unsure what the purpose of your text was. I doubt that you search for mistakes in ROARMAP to improve its quality. I think that you do so to create the impression that OA is not as important a trend as its advocates may imagine. If that is your aim, I am afraid that you are looking in the wrong place, because policies are much less important than the social demand for them. Policies will change fast and will become more and more strict about openness because of the strong OA advocacy movement. And the number of mistakes in ROARMAP will not change anything here.

I’ve read the piece again, as you asked me to do on Twitter, and I am still unsure what the purpose of your text was.

The reason I suggested you re-read my piece was that you tweeted a fundamentally inaccurate depiction of what I said in it.

I doubt that you search for mistakes in ROARMAP to improve its quality. I think that you do so to create the impression that OA is not as important a trend as its advocates may imagine.

No, you’re mistaken again. I didn’t actually go searching for mistakes in ROARMAP. Just the opposite, in fact: I went to ROARMAP (in the course of researching my earlier posting about mandates) expecting to find accurate information. I was surprised and disappointed to find it filled with errors, and disturbed to see what looked like a pattern of deliberate misrepresentation. To me, the issue is not so much whether advocates are right or wrong about the importance of the OA trend — the issue is whether ROARMAP is a reliable source of information, and whether the people behind ROARMAP are using it to create a false impression. It seems to me that anyone who considers himself a friend of OA would care about these issues as well.

ROARMAP is a registry for institutional and funder OA policies:

X-Other (Non-Mandates) (86)
Proposed Institutional Mandates (6)
Proposed Sub-Institutional Mandate (4)
Proposed Multi-Institutional Mandates (5)
Proposed Funder Mandates (12)
Institutional Mandates (202)
Sub-Institutional Mandates (43)
Multi-Institutional Mandates (9)
Funder Mandates (87)
Thesis Mandates (109)

The distinction between a mandate and a non-mandate is fuzzy, because mandates vary in strength.

For a classification of the ROARMAP policies in terms of WHERE and WHEN to deposit, and whether the deposit is REQUIRED or REQUESTED, see MELIBEA, created by a working group from the CSIC, the Universitat de Barcelona, the Universitat de València, and the Universitat Oberta de Catalunya.

For analyses of mandate strength and effectiveness, see:

Gargouri, Y., Lariviere, V., Gingras, Y., Brody, T., Carr, L., & Harnad, S. (2012). Testing the finch hypothesis on green OA mandate ineffectiveness. arXiv preprint arXiv:1210.8174.

Gargouri, Y., Larivière, V., & Harnad, S. (2013). Ten-year analysis of University of Minho green OA self-archiving mandate. In E. Rodrigues, A. Swan & A. A. Baptista (Eds.), Uma Década de Acesso Aberto na UMinho e no Mundo.

Further analyses are underway. For those interested in analyzing the growth of OA mandate types and how much OA they generate, ROARMAP and MELIBEA, which index OA policies, can be used in conjunction with ROAR and BASE, which index repository contents.

PS I think it is not only appropriate but essential that services like ROAR, ROARMAP, MELIBEA and BASE are hosted and provided by scholarly institutions rather than by publishers. I also think the reasons for this are obvious.

If it’s accurate and useful, why should it matter who provides it? Publishers need this information to best meet the needs of authors. It’s in our interest to keep track of such policies.

Further to David’s comment, Stevan, I’m not sure it’s wise to invoke ROARMAP in support of your argument, given its manifest weaknesses as a source of complete and accurate information about institutional OA policies.

The distinction between a mandate and a non-mandate is fuzzy, because mandates vary in strength.

Actually, the distinction between them is pretty clear: it’s not a mandate if compliance isn’t mandatory. Calling a non-mandatory policy a mandate — and quoting it very selectively to make it look like a mandate when it is not — serves no purpose other than a propagandistic one (“Look at all the mandates!”).


If there is any party whose interests it serves to debate the necessary and sufficient conditions for calling an institutional or funder OA policy an OA “mandate,” it’s not institutions, funders or OA advocates, whose only concern is with making sure that their policies (whatever they are called) are successful in that they generate as close to 100% OA as possible, as soon as possible.

The boundary between a mandate and a non-mandate is most definitely fuzzy. A REQUEST is certainly not a mandate, nor is it effective, as the history of the NIH policy has shown. (The 2004 NIH policy was unsuccessful until REQUEST was upgraded to REQUIRE in 2007.)

But (as our analyses show), even requirements come in degrees of strength. There can be a requirement with or without the monitoring of compliance, with or without consequences for non-compliance, and with consequences of varying degrees. Also, all of these can come with or without the possibility of exceptions, waivers or opt-outs, which can be granted under conditions varying in their exactingness and specificity.

All these combinations actually occur, and, as I said, they are being analyzed in relation to their success in generating OA. It is in the interests of institutions, funders and OA itself to ascertain which mandates are optimal for generating as much OA as possible, as soon as possible.

I am not sure whose interests it serves to ponder the semantics of the word “mandate” or to portray as sources of “errors and misinformation” the databases that are indexing in good faith the actual OA policies being adopted by institutions and funders.

(It is charges of “error and misinformation” that sound a bit more like propaganda to me, especially if they come from parties whose interests are decidedly not in generating as much OA as possible, as soon as possible.)

But whatever those other interests may be, I rather doubt that they are the ones to be entrusted with indexing the actual OA policies being adopted by institutions and funders — any more than they are to be entrusted with providing the OA.

This is now the second time you have publicly asked a university librarian to suppress the truth because it is harmful to your cause. It is perhaps laughable for you to be questioning the trustworthiness of others when you show a repeated pattern of such deliberate deception. I’m still not sure what secret anti-OA agenda you think university librarians have, or what is gained through hiding the truth.

Are you that unsure of OA that you feel it can’t stand on its own merits? For OA to thrive in the real world it has to face up to critical analysis, its problems must be exposed and solutions must be sought. It cannot be treated as religious dogma and those who question it treated as heretics. ROARMAP is clearly flawed. Should we work on doing better or just accept it as it is on faith because that’s what you want the world to believe?

Why spend so much effort being concerned with who is doing something rather than concerning yourself with what is being done? The word “open” suggests that all can participate. If OA only allows for your small, private group of friends to be involved, it’s hardly “open”. Such strict limits slow progress by preventing experimentation and halting expansion.

You seem to be trying to fight a battle that’s already been won. OA is no longer about good guys versus bad guys, it’s about practical, real world implementation of a business model. Make it work, prove it’s superior and the world will follow.


“Suppress the truth,” “deliberate deception,” “hiding the truth,” “religious dogma,” “heretics…”

Strong words!

ROARMAP is open to any corrections. Entries can be corrected by the external user who registered the policy or by emailing the user who registered the policy (not by emailing Tim Brody, who is no longer in the department of Electronics and Computer Science at University of Southampton, the host of ROARMAP).

Public postings decrying “errors and misinformation” may be ways of generating attention but not of generating corrections in ROARMAP. (I am not a regular reader of the Scholarly Kitchen.) ROARMAP is crowd-sourced (any user can register, and can register a policy), but not via the Scholarly Kitchen.

As to why it is that university librarians might favor publisher interests over research interests: that question has to be addressed to the librarians in question. (I believe the reverse is true of most university librarians.)

One error that I can correct on the spot here, though, is that Open Access (OA) is not a “business model” — at least not for the research community (i.e., those who actually provide the research).

OA means providing free online access (“Gratis OA”) (and sometimes also certain re-use rights: “Libre OA”) to peer-reviewed research.

There are two ways for researchers to provide OA to their research: (1) to publish in any journal whose peer-review standards their research can meet, and to make their final, peer-reviewed version OA (“Green OA”), or (2) to publish in a journal that makes the publisher’s version-of-record OA, sometimes in exchange for a publication fee (an article processing charge, or APC) (“Gold OA”).

Only Gold OA is a “business model.”

One quick note for the record:

The reason I brought ROARMAP’s problems to the attention of Tim Brody is that his email address (without a name) continues to be the only one provided by ROARMAP as the contact for “any correspondence concerning this specific repository.” You say that the proper method of submitting corrections is to contact the person who registered the policy details, but if that’s the case then the site should say so somewhere. In any case, in the great majority of cases I checked, the contributing user was either you or Tim Brody, and in neither case is any contact information provided in the entry. (In a few cases that I checked, email addresses of the contributing member are provided.)

This is yet another serious problem with ROARMAP as a tool: if it’s intended to be self-correcting, then it is structurally unsuited for it. And it seems to me that anyone who truly cares about the health and future of Open Access ought to be more concerned with finding and fixing ROARMAP’s problems than with making dark insinuations about the motivations of those who bring the problems to light.

1. I believe you know my email address if you ever need to contact me. If not, it’s: (It’s faster than posting to the Scholarly Kitchen blog, which I don’t read unless someone draws my attention to something they think I should see.)

2. ROARMAP is a policy registry. Each entry has a “depositing user.” For the one you specifically called into question in your posting (Emory University: ) the depositing user’s email (as indicated) is:

3. As far as I know, no one else has yet written to point out that they had no reply from Tim Brody. Thanks for pointing this out. The repository information email will be fixed Monday (but please continue to contact the depositing user if you have questions about any specific entry).

ROARMAP is without question the most famous directory of OA policies, but at least two more exist; one of them is MELIBEA ( ), created by our working group, in which every OA policy is checked and linked to its website. Revising and updating information is very important to providing accurate information, even though nothing is perfect!

And here are the two splendid MELIBEA entries for the two ROARMAP mandates that Rick Anderson singled out as instances of “errors and misinformation.”

(This Thursday and Friday Alma Swan will be conferring at U. Minho about the creation of further spin-offs of the ROARMAP database, with ever more enhanced functionality. Meanwhile, Tim Brody has replied to me as follows [quoted with permission]: “Regarding the SK blog – if they had something positive to contribute I would pay attention, otherwise it’s just a waste of my time.” The contact details for ROAR and ROARMAP have now been updated: queries now go to me.)

Sad to hear one of the people running the database felt that including correct information was a “waste of time”. Glad to hear you’ve moved this responsibility away from this person.

Not at all. I agree completely with Tim in this particular case. (And what moved him away from ROARMAP was becoming a father! We merely updated the contact information.)

It strikes me that this sort of response continues the mindset that “who” does or says something is more important than “what” is being done or said. Always a bit confusing to me to see a movement that claims to be “open” that practices such exclusivity.

The MELIBEA entries for Emory’s and OSU’s OA policies are indeed much more detailed than ROARMAP’s. However, “splendid” is something of an overstatement, as they suffer from the same basic flaw as the ROARMAP versions: they fundamentally mislead the reader in regard to the mandatory nature of the policies in question. MELIBEA characterizes both institutions’ opt-out provisions as follows: “No opt-out of deposit but opt-out of immediate OA” (in the case of OSU it says “case-based opt-out of immediate OA”). But that’s not true in either case. The OSU policy says explicitly that “waivers are automatically granted,” and that in the case of a requested waiver deposit is “not mandatory.”

The Emory policy departs even more fundamentally from what is depicted in MELIBEA. Emory’s policy explicitly applies only to “articles the author has chosen to distribute as Open Access.” In other words, there is no mandate at all — not even a mandate of deposit. At Emory, the policy is an opt-IN policy — faculty are under no obligation to participate in any way, and don’t even have to request a waiver. It doesn’t constitute, by any meaningful definition of the term, a mandate.

So while MELIBEA is undoubtedly a more carefully-presented and thorough source of information about OA policies and mandates than ROARMAP, it is not (on the evidence provided here by Stevan) a more reliable one. In fact, Stevan’s examples demonstrate more egregious misrepresentation than what I found in ROARMAP.

I’m sorry to hear that Tim Brody feels queries from an SK writer are beneath acknowledgment or response. I’ll be happy to share the text of my email to him if anyone is curious as to its tone or professionalism.

I am very curious about your email, Rick, as I know how carefully you word things from some of our prior discussions. This has been a fascinating exchange, as well as illuminating.

One has to question the reliability of a crowdsourced resource. Complying with these mandates/requests is potentially critical for an academic researcher in order to continue to receive funding or to keep their job/advance in their career. Given the multiple erroneous entries you were able to find with just a cursory scan, I would worry about any researcher who would rely on this sort of database, rather than going directly to one’s funding agency or institution and carefully reading through every single detail of any relevant policy. So if these resources are inadequate for those directly affected by the policies tracked, who are they for?

As a publisher, it would be helpful for me to have an accurate database of these sorts of policies. That would make it much easier to set journal policies that best meet the needs of authors, their institutions and their funders. But, like most researchers, time is my most precious commodity. I don’t have the time to dig through an entire database and double check each entry to see if it’s right. Frankly, it’s more efficient to pay someone to curate a resource like this, and to hold them accountable for its reliability.

That’s why I think the publishing community would be better served by creating (and supporting) a resource of this kind in order to eliminate the uncertainty and the extra effort required.


I will make the charitable assumption that some Schol Kitch readers are researchers, interested in maximizing OA here, rather than just publishers interested in persuading the research community to “leave providing the OA [and the OA mandate information] to us [publishers]”:

The objective of OA mandates is to generate as much OA as possible, as soon as possible. There are many different mandate models, varying in strength and success.

The Emory and OSU mandates are variants of the Harvard default copyright-reservation mandate model. This is not a very strong mandate model, because in principle it allows opt-out on an individual case-by-case basis. The opt-out rate is only about 5%, however.

The version of the Harvard model adopted by Harvard FAS was upgraded so the opt-out applies only to the copyright-reservation clause, not to the immediate-deposit clause, but it still has no provisions for monitoring compliance, and no consequences for non-compliance. The Harvard FAS deposit rate is hence not much higher than the opt-out rate — despite the fact that about 60% of Harvard’s annual research article output is being made freely available somewhere on the web within a year of publication.

The UK has stronger mandates, such as the Southampton ECS mandate, but the strongest and most effective mandate model is the Liège/FRS/HEFCE model, in which immediate repository deposit (though not necessarily immediate OA) is officially designated as the submission mechanism for both research performance evaluation and research funding. Its annual deposit rate is over 80% and still rising. As a consequence, other institutions are now upgrading to this mandate model.

All these parametric variations of OA mandates are currently being systematically tested statistically for their success in generating OA in ongoing studies. ROARMAP is just a database for registering and linking the policies themselves, not for classifying or testing them. MELIBEA classifies the policy details more finely, but there is no point trying to catalogue every possible mandate nuance in an index. These are not publisher copyright-restriction details: they are pragmatic institutional and funder policies, intended to generate as much OA as possible, as soon as possible. And the current full text is always linked and available for each mandate’s specific details.

I have already posted some of the references for this ongoing research above. Anyone interested in mandate strength and success (rather than just in decrying “errors and misinformation” and in calling into question the research community’s competence to adopt and index its own OA policies) can consult these and other references — or, better still, they can use ROAR, ROARMAP, MELIBEA and BASE to do further analyses. All findings are welcomed by all who are interested in reaching 100% OA as soon as possible.

There are many different mandate models, varying in strength and success.

True enough. It’s also true that dogs come in many sizes and colors. But no matter how small and fluffy it may be, a dog is still not a cat. And no amount of saying “yes it is yes it is yes it is” will make it one. Similarly, OA policies that are not mandatory are not mandates, no matter how hard one may wish them to be or how many times one may insist that they are.

And with that, I’m finished with this particular comment thread. One can only insist that a dog is not a cat so many times. Feel free to have the last word.

I don’t think it’s a question of competence, more that this is a complex undertaking that may require more effort than doing it as a part-time hobby and using crowdsourced approaches that can be both time consuming and unreliable. I suspect that with more funding and more time, those involved would readily admit that they could do more.

And creating a database to track OA requirements is 1) non-exclusive and 2) a completely separate activity from providing OA itself. If publishers need to track these requirements and create their own database to do so, how does that hurt you or your efforts in any way? Your efforts are not sufficient for my needs. Is having more than one database a bad thing?

“Is having more than one database a bad thing?”

Not at all! Let publishers track whatever they like, whatever suits their needs. ROARMAP is OA and compliant with the OAI metadata harvesting protocol: You can start from there if you like.
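To make the harvesting remark concrete: OAI-PMH compliance means any repository can be queried with simple HTTP requests returning Dublin Core XML. The sketch below builds a `ListRecords` request URL and parses a record; the base URL and the sample record are illustrative placeholders, not ROARMAP’s actual endpoint or data.

```python
# Sketch: building an OAI-PMH ListRecords request and parsing a record.
# The base URL below is a placeholder, not ROARMAP's actual endpoint.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def oai_listrecords_url(base_url, metadata_prefix="oai_dc"):
    """Build a ListRecords request URL for an OAI-PMH repository."""
    query = urlencode({"verb": "ListRecords", "metadataPrefix": metadata_prefix})
    return f"{base_url}?{query}"

# A minimal OAI-PMH / Dublin Core response fragment, for illustration only.
SAMPLE = """<record xmlns:dc="http://purl.org/dc/elements/1.1/">
  <metadata>
    <dc:title>Example institutional OA policy</dc:title>
  </metadata>
</record>"""

def extract_titles(xml_text):
    """Pull dc:title values out of a harvested record."""
    root = ET.fromstring(xml_text)
    dc_title = "{http://purl.org/dc/elements/1.1/}title"
    return [el.text for el in root.iter(dc_title)]

print(oai_listrecords_url("https://example.org/oai"))
# -> https://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc
print(extract_titles(SAMPLE))
# -> ['Example institutional OA policy']
```

A real harvest would fetch that URL, follow `resumptionToken`s across pages, and remix the records into whatever database a publisher needs.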

And I’m sure publishers will always have more dosh for this sort of thing than researchers ever will, especially since it has bottom-line implications for them.

But a “part-time hobby”? Two and a half decades’ worth of dabbling? You really wounded my feelings, David!…

Sorry if that came off the wrong way; given your history and volume of effort, that’s not at all how I would characterize what you do. That said, you are a professor at Université du Québec à Montréal and at the University of Southampton. You are a tireless advocate for open access, and blog and post in great volume on a variety of websites and forums. Realistically, how much time in a given day can you devote to this one particular project?

However, if maintaining this database were the main focus of your job, odds are it would have more information, be more up to date, and be more accurate. For my needs, and the needs of academic authors who have their careers and funding riding upon compliance, the volunteer efforts of people with other full-time jobs may not suffice. That’s why I’m suggesting that at least some of us (and that includes both authors and publishers) might be better served by an investment in paying someone to curate such a list as a primary responsibility. Let’s make it as easy as possible for researchers to follow the requirements of their employers and funders.

Just jesting about the scars, David: a quarter century will thicken one’s skin! But I appreciate the empathic response. I think we’ve reached closure now. At the end-of-week conference at Minho I believe Alma Swan will be announcing that she will be taking over and upgrading ROARMAP with the help of EU funding. Alma always does things incomparably better — and more elegantly — than I ever manage to do. But I am not deserting ROARMAP now that it has survived its first decade: I will be collaborating with Alma in the upgrade and hoping for a teen-age growth spurt.

The question is whether this upgrade will include accurately describing the rule systems being collected, along the lines Rick presents here, beginning with not calling them mandates. It might be better if publishers did this as a service to their authors. The number of variables in these rules may or may not be large but the number of combinations is probably huge, as is the potential for confusion. Thus accurate representation is not trivial.

DW: “The question is whether this upgrade will include accurately describing the rule systems being collected, along the lines Rick presents here, beginning with not calling them mandates. It might be better if publishers did this as a service to their authors. The number of variables in these rules may or may not be large but the number of combinations is probably huge, as is the potential for confusion. Thus accurate representation is not trivial.”

Reme Melero’s much-improved directory MELIBEA already does this classification extremely well. The most relevant parameters are:

opt-out (waiver)?
copyright reservation?

This generates a weighted estimate of mandate strength. The weighting parameters will have to be adjusted according to actual studies of the correlation between these parameters and deposit rate. This is what Dr. Swan’s database will add, together with growth charts.
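The weighting idea can be sketched as a simple scoring function. The parameter names and weights below are hypothetical placeholders, not MELIBEA’s actual scheme; as the comment notes, real weights would have to be calibrated against observed deposit rates.

```python
# Sketch: a weighted "mandate strength" score from boolean policy parameters.
# All parameter names and weights are illustrative, not MELIBEA's actual scheme.
def mandate_strength(policy, weights):
    """Sum the weights of the parameters a policy satisfies.

    `policy` maps parameter names to booleans. Note that allowing an
    opt-out *weakens* a mandate, so its weight is negative here.
    """
    return sum(weights[name] for name, present in policy.items() if present)

# Hypothetical weights pending empirical calibration against deposit rates.
WEIGHTS = {
    "deposit_required": 3,
    "copyright_reservation": 2,
    "compliance_monitoring": 2,
    "deposit_tied_to_evaluation": 3,
    "opt_out_allowed": -2,
}

# A policy in the spirit of the Liege model: required, monitored,
# tied to evaluation, no opt-out.
liege_like = {
    "deposit_required": True,
    "copyright_reservation": False,
    "compliance_monitoring": True,
    "deposit_tied_to_evaluation": True,
    "opt_out_allowed": False,
}
print(mandate_strength(liege_like, WEIGHTS))  # -> 8
```

The point of such a score is comparative, not absolute: once calibrated, it lets policy-makers see which parameter combinations correlate with high deposit rates.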

So, no discernible need for a version from “publishers… as a service to their authors” — but they’re welcome to do it, as I’ve mentioned. All these databases are OA, harvestable and remixable.

I suggest, though, that the ScholKitch chefs stop fussing about the word “mandate”: A simple dictionary look-up will show that it is polysemous (as between permitting, sanctioning and requiring). So the pertinent features are whether or not OA is required (i.e., “mandatory”) and whether or not the requirement can be waived.

Some further parameters that will need to be added are:

compliance monitoring (how)?
consequences for non-compliance?
deposit as condition for evaluation?

The primary need for these parameters, however, is not so much “as a service to authors” (who need only know whether, when, where, what and how to deposit [and why!]) but as a service to policy-makers (institutions and funders), in order to demonstrate quantitatively the importance and effectiveness of the various policy parameters for optimizing their mandates.

Speaking of errors and misinformation, I see that the latest entry on Stevan Harnad’s “Open Access Archivangelism” blog leads with the following sentence:

In the Society for Scholarly Publishing’s Scholarly Kitchen, Rick Anderson complains of “errors and misinformation” in the ROARMAP registry of OA mandates and calls for publishers to provide this service instead.

As anyone who reads my posting will see, I have called for nothing of the sort. I’ve submitted a comment to Stevan’s blog and also asked him to correct this misrepresentation, but I have no idea whether the comment will be posted or the blog posting corrected, so I’m placing the correction here for the record.

His characterization of my suggestion is also inaccurate. By using the word “instead” he suggests a zero sum game where any actions or contributions from publishers would eliminate or prevent resources from other groups. This “us against them” mentality continues to plague some advocates who have been unable to move past what Cameron Neylon calls the “angry protest movement” stage of OA to the practical implementation stage. Having a listing of policies created by publishers would in no way harm ROARMAP, just as MELIBEA does not harm ROARMAP.

I remain confused by a movement that calls itself “open” while only selectively allowing those who pass a litmus test to participate.

DC seems to think that “I think the publishing community would be better served by creating (and supporting) a resource of this kind” is inaccurately described as “instead.” How about “rather” or “preferably”? (There seems to be a preoccupation in this Kitchen with terminology rather than substance, wrapped in the rhetoric of “errors and misinformation.”)

Words have meaning. As a scientist, would you publish a paper that freely interchanged the words “necessary” and “sufficient”? “Gene” and “allele”? “Axon” and “dendrite”? Wouldn’t that entirely change the meaning of your paper?

“Instead” means “in place of” or “to the exclusion of,” so it is inaccurate and greatly changes the nature of my statement, which was to express the value of the idea behind ROARMAP but to note that it falls short of meeting my needs, and to suggest the creation of a version that meets those needs. How about the phrase “in addition to” or “as well as,” which would more accurately portray my viewpoint?

Also, “skullduggery” means underhand, unscrupulous, or dishonest behaviour or activities. How is asking for accuracy from an existing resource or suggesting the creation of a (likely freely available) useful resource any of those things?

Science! In the (virtual) annals of the Scholarly Kitchen!

“‘skullduggery’ means underhand, unscrupulous, or dishonest behaviour”

And it also makes a good alliteration on “scholarly scullery”…

But “suppress the truth,” “deliberate deception,” “hiding the truth,” “religious dogma,” “heretics” etc. ain’t quite science either…

Now this has been fun; but all good things must come to an end. My interest is in substance (and OA), but the schefs seem to have more of a taste for semiology — with a dash of mudslingery…

So I’m back out of the kitchen: Compliments to the cuisiniers!

Those words were carefully chosen and remain accurately descriptive as far as I’m concerned. I think one of our interests here in the Kitchen is indeed accuracy. Facts are facts, and the words chosen to describe reality matter. If you have so little regard for accuracy, then my earlier suggestion of building a separate database from ROARMAP is even more important if it is to be a reliable resource.

Also, perhaps we should be given credit for our openness and willingness to provide a forum for discussion, as you seem unwilling to do for the comments I’ve left on your blog. “Suppressing the truth” indeed.

My blog asks for confirmation from the email address you provide. Rick confirmed, so his comment appeared. Your two comments keep waiting in the buffer for your confirmation. I don’t know how to override that (though I even tried removing the anti-spam: it didn’t work). I’ve appended your shorter comment and my reply under my own reply to Rick. Try re-posting the long one, but give an email address that won’t block the confirmation message. (I don’t know any other workaround with the Serendipity software.)

Thanks–I had confirmed the subscription to the journal and hadn’t realized one must repeatedly confirm one’s email address for every single comment left. I withdraw my statement above and apologize.

That said, your blog still mischaracterizes my statement, regardless of how you replied in the comments here.

Indeed it is not a zero sum game. Authors and publishers basically need a system or portal that facilitates compliance at the article level, by clearly communicating what is and is not required in each instance. This is not ROAR’s function.

RA: “I have no idea whether the comment will be posted or the blog posting corrected”

Comment posted. Call now attributed to David Crotty, as requested.