
There are a few key ways to make content free to the user. Some journals do it selectively, most often based on editorial criteria. Open access advocates seek to change the information landscape by flipping to an author- or sponsor-pays model, allowing readers to access information without toll barriers. Other publishers, mostly outside of STM publishing, use advertising models, metered or not, to grant free access to large audiences.

For open access advocates and editors choosing to make information free, the idea is that access barriers are the only things keeping people of all types — patients, students, parents, and professionals — from a vast storehouse of information. Removing the barrier of a paywall supposedly unleashes a vast treasure trove of knowledge to an eager, adamant audience.

Even if you accept that access was and continues to be a problem — a stance that usage data often belie and that studies find hard to justify with facts — the abundance of information created by open access and other models (e.g., advertising-supported, search-optimization-supported [content farms]) may have only revealed new barriers, some far more difficult and costly to address. A recent, compelling essay by Maria Popova at the Nieman Journalism Lab raises the possibility that abundance has created new types of scarcity — a scarcity of motivation, a scarcity (via obscurity) of rare items, and a relative scarcity of useful curators and guides.

Popova begins with an interesting distinction, one that open access rhetoric would do well to heed — the distinction between “accessibility” and “access.” From her perspective, what we would call “open access” is really “open accessibility,” meaning it’s possible for people to reach the published information without encountering a paywall. That’s accessibility. But “access” is something more complex — it involves knowing the information exists, being able to find it, and then actually engaging with it.

It isn’t access until it’s accessed.

The era of abundant information may actually have a demotivating effect on people, Popova suggests. It’s like living in a city but getting stuck in routines and never exploring it. Visitors arrive, enthusiastic about seeing all the spectacular places they’ve traveled hours to reach, while you sheepishly realize that you’ve never been to any of them, even though you live mere minutes away. These great places are so accessible to you that you never bother to visit them. Just knowing they’re nearby is enough, and is somewhat demotivating in itself. You’ll travel around the world to see a 300-year-old building, but never once visit the one 20 minutes away. Abundance and accessibility demotivate exploration. As Popova writes, when we encounter a great new resource . . .

. . . more likely than not, we shove it into some cognitive corner and fail to spend time with it, exploring and learning, assuming that it’s just there, available and accessible anytime. The relationship between ease of access and motivation seems to be inversely proportional because, as the sheer volume of information that becomes available and accessible to us increases, we become increasingly paralyzed to actually access all but the most prominent of it — prominent by way of media coverage, prominent by way of peer recommendation, prominent by way of alignment with our existing interests. This is why information that isn’t rare in technical terms, in terms of being free and open to anyone willing to and knowledgeable about how to access it, may still remain rare in practical terms, accessed by only a handful of motivated scholars.

EndNote, citation management tools, special directories on our hard drives, bookmark lists, and so forth — all are ways to know where information is without actually accessing it fully. Many times over the past decade, I’ve encountered users who manically store PDFs and images on their hard drives, yet never access them again. It’s the information packrat mentality — these same people probably would have ripped and filed articles in another era. But once they “have it,” their motivation to explore, understand, or contemplate the information evaporates. Accessibility actually demotivates access. If I know I can get it any time, why bother?

Then there’s the problem of what people know about, share, store, or recall. Popova argues that because general search engines drive so much awareness — and because the first page of results dominates — obscure items are made even more obscure, despite their accessibility. In fact, general accessibility amplifies the prominence of general information resources (recall the rise of Wikipedia entries in search results a few years ago), making perhaps more valuable but less accessed information even more obscure than it was before.

The asymmetry of search engine algorithms makes the rare even rarer.
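This rich-get-richer dynamic can be illustrated with a toy simulation (my own sketch, not how any actual search engine ranks results): if each search rewards items in proportion to their current visibility, the most-visible items capture an ever-larger share of attention, while items that start slightly behind fall further back.

```python
import random

random.seed(42)

# Toy model: 100 items start with nearly equal visibility. Each "search"
# clicks an item with probability proportional to its current visibility
# (a stand-in for first-page ranking), and each click makes that item more
# visible in future searches -- a simple preferential-attachment dynamic.
items = [1.0 + 0.01 * i for i in range(100)]

def top_share(scores, k=10):
    """Fraction of total visibility held by the k most-visible items."""
    return sum(sorted(scores, reverse=True)[:k]) / sum(scores)

before = top_share(items)
for _ in range(10_000):
    pick = random.choices(range(len(items)), weights=items)[0]
    items[pick] += 1.0
after = top_share(items)

print(f"Top-10 share of attention: {before:.2f} -> {after:.2f}")
```

Under these assumptions, the top items' share of attention grows over time even though nothing about any item's intrinsic accessibility changed, which is the asymmetry Popova describes.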

Dealing with this obscurity is a challenge curators must answer, Popova believes, offering several compelling examples of hidden gems that were as accessible as anything, but not accessed much — until a curator with an audience pointed to them and explained why they mattered. When this happened, access shot through the roof, even though accessibility changed not a jot. As Popova writes:

. . . since curiosity is the gateway to access, we can’t outsource access, even in the context of the greatest possible accessibility.

Publishing more and more information creates a commensurate obligation to curate, highlight, explain, elucidate, point out, identify, contextualize, and share the information we’re putting out for our audiences. At our PowerPoint Karaoke session at the SSP Annual Meeting in Boston, Geoff Bilder suggested that a valuable new service might be to curate out the most interesting papers from PLoS ONE. In concept, I completely agree. In practice, who would or could pay someone to do that? PLoS ONE is a mishmash of domains, while commercial models are usually successful when they’re targeted at an audience, not around a storehouse of content. Which audience would the curator address?

Thinking that accessibility is enough to guarantee access falls well short of the mark. However, paying curators, shepherds, editors, guides, and commentators is something we don’t have a model for right now, at least not at the scale we need. And there is often no academic incentive for doing it.

Yet, in order to turn the potential into the actual, that’s precisely what our information age might need the most.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

18 Thoughts on "Does Access Create New Types of Scarcity?"

If ‘accessibility’ is there, then ‘access’ can take forms other than humans downloading articles — that is, the literature can also be accessible in the sense of being machine-readable. We need ways to assess, as comprehensively as possible, the overwhelming (and growing) amount of available knowledge, how it ‘hangs together’ and where it is moving (including hitherto mostly hidden negative results), and that cannot be done by reading alone. There is simply too much. Read this article in the BMJ by Fraser & Dunstan if you need convincing: “On the impossibility of being expert,” BMJ 2010; 341:c6815, doi: 10.1136/bmj.c6815.

Creating useful overviews, from which at any one time you can still go to the underlying article to read the narrative if desired, will help tremendously. But those articles would have to be accessible, of course, or they are even less likely to be consulted. Systems such as the EU Innovative Medicine Initiative’s project OpenPHACTS are making great strides in addressing the issue of extracting and (re)presenting vast amounts of scientific information in usable, comprehensive, and intuitive ways (http://www.openphacts.org/).

A cursory examination of OpenPHACTS.org makes me wonder — led by a Pfizer representative, funded by an industry group, etc. This is the issue — funding of these syntheses of information. If it’s industry behind it, is the objectivity there?

OpenPHACTS is a public-private partnership between the EU and a wide consortium of universities and companies, large and small. Because its aim is to build technical solutions and an open platform (note the ‘open’) for dealing with, reasoning over, and creating overviews of attributable scientific information, and not to judge the content itself, objectivity is not really a relevant concept in this context.

I call what you are talking about findability, and yes, the more we have access to, the harder it is to find the right stuff. But this is where so-called smart search technology comes in, and there is a lot of development going on. It is a huge challenge in cognitive science and artificial intelligence, one of the great scientific problems of our day. And we are making real progress, so I am optimistic. We are keeping up with the glut, as it were.

But in any case, it is certainly not a reason to restrict access.

Yet, as this provocative essay points out, the technologies we’re using often conceal as much as they reveal, and they certainly don’t know how to “sell” a project or capture the imagination of an audience, at least as far as I know. For many, the information glut feels overwhelming still, and coherent stories about what it all means and could mean are lacking.

It is a mistake to ask Google to do all our work for us. Yes, search algorithms can find things, but as the mass of information grows, online search itself becomes overwhelmed. I should add that it is a profound mistake to believe that something must be open access in order to be searched and discovered. This is entirely a function of Google’s policies. Google already indexes book content that is not open access, but Google will not do this for the general Google Web Search.

There’s a different type of “access” that often goes unremarked upon in these sorts of discussions, and that’s whether the reader, regardless of the “accessibility” of the material, has the expertise required to access the information presented in the material. Discussions about access often talk about making research material available to the general public, but never seem to acknowledge that most of the general public doesn’t have the training to read a scientific paper.

I have a degree and have been reading scientific papers for over 25 years, yet once I’m out of my field (biology), the going gets extremely tough. I’m not going to glean many of the details from a mathematics or physics research paper, even with my experience. How then, is my grandmother going to get anything out of having access to those papers?

Journal articles are necessarily written in jargon, with shorthand terms likely unclear to a non-expert. They assume a body of knowledge in the reader, otherwise each individual article would stretch to the length of a textbook. So let’s say the entire world switches to an author-pays model. Grant money still goes to the publishing industry, albeit through a slightly different channel, and for the non-expert, what really changes?

The curated collection of PLoS ONE papers that Geoff proposes sounds very much like the “overlay journal” concept that has been kicking around for some time for subject repositories like arXiv.

While interesting in theory, no one has figured out how to make them work and whether anyone is willing to pay for such a service.

When you start working through all of the caveats, you end up with something that resembles a traditional journal.

Right now, the “overlay service” du jour is Twitter. Lots of people do actually provide free curation of what they are passionate about. This is not at all systematic, but the reality is that a lot of us have come to depend on it just as we came to depend on Google.

Let’s not forget the element of trust. One reason the Twitter model works for me is that I know who’s pointing me to something and I can pick whom I want to follow. One reason I have come to basically ignore discussion groups on LinkedIn etc. is that they are dominated by — how can I say this tactfully? — the naive or ill-informed or curious-but-clueless. (Tactful enough?) Sure, when Paul Topping has something to say about MathML, I want to know what he has to say. But frankly there is so much dreck in most of those conversations that the signal-to-noise ratio is just too low to bother with. (I’ll make an exception for one on HTML5 that I let myself get sucked into a few months ago, to my admittedly great benefit. Took a lot of time, but it was a great conversation.)

Another big curation source for me is the many industry or standards working groups I belong to. Those are full of smart, committed, articulate people — and most of those conversations are totally public. Hardly a day goes by that I don’t get pointed to something invaluable that I never would have discovered on my own.

I guess what I really want to point out is that especially in academia, compensation comes in the form of respect, reputation, etc. — and the need to contribute (which is what motivates all those working group folks) — more than money. Curation happens.

The problem noted here means that if and when scholarly monograph publishing becomes largely OA, there will still be a need for (1) marketing by publishers to targeted audiences to stimulate access and (2) reviews in respected academic journals to assess the value of the content.

What you’re talking about here, in part, is the new economy of attention combined with our very human habit of wanting what we can’t have more than what we can. Regarding the former, yes, our constant “piling on” of information “solutions” means it becomes more and more difficult to get your customers, and ultimately your readers, to spend their increasingly spread-thin “attention dollars.” But I’m not sure the latter point is as important as you make it out to be. At least for academic information, the primary desire is for the content itself. Any blurring of that desire by our desire for the unattainable is minimal at best. Or at least, I would hope so in the case of competent academics.

To some degree this behaviour modification can be seen in the decreasing number of cited works in bibliographies as electronic resources (and, more importantly, online indexes) have flourished. The theory, I guess, is that when users had to work hard to find material, they kept every little morsel they could find. But when it’s easier to find, they can be more selective and (in my mind) more honest about what’s truly relevant. I think this applies to OA as well: there are fewer barriers to get at content, so it’s less of an issue if something is found but not referenced. You haven’t wasted as much time, so it “hurts” less. But the drop isn’t huge, so it’s not a major factor. It actually seems for the best; it’s clarifying the web of literature.
