Though often used pejoratively, the concept of the “ivory tower” rests on a separation from the concerns of the everyday: the idea of academia serving a higher purpose than the rat race of business. Last week Rick Anderson wrote about the higher standards of behavior required from scholarly publishers, and watchdogs continually call out any deviation from the strict rigor that characterizes academic research itself. As technology startups continue to hold more sway over scholarly communication, should we hold those companies and their business practices to the same high standards? Academia.edu’s recent marketing campaign offers an interesting case study.
In the era of the app, everyone is a developer. The lure of the Silicon Valley gold rush draws in more and more former researchers. Given how rare academic jobs have become, the modus operandi for many leaving graduate school is to draw on their limited experience, focus on one part of the research process they found difficult, and create an online service to address it. Suddenly a startup business is born. Some of these businesses are bad ideas, many are redundant, yet a few offer interesting potential new directions.
Profit is a powerful motivator: it’s why in the US we have the Bayh-Dole Act, which allows researchers and their institutions to own the intellectual property produced from taxpayer-funded research. For-profit startups are where much of the interesting action can be found in the world of scholarly publishing. There’s nothing wrong with patronizing a for-profit service, but should we ask that such services live up to our community standards?
We rail against predatory publishers, just as we frown upon journals caught gaming the Impact Factor. And so we should. But shouldn’t we take just as stern a look at misbehavior from startups that look to play a part in the scholarly community? What are we to make of, for example, a company offering researchers an increase in citations through use of its service (a service that seeks to profit from the data it aggregates about its users), with no evidence to support such claims? I am of course referring to the marketing campaign built around the flawed and un-reviewed study commissioned by Academia.edu, discussed yesterday in the Scholarly Kitchen.
For a researcher to gain credibility, their work must go through peer review. Whether the journal reviews for significance or simply for accuracy, one key question must always be answered: are the conclusions supported by the data? Here we have a company asking for the trust of the academy that wasn’t willing to follow that one guiding principle.
The problem is evident in the marketing campaign promoting the study, which glosses over even the more cautious conclusions of the authors (Academia.edu’s own employees!). Compare the following two statements:
“While our study is observational, and it is difficult to conclude a causal effect, we do find the citation advantage is substantial even after controlling for some potential sources of selection bias.”
“A recent study found that papers uploaded to Academia.edu receive an 83% boost in citations over 5 years.”
The first statement comes from the research study itself and, while problematic given the flaws in the study, at least clearly acknowledges the lack of proof of any causality whatsoever. The second statement, used in Academia.edu’s advertising campaign and on the homepage of their website, essentially promises the reader that if they use this product, they will indeed receive a free lunch.
If we are deeply skeptical of researchers who practice “science by press release,” then what are we to make of Academia.edu? They sponsored a study and, rather than submit it to peer review, sent it out to millions of customers and featured it in a widespread public relations effort, including articles in major mainstream publications. A quick look at the data shows a major flaw in the study, but the cat is already out of the bag.
A better case for the dangers of replacing pre-publication peer review with post-publication peer review could not be made.
We would drum any scientist who tried to pull a fast one in this manner out of the academy in a heartbeat. Shouldn’t we hold startup companies, particularly one that “wants to become a full-fledged academic publisher,” to the same standard?
Even if any of the supposed citation advantages of various products that have been touted were true, they would be short-lived at best. If a service actually offered a cynical but effective cheat code for citation, then everyone would sign on and the playing field would quickly become level once again. But venture capital investors only need to think in the short term. Build it up and sell it off; let the long term be somebody else’s (or everyone else’s) problem.
Silicon Valley and the venture capital business model run on a very different set of principles than the academy, and ultimately serve different masters. The question one must always ask when dealing with any private interest, on any level, is, as Jason Kottke recently pointed out, “who’s the customer?” For a startup, the focus is on building value for the company’s investors rather than serving the needs of the users or the research community.
We often complain about the Impact Factor, particularly that it lacks transparency and reproducibility, in part a consequence of relying on a megacorporation owned mostly by a holding company. Yet the suggested new metrics are often just as privately owned and opaque. Likewise, efforts by startups to create systems for peer review credit, peer review itself, discovery, article sharing, social bookmarking, and the publication and archiving of research data shift academic functions further into private hands and behind closed doors.
When we rely on Google or Google Scholar, we need to be aware that the main purpose of Google is to serve the needs of advertisers. Google shows us what its advertisers want us to see, not the best objective answer to our queries. What portion of the scholarly literature effectively disappears from view when Google shifts its algorithms to better serve the needs of those advertisers?
If we are to build a system of career credit and reputation assessment that uses social media like Facebook and Twitter, we must again acknowledge that those systems are not designed to be transparent, consistent, or fair; rather, they are designed to maximize the data they collect for the benefit of advertisers. As users, we must understand that each of these companies actively degrades the user experience for the benefit of its real customers.
Judging by the ever-increasing number of entrants into the market, scholarly communication remains a strong draw for businesses. The academy is essentially in the driver’s seat here, and it has the power to set standards for what is acceptable.
Can’t we do better than this?