According to Wikipedia, a megajournal is, “a peer-reviewed academic open access journal designed to be much larger than a traditional journal by exerting low selectivity among accepted articles.” The term itself has an unclear heritage, but was promoted memorably by Peter Binfield when he was at PLOS. Some say Binfield coined the term. Over the years, the “rise” of the megajournal has been a consistent source of excitement or concern, depending on your perspective, and the apparent success of the megajournal model has attracted a number of imitators and competitors.
Toward the end of March, I decided to tour a few of the megajournals that have cropped up over the last decade amid many older homes.
This tour was done more out of architectural interest than anything. Just as McMansions (extra-large houses with many oversized rooms) emerged as a fad in real estate, and only later were understood to represent a long-term trend and preference as “open” floor plans and larger houses became popular – with kitchens, family rooms, and dining rooms barely distinguishable – megajournals are our new(ish) kids on the block. Do they represent a long-term trend? What makes them different? And which parts might persist beyond the fad stage?
Caveats for this tour abound – I did not spend more than 30 minutes on any one site, I visited each site only once (so that day’s experience is only a sample of the overall experience), and I didn’t tour all of the megajournals. I limited my tour to four that come to mind when I think of the category – PLOS ONE, Scientific Reports, Nature Communications, and Heliyon. I drove by a few others on the trip, so they get some mention.
Here’s what I found.
Big variations in the volume of articles. PLOS ONE still takes the crown for volume of articles, publishing 100+ papers per day. Nature Communications and Scientific Reports are also publishing a lot of papers, but only about 10-20 per day. It appears Heliyon is struggling to meet the megajournal pace, sometimes publishing nothing for days at a time. But it’s new, and entered the game late. Maybe too late? One big question is what a huge volume of publication means for a journal and for a community. A recent editorial in the European Spine Journal despaired about the rapid influx of papers of limited utility or low quality. In nearly the same breath, the author mentioned a seminar in Asia to help authors get their papers accepted in the European Spine Journal. It seems everyone is seeking papers, even if they’re feeling conflicted about the situation. But the effect of more papers seems to be neutral at best. In the megajournals, a scan of their article-level metrics indicated that most articles aren’t being read much soon after publication, which suggests low initial awareness of their existence among relevant audiences, or very small relevant audiences, or some combination of the two. Email alerting services across the megajournals were uneven, with the Nature properties doing a better job, and PLOS ONE offering options that I had a hard time finding and hesitated to use, so vague were they about what I’d receive. Given the volume of articles, even the useful email alerts I’ve received are pretty overwhelming.
Home pages are minimally curated, have different purposes. In most of the cases, home page design seemed focused more on presenting a landing page experience geared toward author marketing than a content experience geared toward readers. Large “Submit” buttons and solicitation zones dominated many of the home pages of the megajournals, with Heliyon and Scientific Communications being especially noteworthy. In Heliyon’s case, a “Submit” widget followed me around the site – which is interesting, because of all the sites, Heliyon seemed the least like a megajournal when it came to pace of publication (not as many articles being published yet). Across the set, home page content was usually presented as a reverse-chronology list. For PLOS ONE, on the day I visited (March 22, 2016), the fact that the home page article set for Recent Articles was driven by a search algorithm meant that every article on the home page was a Correction notice. Six corrections had been published at the end of the previous day, apparently. This de-emphasis of design and effort around the home page makes absolute sense, and is something I wish traditional journals would get their heads around – home page traffic as a percentage of overall traffic is falling, content discovery via the home page is a diminishing traffic driver, and editorial/design time spent on perfecting the home page can seem an exercise in vanity. Google and CrossRef and social media have really driven us into the article economy when it comes to discoverability, and home pages don’t matter as much. One other journal that received a quick look for the home page experience was SAGE Open, which had an especially stark design and implementation, with a utilitarian layout and searing white background.
Article designs aren’t as developed as you might imagine. You would think article design would be a strength of the megajournals, but I didn’t see any sign of obvious techniques to improve readability or discoverability. Overall, the designs are very similar – single-column with section links for jumping around, a prominent PDF link, and some sharing tools. Then again, why would you expend resources on readability and discoverability when the business incentives are around more submissions? Even sharing tools were generally poorly implemented – a common design condition on many journal sites – and the areas in the rails (right or left) were the usual jumble of what always seems like compromises from meetings. Going to the PDF, most megajournals provided a simplified document with a single-column layout and embedded figures. Nature Communications provides a two-column PDF, which fits the more traditional feel the journal has overall. The single-column PDF is nice enough, but hearkens back to the manuscript. This approach is becoming more common as production costs for 2-3 column PDFs have driven more publishers to the single-column solution, which is cheaper to automate. In the case of Heliyon, the PDF felt very much like a gussied-up author manuscript. Design at Heliyon overall (site, usability, PDF) was underwhelming. Despite a lot of talk about article-level metrics at the megajournals, these were also a bit spotty – not all that clearly available, updated somewhat idiosyncratically, and not as well-presented as those I’ve seen on some traditional journals.
Publication and production times aren’t all that quick, seem to trend toward the mean. It’s common for submission, acceptance, and publication dates to be listed. Overall, it seems each journal takes about two weeks from acceptance to publication, but times between submission and acceptance weren’t all that quick, often months in length. Checking on this beyond sampling and a general impression would require a robust study, which is entirely possible. This is not that. But my brief review left me with the impression that these journals are taking months to accept submissions, and two weeks to produce articles from accepted manuscripts. Given what is an increasingly common and commoditized production and review environment (these journals likely outsource various functions to some common service providers), the notion of an emerging de facto standard for review and production times may not be that surprising.
More regional science and findings. One striking thing across the titles was the number of trials that have regional flavors. This was most apparent in PLOS ONE, but it existed elsewhere. Swaziland. Eastern Morocco. The Maldives. Egypt. Southeast Asia. South America. I can’t comment on the merits or value of these studies – small ecosystems and local populations can hold great interest for science and medicine – but the ratio of these seemed higher than I’ve seen elsewhere, and suggested niche scientific and regional audiences.
Subject and topical execution varies. One challenge for megajournals is categorizing their content so that it’s usable for researchers in particular fields. It’s not a challenge they’ve conquered in any innovative way, really. Their approaches seemed pretty stock, and in one case, downright puzzling. Some divide their content up better than others. Scientific Reports seems to have done about the best job of the set, allowing initial searches to be categorized easily and using tagging well across the site. Heliyon does a poor job – their content tags on articles aren’t even hyperlinked, which was the head-slapper in the group. PLOS ONE is mediocre in this regard, with decent tagging that seems a little overcooked, and a search engine that I recall as being better a year or two ago than it is now (it seems they tried to simplify it, but instead made it into a simpleton). PLOS ONE has a feature that I doubt anybody uses – tags that let users indicate whether the tag was useful or not.
Technical execution varies. The megajournal sites I visited varied quite a bit in obvious technical execution. Of the set, Heliyon was the laggard. The site felt like it wasn’t fully baked, and usability problems were pretty obvious (example: if you have the search window active, you can’t access the links to download the PDF or use other article-level services). At PLOS ONE, a persistent message of “Loading metrics information . . .” appeared on each article as listed in summary form, as if the system were struggling to access a related database. Scientific Reports was the best of the sites, with snappy performance, clean design, strong usability, and no obviously broken features. Nature Communications wasn’t far behind.
The submission experience. I couldn’t go far into the submission experience without creating bogus credentials and uploading a bogus paper (not that this would represent entirely new territory). In some cases, calls to action for submissions were very aggressive (Scientific Reports and Heliyon). Across the megajournals, there were some noteworthy differences. PLOS ONE has a “Publish” item in its headers, and this leads to a massive dropdown menu that, once you find the “Submit” entry, leads to a landing page, which forces another click before landing on a branded Editorial Manager frontend. For Heliyon, you’re redirected to EVISE (funny name), the Elsevier submission system. Heliyon also has the widget that chases you all over the site asking for submissions. Both Nature Communications and Scientific Reports resolve to the same submission form (except for different branding), but taking the Scientific Reports path forces you through a landing page first, with the link to the submission system a little hard to find, and a more prominent “Publish >>” button that oddly makes the landing page reload instead of taking you to the submission system. (It’s also odd to call a “submission” button “publish,” but I’ll let that little bit of semantics speak for itself. PLOS ONE does the same thing.)
Impact factors. In addition to touring the sites, I looked at the impact factors for the titles that have them (PLOS ONE, Scientific Reports, and Nature Communications). Heliyon is too new. Nature Communications, which struck me as the most like a traditional journal in its design and “feel,” has the highest impact factor (2014 = 11.470). This may be partially due to the fact that it carries the Nature brand, which has been shown to be a powerful attraction for authors and citations, as well as pointing to an impressive publishing infrastructure. Scientific Reports is an unbranded Nature journal, with an impact factor (2014) of 5.578. Both Nature megajournals reside within the www.nature.com domain, a discoverability benefit, I’m sure. As mentioned above, the Nature titles also had better email alerting facility. Since citation begins with awareness, this may be an element of their impact factor success. PLOS ONE trails the group with a 2014 impact factor of 3.234. But what truly jumps out is the vast difference in published articles between these three. For the prior two years, PLOS ONE published 54,945 citable objects (scholarly articles), while Nature Communications published 2,297, and Scientific Reports published 3,278. Together, the two Nature “megajournals” accounted for just slightly more than 10% of what PLOS ONE alone published. What is the threshold for “mega” these days? More on this later.
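For readers who like to check the arithmetic, the “slightly more than 10%” figure falls out directly from the citable-item counts quoted above (a quick sketch, nothing more):

```python
# Citable items ("scholarly articles") over the prior two years,
# as quoted in the impact factor comparison above.
plos_one = 54_945
nature_communications = 2_297
scientific_reports = 3_278

# Combined output of the two Nature "megajournals"
nature_total = nature_communications + scientific_reports

# Their share of PLOS ONE's output alone
share_of_plos_one = nature_total / plos_one

print(nature_total)                 # 5575
print(f"{share_of_plos_one:.1%}")   # 10.1%
```

That 5,575 combined figure is the same one that recurs in the PNAS and Journal of Biological Chemistry comparison later in the post.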
Odds and ends. Abstracts in PLOS ONE varied, from structured (mostly biomedical papers) to unstructured. And the structured ones varied in structure. It seemed almost as if each domain culture imposed its abstract habits on the journal. This may speak to the lack of curation and form imposed by some megajournals. SEO for all the journals seemed decent, with Nature’s titles even reconciling one of their so-bland-it’s-confusing titles properly (e.g., search for “Nature Reports,” and you get Nature Communications; but if you search for “Scientific Communications,” you don’t get Scientific Reports). Not a lot of care was taken with the tone of article titles. For instance, in a few cases, I was drawn into a negative trial because the title was written as if it could be a positive trial. While I’m still a fan of negative trials in theory, in practice I think they face a higher bar to generate interesting results, and these didn’t seem to hit those heights. Certainly, an editor could have demanded a title that wasn’t slightly misleading. It made me wonder if some of the suggestive language authors can sneak into papers to goose their results’ apparent importance was also addressed during editing.
So, what does it all mean? After touring these sites, and thinking about why they emerged, how they fit into the overall scholarly information economy, and how that economy itself has changed in the last few years, I am left with three overriding thoughts.
What is a megajournal? We may want to rethink the use of the term “megajournal.” While these journals may have been designed to be much larger than traditional journals, some clearly are not meeting the standard. Except for PLOS ONE, they aren’t that big. For example, Nature Communications and Scientific Reports published fewer articles in 2012/13 combined (5,575) than the Proceedings of the National Academy of Sciences (7,704) or the Journal of Biological Chemistry (7,321). Does a journal have to be OA for it to be “mega”? It’s interesting to contemplate that a journal format specifically designed to be boundless has apparently only once truly achieved what the “mega” in its name suggests. We might also have to reconsider whether there is a “rise” of megajournals, or whether they are reaching the end of a growth phase in both quantity at the journal level and number of titles with megajournal characteristics overall.
Article commoditization. Whether or not there is truly more than one megajournal, the concept represents part of a strong trend toward the commoditization of articles by publishers in response to the atomization of the journal by search engines and social media. Megajournals are not the only hallmark of this trend in publishing. You see a different approach to commoditizing articles with the portfolio approach of the bigger brands, with cascading journals and so forth. Whether a brand captures thousands of papers in a system of journals (is this a “megaportfolio”?), or a single journal captures thousands of papers in one title, seems to matter less and less. The competition starts with papers, and more publishers are entering the fray, making papers more of a commodity.
Whither the reader? A lack of focus on the reading experience is a glaring Achilles’ heel of the commoditization of articles. In many fields and at many career stages, readers still respect, seek, and want a curated experience. They understand and appreciate how editors string related content together, surround important new research with editorial perspectives from respected peers, set the pace and priority in a field, and create hierarchies of content to suggest consonance or dissonance among ideas.
I’ve been speaking with scientists and physicians quite often lately. Listening to them, you realize that their relationships with journals, as readers, can be dramatically different from how we’re treating journals these days. These relationships are often more emotional, long-lasting, and essential to their identities as professionals. While they use Google and PubMed and Scholar, they often put journal brands into their searches. And many don’t search online for content. They take what they get, adopting that “if it’s important, it will find me” attitude. But are we doing enough to set content on that course?
The move into treating articles as a commodity seems mostly due to economic necessity – i.e., since price increases are off the table by and large, and the volume of papers continues to increase, the economic option that remains is to increase publication volume. So, we’re making and selling more articles. Meanwhile, readers are largely being left out of the equation. This is the hidden cost of a neglected information economy.
On the bright side, these journals are serving a community need – there are too many scientists who need to publish, and providing outlets helps solve their immediate problem. But it still feels rather cynical, as the long-term issue with what these represent – high-volume publication services for authors who value speed and convenience – is that we are losing sight of the readers, a critical connection for authors, editors, and publishers. And a critical connection for scientific advancement.
My overall view of the megajournals I looked at boils down to this:
- Not nearly as innovative with technology or UI/UX as I would have hoped
- Weak implementations of social sharing technologies (they are not alone)
- Strikingly similar to one another in some ways – home page approach, PDF design, use of author appeals, publication and production times, etc.
- Difficult to use as discovery platforms, but likely effective enough in search engines
- Not something I’d seek to read, as the relevance of the content is really uneven and unpredictable – these are not created for anyone, but represent decent services to time-starved authors
22 Thoughts on "The New(ish) Kids on the Block – Touring the Megajournals"
Science Advances is conspicuous by its absence here. Why did you leave AAAS so soon after launching it?
Science Advances is not specifically positioned as a megajournal. It fits more with the idea of an extended journal portfolio, and was described at launch similarly: “an extended forum for high-quality, peer-reviewed research.” As I note in the post, the term “megajournal” is ill-defined, so slotting something within the definition isn’t straightforward. I relied more on what I perceive to be a viable competitive set for the time I spent assembling the review.
To the other question, you may not know that I had to commute between Boston and DC every week for more than a year. It was harder than I imagined it would be. My decision had nothing to do with Science Advances. In fact, I was able to get two other journals approved by the Board (Science Robotics and Science Immunology) before I left. AAAS is an important organization. It’s just in the wrong city for me.
I think Science Advances is to Science what Nature Comms is to Nature – and as such should have been included here (though I get that it is still quite a young journal – but so is Heliyon).
I probably won’t be the only person to think that – and based on the differences between Nature Comms and the other journals you analysed, listed by Phil in his comment, I would say that I am more than justified in my opinion.
P.S. Is the portmanteau of Scientific Reports and Nature Comms in the second point you discuss intended or a Freudian slip?
Competing interests: I am an employee of BioMed Central. The comment expresses my personal opinion, though.
Kent, I’m not sure I’d call the four McMansions you selected for home evaluation all megajournals. While PLOS ONE and Scientific Reports focus on “valid” or “scientifically sound” as a criterion for acceptance, Nature Communications explicitly requires that papers “represent important advances.” Like the term “predatory publisher,” perhaps “megajournal” has turned out to not be a very useful term. -Phil
“PLOS ONE accepts original research in all scientific disciplines, including interdisciplinary research, negative results and replication studies – all vital parts of the scientific record.” (http://journals.plos.org/plosone/static/publish)
“Nature Communications will publish high-quality papers from all areas of science that represent important advances within specific scientific disciplines, but that might not necessarily have the scientific reach of papers published in Nature and the Nature research journals.” (http://www.nature.com/ncomms/about/index.html)
“Scientific Reports is an online, open access journal from the publishers of Nature. We publish scientifically valid primary research from all areas of the natural and clinical sciences.” (http://www.nature.com/srep/about)
“Heliyon is an open access journal from Elsevier that publishes robust research across all disciplines. Our team of experts ensures each paper that meets our rigorous criteria is published quickly and distributed widely.” (http://www.heliyon.com/about/)
Scientific Reports definitely meets the size criterion for ‘megajournal’. Scopus data says it published 10,856 articles in 2015 (compared to 29,346 for PLoS ONE). That’s bigger than all but perhaps the top 25 publishing houses. Big, in other words!
Personally, I dislike the term “megajournal”; it sounds somewhat pejorative to me. It seems to me that the major distinguishing feature of this class is the editorial decision-making process – selecting articles which are sound and leaving it at that, rather than going on to apply the additional criteria of impact, originality, etc. We have chosen to call that “objective peer review” (though that term is also not entirely satisfactory). I think that such journals are a valuable addition to the available literature as they help to address the publication bias in the more selective journals and provide a home for more confirmatory findings, negative results, etc. It so happens that such journals also seem to have very broad subject scope, but I see that as secondary. There is no particular reason you couldn’t use this editorial model on a journal with narrower scope (and hence presumably remove the “mega”-ness). In my view, the question is the other way around – whether we also still need the highly selective journals? At the present time, given the way the reward system works, we almost certainly do, but I would like to see us move more to a system of reward and research evaluation that is less based on these very highly selective journals.
I’ve been muttering about the term “mega-journal” since 2011 (see my blog here http://blogs.bmj.com/bmj/2011/03/22/liz-wager-journals-that-dare-not-speak-their-name/) but, despite its lack of accuracy, it seems to have stuck. I wish we could find a better name! Perhaps the Scholarly Chefs could come up with something (or run a competition for this).
If Mike Taylor couldn’t crack that nut, I’m not sure we’d be able to either:
The group of journals that use what Stuart Taylor here calls ‘objective peer review’ are useful to discuss as a whole, and the term ‘megajournals’ is the only single-word term that has been used for them. ‘PLOS-One-like journals’ is another alternative, as is ‘journals that don’t select for significance’. The ‘mega’ part is a bit of a red herring, but the megajournal business model means there is no limit on the number of articles published, which gives all the journals in this category the potential to get very large, even if they aren’t yet.
I propose that we use the term megajournals to mean ‘open access, online-only journals that select papers on the basis of the science being sound not on perceived significance’. By this criterion PLOS One and Scientific Reports are megajournals but Nature Communications and Heliyon are not, as they are more selective than this. In my workshops for researchers on choosing a journal I recommend authors consider a megajournal when they want to make their results available soon and aren’t overly concerned about the journal name as a mark of prestige. I point out, however, that many of them don’t copyedit articles, something that may explain some of the variation that Kent found.
A wider survey of megajournals using some of the criteria Kent looks at here (but looking at other journal and article-level metrics, not just impact factor) would be very useful. One interesting point Kent makes is that the search and discovery functions of these journals, which become more and more important as they get bigger, are not yet adequate. The ability of readers to rate articles by how interesting they are would also help show which articles are of great interest to the community, given that this is not being done by editors before publication; this is not AFAIK being implemented by any megajournal yet.
Heliyon would actually qualify:
Heliyon publishes papers that report sound science. We ask therefore that you judge the study on technical soundness only and not on the relative advance it may provide or impact it may have on your field.
I’m interested that you separate out Heliyon from PLOS One and Scientific Reports, Anna: according to their reviewer guidelines, this is a sound science journal: http://www.heliyon.com/guidelines/guide-for-referees/.
Two weeks from acceptance to publication obviously means that no copyediting is being done. Does this distinguish megajournals from others, or is copyediting declining everywhere?
I’m not sure this is true. Vendors are working in other timezones and with more automated editing tools. Also, I’d differentiate between proofreading, copyediting, and substantive editing. There is a range there, as well. Most vendors charge more for all three, and often require a lot of training and staff continuity to achieve high-end results. So, this is more complicated than “no copyediting.” Is it proofreading only? Proofreading and light copyediting done based on availability? Proofreading, and copyediting done by dedicated copyeditors? Proofreading, dedicated copyediting, and as-available substantive editing? Or proofreading, copyediting, and substantive editing all done by dedicated teams? Based on the APCs being charged, I’d have to guess the lower end of the spectrum. But that’s not nothing.
I do think copyediting is in a state of flux as the economic squeeze shifts workers out of full-time jobs, into job shops, and increases outsourcing to markets with lower labor costs. Whether that will be better or worse long-term remains to be seen. Copyeditors have mixed experiences — some like where these changes have taken them, some don’t. Again, a complex tableau of shifting sands. I do think that quality overall is slipping as the volume ratchets up and there’s less money in the system to sustain it. That hits at multiple levels, not just the editorial production workflow.
Let’s see: lack of copyediting, poor searchability, a lack of curation, commoditization, etc. Seems to me that what the megajournals are doing is making money, and they really don’t care if science is served or not. But then again, they were not created to serve science, but rather to do what they are doing.
They were created to serve authors. Journal publishers like to say that they have two customers, authors and readers, but in the traditional model, the readers were the ones paying the bills, so the product was (for better or worse) somewhat tailored to their needs…or, at least, what publishers thought fit their needs. For the journals described in this post, the paying customer is now the author, so those perceived needs have shifted. If an author needs to build up his or her CV, number of publications is what is important, so lack of curating or search tools may not be a concern.
I think it’s important to recognize that there are multiple markets being served here, and even in the case of hybrid journals, it’s usually skewed toward one or the other (I personally have yet to see a hybrid journal with an equitable mix of author pays and subscriber pays content). So calls for abolishing one or the other aren’t that productive. As long as there are differing needs in the journals world, there will be publishers in the market poised to fill those needs.
I do think this is a false dichotomy we’re playing with — that journals are either for readers or for authors. Ultimately, journals are about communicating scientific findings, so are, to me, by definition, for readers.
The concept of “author” is a role, not a group of people. Most people in academics aren’t usually in the “author” role. When it comes to information, the majority of scientists and scholars are in the “reader” role most of the time. The Venn diagram of authors and readers is actually a nested diagram, with authors living inside the reader community, as a subset. If there is a published author in science or academia who doesn’t read, show me. But I can easily show you readers who are not authors in academia and science.
I’m not saying authors are never readers; what I’m saying is that different products can cater primarily to different roles (to borrow your excellent term) while not excluding others. People read mega-journals, or they wouldn’t have impact factors, just as people write for more traditional journals, which we both agree are focused on the reader role. But it’s overly simplistic to say that the business of publishing journals is just about communicating scientific findings, because if that’s your only goal, there are far easier ways to go about it. Reputation and tenure are often on the line for authors, and that can be significant motivation for folks in that role. It should come as no surprise that someone figured out a business model to cater to these needs a little more directly. And there’s nothing wrong with that – there’s plenty of room under the journals umbrella for these folks too.
1) RSC Advances is also a megajournal (12,995 papers in 2015), and it is a prelude to the rise of the “sectorial” megajournal (two more are cropping up now just in chemistry), so the megajournal is far from dead; it might just be morphing into a more “targeted” beast.
2) Speed: megajournals do not scale well, at least that’s true for PLoS One, though I believe a serious study would show that others are doing a bit better. Since speed is so crucial for many researchers, there is plenty of space for another megajournal whose main characteristic is speed – in particular, speed of peer review. F1000R has not filled this space due to a variety of reasons.
Thank you. I would love the discussion to advance to the social sciences, where there is still considerable resistance to megajournals and indeed more resistance to OA publishing than in the sciences.
While Heliyon looks like it will take social science papers, and PLoS occasionally does, the ones I have found are:
– Palgrave Communications, Nature group (£750 standard APC). Charges at http://www.nature.com/openresearch/palgrave-journals/
– Cogent Social Sciences (Taylor & Francis): you can negotiate (i.e., offer to pay what you can afford; they only recommend rather than insist on a fee)
– SAGE Open is cheap at under $395 US (it recently went up from $99). https://au.sagepub.com/en-gb/oce/journal/sage-open#submission-guidelines It just got into the Emerging Sources Citation Index, the lowest tier of the Web of Science index.
– Wiley does not seem to have one?
– I can’t see social science megajournals, or interdisciplinary ones, published by Springer.
Are there others?
As always, other options are here https://simonbatterbury.wordpress.com/2015/10/25/list-of-open-access-journals/ and only a couple would meet my criteria of a free>>$500 APC.