Cover of the first issue of Canadian Medical Association Journal. (Photo credit: Wikipedia)

A study was recently published in PLOS ONE entitled, “Differences in the Volume of Pharmaceutical Advertisements between Print General Medical Journals.” It comes from a group of mainly primary care physicians in Canada, with a couple of contributors hailing from the US and the UK.

Their conclusions are that advertising in medical journals saves subscribers a little bit of money; that Canadian journals carry more ad pages than US or UK journals; and that given the small amount of savings generated by advertising, subscribers would probably not feel much pain if advertising were eliminated, while benefiting from a better reading environment, uncluttered by commercial messages. The media has already taken the message and run with it, with little to no scrutiny.

Unfortunately, the samples, models, and assumptions in the analysis are deeply flawed, leading the authors to incorrect conclusions.

The authors sampled library copies, mostly Canadian editions, of Canadian, US, and UK general medical journals.

Sampling from library copies is a major limitation, as library copies generally carry fewer ads than other segments of the print run for many of these large-circulation journals.

Most general medical journals publish enough copies to have demographic splits (based on regions and audience characteristics). These are portions of the overall print run in which ads targeted at particular specialists in particular countries are published and sold at a premium. The “US oncology” split for a large general medical journal can be quite a bit thicker (and more lucrative) than the base copy sent to libraries, Canadian subscribers, or anyone else, for that matter.

Because the authors relied on library copies (with some sampled in their native lands, but most in Canada), they never encountered the cardiology, oncology, neurology, or infectious disease demo splits some of these journals have used for years, making their estimates of advertising pages far too low and very unreliable over time.

I compared their estimates to actual data readily available to anyone willing to buy it, and it took me about half a minute to see that the two major US multispecialty journals in 2011 generated three times as much advertising revenue as the authors estimated. That’s a 300% error. If only the authors had spent some money to get actual data instead of going through this elaborate and erroneous estimation process . . .

So why would Canadian journals seem to have more ads? Because most Canadian journals don’t have large enough circulations or large enough specialty audiences to justify special demographic ad splits. This means that most ads in Canadian journals run in all versions, so Canadian journals in Canadian libraries look like they have comparatively more ads than contemporaneous library copies of US and UK journals.

For their historical analysis (1970-2012), all of the journal editions were Canadian editions, a much less representative set from which to draw conclusions, especially given the long timeframe.

In this historical analysis, the authors observed some steep drops in advertising pages. They attributed this decline in ad pages to an overall decline of advertising in journals across the board. For instance, they claim to have detected a steady advertising decline in one journal starting in 1990, but I know for a fact that nothing of the sort happened. Instead, I believe what they were seeing was the introduction of the ability to print the kind of ad splits I mentioned above, which, as these succeeded, slowly took ads out of the base journal and redistributed them into the demographic splits. Print advertising was actually booming for much of this period, but the authors did not put themselves in a position to see it. They may also be misinterpreting better management of the Canadian circulation and stricter enforcement of national rules around advertising — which would lead to fewer irrelevant ads making their way into Canada and Canadian libraries — as advertising declines.

Another problem with the study is that the authors counted “house ads” as journal editorial content, not as advertising. I think their rationale is that the journal received no revenue for these ads, so house ads shouldn’t count in their overall calculations. But these ads could simply have been eliminated from the tally, not added to the denominator. Doing it as they did inflated the editorial page denominator unnecessarily and generated an artificially lower percentage of advertising across the board. I would have counted them as advertising. After all, publishers run these ads both to balance out paid pages (to make even page counts) and to sell ancillary products or additional subscriptions. They are not editorial content.
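To make the denominator effect concrete, here is a toy calculation; the page counts are hypothetical, not figures from the study:

```python
# Toy illustration of how the classification of "house ads" changes the
# measured advertising percentage. All page counts are made up.

paid_ads = 20   # pages of paid pharmaceutical ads
house_ads = 5   # pages of unpaid publisher "house ads"
editorial = 75  # pages of true editorial content

# The study's approach: house ads counted as editorial content.
as_editorial = paid_ads / (paid_ads + house_ads + editorial)

# A neutral approach: house ads simply excluded from the tally.
excluded = paid_ads / (paid_ads + editorial)

# My preference: house ads counted as advertising.
as_advertising = (paid_ads + house_ads) / (paid_ads + house_ads + editorial)

print(f"house ads as editorial:   {as_editorial:.1%}")    # 20.0%
print(f"house ads excluded:       {excluded:.1%}")        # 21.1%
print(f"house ads as advertising: {as_advertising:.1%}")  # 25.0%
```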

So the authors have at this point severely undercounted pages for non-Canadian journals and created a larger editorial denominator by misallocating house ads. Now we get to yet another mistake — assuming the pages they’ve counted can be used to accurately calculate related revenues.

The authors tried to get ad revenue information from the journals directly, but only the CMAJ provided it. As noted above, they could have simply purchased reports of these revenues. But they didn’t know this, so all the other ad revenue figures were badly estimated, and their calculations of ad revenues for non-Canadian journals are many times too low.

However, unaware of their own poor sampling techniques and mistakes in calculating pages, the authors jumped to the opposite conclusion, writing:

Our estimate of advertisement revenue was higher than the reported revenue for the one journal that provided us information about journal revenue (CMAJ). . . . The discrepancy between the estimated and reported advertising revenue for the CMAJ suggests that the estimates for other journals may be higher than the actual value by as much as 40% and thus the effect of replacing revenue from advertisements with increases in subscription costs on individual subscriptions may be overestimated.

Ad pages and revenues do not have a 1:1 correlation, so even counting the pages comprehensively won’t let you directly derive accurate revenue estimates. Savvy publishers respond to declines in ad pages by increasing prices and offering other creative options. For instance, between 2011 and 2012 in the multispecialty journals, there was a 39.6% decline in pages but only a 32.6% decline in revenues. The two measures have a decent relationship, but publishers do what they can to increase revenues when ad pages decline. The authors’ 1:1 assumption created an approximately 20% exaggeration in the decline of ad revenues. Added to their undercounting errors, it’s no wonder their revenue estimates for non-Canadian journals deviate wildly from reality.
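For what it’s worth, the arithmetic behind that roughly 20% figure is straightforward. A quick sketch using the percentages just cited:

```python
# Rough check of the "1:1 assumption" exaggeration, using the
# 2011-2012 multispecialty-journal figures cited above.

pages_decline = 0.396    # 39.6% decline in ad pages
revenue_decline = 0.326  # 32.6% decline in ad revenues

# A 1:1 pages-to-revenue assumption predicts revenues fell as much as
# pages did; the overstatement relative to the actual revenue decline:
exaggeration = pages_decline / revenue_decline - 1
print(f"{exaggeration:.1%}")  # 21.5%, i.e., roughly a 20% exaggeration
```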

When calculating the contribution of advertising to the value readers derive through cost savings, the authors failed to account for controlled circulation copies: copies delivered to physicians at no cost as part of an advertising revenue strategy. Many thousands of physicians, especially in the US, receive some of these journals free. For them, the advertising offset to the individual subscription price is 100%. So, instead of saving a few cents per issue, many thousands of physicians save more than $100 per year because of advertising. Because the authors treated the entire subscription file as a homogeneous group, these major variations within the file, and these major beneficiaries of ad support, were not counted.
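A back-of-the-envelope comparison makes the point. Every number below is invented for illustration, except the “few cents per issue” characterization, which comes from the discussion above:

```python
# Hypothetical sketch (all figures invented) of the two classes of
# reader the study's homogeneous model lumps together.

list_price = 150.0       # assumed annual individual subscription price
issues_per_year = 50     # assumed weekly publication schedule
offset_per_issue = 0.05  # "a few cents per issue," per the framing above

# A paying subscriber, under the study's averaged model:
paid_saving = offset_per_issue * issues_per_year
print(f"paying subscriber saves:    ${paid_saving:.2f}/year")  # $2.50

# A controlled-circulation recipient: ads offset 100% of the price.
print(f"controlled recipient saves: ${list_price:.2f}/year")   # $150.00
```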

There are other problems — their subscription assessments come from simplistic assumptions about the pricing and sale of subscriptions; their understanding about how member subscriptions are funded is incomplete; the authors are clearly uninformed about how site licenses are sold; and they make an assumption that library budgets could or would be increased if publishers eschewed advertising dollars and made up for the lost revenues via libraries.

There are also presentation problems in the paper. Table 1 is incomplete and contains vague information and simple errors. Figure 2 doesn’t line up with the data presented in the text that points to it, so you can’t square what the authors write with the figure. The authors state their advertising revenue estimates as point data and do not reflect the tremendous shifts advertising revenues can experience from year to year (for instance, in 2012, the last year of their study, advertising in many of these journals dropped precipitously from 2011 levels, posting declines of 35-50%; this is not reflected in their data or estimates).

In short, this study has a questionable premise and fatal flaws, and shouldn’t be quoted as a reliable source of information about print advertising, associated revenues, or business model options for journals of any kind. From a PLOS ONE peer-review perspective, I have to say that this analysis isn’t even “methodologically sound.”

This paper is also yet another example of the “armchair quarterbacking” publishers continue to see all around them — academics and others who believe that because they’re smart, they can quickly comprehend and judge the businesses and functions of journals and their publishers. Again and again, these publishing businesses turn out to be more complex, sophisticated, well-managed, and interdependent than these academics ever imagined.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

24 Thoughts on "Not As Advertised — Why an Academic Analysis of Medical Journal Advertising Is Fatally Flawed"

I would not say their study is “fatally flawed,” just that it is too crude to justify their overly strong conclusions. It reminds me of psychology studies using local students that include claims of universality. If these folks had been content to simply report their findings, it might have been an interesting piece of work, albeit limited in the ways you note. Every study is limited. Instead, it sounds like they had an axe to grind, and so they sank to advocacy. Holding an axe makes you sink faster.

Sorry, but to me a study that has poor design leading to shoddy methodology leading to mismeasures leading to wrong conclusions is fatally flawed. You can’t get anything else wrong, really.

Well, you have to admit that the spelling and grammar were generally fine.

Sorry, but I think you are completely wrong. Your primary complaint is that they used a convenience sample, Canadian library copies, instead of a more representative sample. My point is that this is a standard practice, not a bad design as you claim. Your points would be useful in designing more refined studies, but they are certainly not fatal flaws. Note too that “fatal flaw” is standard polemical rhetoric, not scientific language.

You are right, and this points to the dangers of using convenience samples. The fatal flaw, however, in my view is that the academic editor and reviewers appear not to have had the basic subject knowledge to properly review the article, missing how biased a sample it was and failing to reject the paper as should have been done.

As David Crotty correctly notes below, it is a real challenge to make sure the editor and reviewers have all the competencies needed to adequately review a paper, given the very broad scope of a megajournal like PLOS ONE. Hopefully the PLOS editorial staff will learn from their mistake. At the same time, you can find all sorts of other breakdowns of the peer-review system in more narrowly focused journals.

I would not have rejected the paper, merely told the authors to tone down the universality of their conclusions. Convenience samples are essential to science because the alternative is expensive, sometimes impossible. If you get an interesting result, it raises the important question of how the general population behaves. That is just what has happened here, so by that standard this is an important study, not a flawed one. The criticism seems like a case of the perfect being the enemy of the good.

I take it no one questions that their results are valid for Canadian library copies of journals. One then wonders for what other populations they are valid. I, for one, had no idea that journal advertising worked this way and am glad to learn it. The libraries may be as well.

When one uses a convenience sample, one must either limit one’s conclusions strictly to that sample or do proper controls and provide evidence that it represents the greater population. I see no evidence of either in this paper.

But you’re only addressing one of the flaws in the paper. It has already been demonstrated (see Kent’s link to a comment left by BMJ) that much of the data used is significantly incorrect. It is also clear that uninformed assumptions were made (that advertisers pay exactly what is on the rate card, as but one example) and unsupported conclusions were drawn (that funds are available from researchers and libraries to replace advertising revenue).

Do you seriously contend that there are valuable conclusions to be drawn from this study or are you just taking a contrary position for the sake of argument?

As one who does the science of scientific communication, I think they have stumbled onto an interesting new field, namely the demographics of journal advertising. I have said from the beginning that their conclusions are overreaching, but the work looks important. You might consider my issue tree model of science in this context. Important work raises important questions.

I also now know why you regard my unbuilt model of the adverse impact of OA as flawed. You regard crudeness as a flaw. That is not how science works. It often starts with crude results.

This is not, by any stretch of the imagination, a new field. Publishers have intently studied the marketing that surrounds their journals for decades if not centuries. There are academic research programs that look at marketing, researchers who specifically study the medical marketing efforts covered in this paper, journals specifically dedicated to this field (http://mmj.sagepub.com/ or https://www.emeraldinsight.com/products/journals/journals.htm?id=IJPHM) and books written on the subject (http://www.amazon.com/s/ref=nb_sb_ss_c_0_17?url=search-alias%3Dstripbooks&field-keywords=medical%20marketing&sprefix=medical+marketing%2Caps%2C461). Because something is new to you does not mean it has suddenly come into existence.

And while science often starts with crude results, those results should at least reflect reality, and it’s probably not wise to start drawing elaborate conclusions on them until they are further refined.

Just to be clear, this was not a convenience sample. For physicians, a convenience sample would be more like, “The journals I got in the mail in a month” or “The journals lying around in my waiting room.” These authors attempted to adjust for confounders, went to libraries not common to all of them, etc. It was not a convenience or accidental sample. It was purposely designed, and badly designed.

I am not arguing for the perfect, merely the competent. The data they were after are available in a validated, third-party form on the open market. They could probably have acquired them for less than the PLOS ONE APC cost them, and their study would have been much more valid (and would probably have reached a very different conclusion). There was no need to estimate any of these data; they can be directly acquired. (The fact that they attempted to get direct data by emailing the journals also speaks to this not being a convenience sample.)

A decent study would have used actual data and expert peer-reviewers, at the least.

You would not have rejected it, yet you also admit that you had no idea journal advertising worked this way. I’ll bet that’s precisely how it got through peer review. (Merely reading this paper does not mean you know how journal advertising works, either.) PLOS ONE has a real problem: boundless scope inbound, and limited scope in its reviewer pool. Matching papers like this with available expert reviewers is probably not possible for them.

This is a shoddy study with a poor design and mistaken conclusions.

I think this paper speaks to the challenge of the megajournal publishing model. If your megajournal aims to be a bucket to collect everything, you’re going to have to deal with an extraordinarily broad range of subjects. As these spiral out into more and more niche areas, it’s harder and harder to bring in the editorial expertise needed to give papers a fair review.

PLOS ONE’s scope is described as follows: “PLOS ONE features reports of original research from all disciplines within science and medicine.”

But this paper would seem more a piece of business research, looking into publishing and advertising business models. Is there a point where a paper would be considered out of scope for PLOS ONE?

Also, it’s perhaps worth noting that PLOS has a policy disallowing the types of ads discussed in this paper from PLOS Medicine: “PLOS Medicine does not accept advertising for pharmaceutical products, medical devices or tobacco products.” One then has to look at this paper with questions of confirmation bias (https://en.wikipedia.org/wiki/Confirmation_bias) in mind. A good editor will recognize this and subject the article to an even higher level of scrutiny than usual, just to rule it out. Judging from what’s reported here, though, that may not have been the case.

Kent,

Suggest you put a “humour” metatag on this one.

“Clinicians may prefer to avoid being exposed to pharmaceutical advertisements that are of questionable value by paying more for the edited content of general medical journals.”

Really? Really? I can see publishers updating their websites now:

NEW! Subscription Option 14: Receive an “advertising-free” print subscription at a multi-digit % surcharge over the advertising version.

The authors are primarily from research institutes – perhaps interviewing experts from agencies and publishing companies would have brought some balance to this hatchet job. Advertising “rate cards” are often only guides, and many off-card deals are done – particularly with combo packages spanning digital and print.

Missing from the study is any mention of the decline in print advertising in the USA due to the advent of direct-to-consumer (DTC) TV advertising (something not permitted in Canada), and of the enormous drain on marketing budgets from producing TV spots rather than print ads.

Looks like the Canadian journals intersperse ads throughout editorial content and stuff PIs (prescribing information) at the back – from my perspective, that’s just not good publishing practice for our physician audience – but to ask docs to “pay more” is really quite hysterical.

Thanks for the laugh this morning.

For the past five years I have worked closely with a number of major medical publishers, and while there was a sudden, major drop in advertising, it remains a significant revenue source. For some publishers, advertising revenue was actually greater than subscription revenue. The drop in advertising revenue has been made up by site license revenue. This study gives a very incorrect view of advertising.

I laughed out loud when I heard about this paper. Without even seeing the methods, it’s obvious they were undercounting ad pages. With the caveat that I have not read it, nor do I intend to, I’d like to ask the authors whether they subscribe to these journals.

If a conclusion is that journals could just raise subscription prices to make up for the advertising dollars, would these [mostly] gentlemen actually be willing to put their wallets where their words are?

So, will we hear anything from the leaders of PLOS ONE about how such a shoddy paper could have gotten through its peer-review process? This doesn’t leave one with a great deal of confidence that PLOS ONE’s review procedures weed anything out!

As CEO of BMJ, may I add: I’ve no doubt the authors did try to contact us for data as they said, but clearly they didn’t ask the right person – I’m not suggesting that’s their fault. We’re happy to share, q.e.d.: transparency is one of our values – and thanks for the link.
Over and above the fact that this paper is a case of ‘garbage in, garbage out’, what also struck me was its failure to acknowledge the role advertising might play beyond underwriting journals.
We would champion the right of legitimate businesses offering goods or services to healthcare professionals to advertise the same.
Our policy is that we will carry advertising for any product or service, except tobacco companies, providing those ads comply with regulatory standards and are – as the UK Advertising Standards Authority test puts it – ‘legal, decent, honest and truthful’. (Sad to say a number of ads we are asked to carry – including from pharma clients – do not meet these standards).
We would not prevent – for example – the National Health Service from advertising a salaried medical vacancy to our readers. We would not prevent a medical defence union advertising its services. We would not prevent (not, sadly, that it is much of an issue) Mercedes advertising its vehicles to our readers. I do not see, logically, why we should impose a ban on an entire category of advertiser, which actually is of high interest and use to our readers. That is, unless your logic precludes advertising as an activity per se – in which case perhaps, like the Beach Boys, you ‘just wasn’t made for these times’.

It does seem oddly out of step in an era where library and research budgets are strained to the breaking point. Anything ethical that we can do to reduce the financial burden on libraries and researchers should be welcome. This may be the first time I’ve ever seen anyone advocating for higher journal prices.

A side comment: if advertising revenue is thrice as much as estimated by the authors, that’s a 200% (or ~67%) mistake, not a 300% one. Just think about what a 100% mistake would be…
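To spell out the commenter’s arithmetic, a minimal check (illustrative only):

```python
# If actual revenue is three times the estimate, the size of the
# mistake depends on the baseline you measure against.
estimate = 1.0
actual = 3.0 * estimate  # "thrice as much as estimated"

# Relative to the estimate, the estimate was off by 200%:
print(f"{(actual - estimate) / estimate:.0%}")  # 200%

# Relative to the actual value, the estimate undershot by about 67%:
print(f"{(actual - estimate) / actual:.0%}")    # 67%

# A 100% mistake (relative to the estimate) would mean actual revenue
# was only twice the estimate, not three times.
```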
