A study was recently published in PLOS ONE entitled, “Differences in the Volume of Pharmaceutical Advertisements between Print General Medical Journals.” It comes from a group of mainly primary care physicians in Canada, with a couple of contributors hailing from the US and the UK.
Their conclusions are that advertising in medical journals saves subscribers a little bit of money; that Canadian journals carry more ad pages than US or UK journals; and that given the small amount of savings generated by advertising, subscribers would probably not feel much pain if advertising were eliminated, while benefiting from a better reading environment, uncluttered by commercial messages. The media has already taken the message and run with it, with little to no scrutiny.
Unfortunately, the samples, models, and assumptions in the analysis are deeply flawed, leading the authors to incorrect conclusions.
The authors sampled library copies, mostly Canadian editions, of Canadian, US, and UK general medical journals.
Sampling from library copies is a major limitation, as library copies generally carry fewer ads than other segments of the print run for many of these large-circulation journals.
Most general medical journals publish enough copies to have demographic splits (based on regions and audience characteristics). These are portions of the overall print run in which ads targeted at particular specialists in particular countries are published and sold at a premium. The “US oncology” split for a large general medical journal can be quite a bit thicker (and more lucrative) than the base copy sent to libraries, Canadian subscribers, or anyone else, for that matter.
Because the authors relied on library copies (with some sampled in their native lands, but most in Canada), they never encountered the cardiology, oncology, neurology, or infectious disease demo splits some of these journals have used for years, making their estimates of advertising pages far too low and very unreliable over time.
I compared their estimates to actual figures readily available to anyone willing to buy them, and it took me about half a minute to see that the two major US multispecialty journals in 2011 generated three times as much advertising revenue as the authors estimated. Their figures captured only a third of the real total. If only the authors had spent some money to get actual data instead of going through this elaborate and erroneous estimation process . . .
So why would Canadian journals seem to have more ads? Because most Canadian journals don’t have large enough circulations or large enough specialty audiences to justify special demographic ad splits. This means that most ads in Canadian journals run in all versions, so Canadian journals in Canadian libraries look like they have comparatively more ads than contemporaneous library copies of US and UK journals.
For their historical analysis (1970-2012), all the journal editions were Canadian editions, a much less realistic set from which to draw conclusions, especially given the long timeframe.
In this historical analysis, the authors observed some steep drops in advertising pages, which they attributed to an across-the-board decline of advertising in journals. For instance, they claim to have detected a steady advertising decline in one journal starting in 1990, but I know for a fact that nothing of the sort happened. Instead, I believe what they were seeing was the introduction of the demographic ad splits described above, which, as they succeeded, slowly pulled ads out of the base journal and redistributed them into the splits. Print advertising was actually booming for much of this period, but the authors did not put themselves in a position to see it. They may also be misinterpreting better management of the Canadian circulation and stricter enforcement of national rules around advertising (which would lead to fewer irrelevant ads making their way into Canada and Canadian libraries) as advertising declines.
Another problem with the study was that the authors counted “house ads” as journal editorial content, not as advertising. Their rationale appears to be that the journal received no revenue for these ads, so house ads shouldn’t count in the overall calculations. But these ads could have been simply excluded from the tally rather than added to the denominator. Doing it as they did inflated the editorial page denominator and generated an artificially low percentage of advertising across the board. I would have counted them as advertising. After all, publishers run these ads both to balance out paid pages (to make even page counts) and to sell ancillary products or additional subscriptions. They are not editorial content.
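To see how this misallocation skews the metric, here is a small sketch with hypothetical page counts (the numbers are illustrative, not taken from the study):

```python
# Hypothetical page counts for a single issue (illustrative only).
paid_ads = 20    # pages of paid pharmaceutical advertising
house_ads = 5    # unpaid ads for the publisher's own products
editorial = 75   # genuine editorial pages

# The study's approach: house ads are folded into the editorial denominator.
study_ratio = paid_ads / (paid_ads + house_ads + editorial)

# Simply excluding house ads from the tally instead.
excluded_ratio = paid_ads / (paid_ads + editorial)

print(f"study's approach:   {study_ratio:.1%}")    # 20.0%
print(f"house ads excluded: {excluded_ratio:.1%}") # 21.1%
```

Every house-ad page moved into the denominator nudges the reported advertising percentage downward, so the bias runs in one direction only.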
So the authors have at this point severely undercounted pages for non-Canadian journals and created a larger editorial denominator by misallocating house ads. Now we get to yet another mistake — assuming the pages they’ve counted can be used to accurately calculate related revenues.
The authors tried to get ad revenue information from the journals directly, but only the CMAJ provided it. As noted above, they could have just purchased reports of these revenues. But they didn’t know this, so all other ad revenue figures were badly estimated, and their calculations of ad revenues for non-Canadian journals are many times too low.
However, unaware of their own poor sampling techniques and mistakes in calculating pages, the authors jumped to the opposite conclusion, writing:
Our estimate of advertisement revenue was higher than the reported revenue for the one journal that provided us information about journal revenue (CMAJ). . . . The discrepancy between the estimated and reported advertising revenue for the CMAJ suggests that the estimates for other journals may be higher than the actual value by as much as 40% and thus the effect of replacing revenue from advertisements with increases in subscription costs on individual subscriptions may be overestimated.
Ad pages and revenues do not have a 1:1 correlation, so even counting the pages comprehensively won’t let you directly derive accurate revenue estimates. Savvy publishers respond to declines in ad pages by increasing prices and offering other creative options. For instance, between 2011 and 2012 in the multispecialty journals, there was a 39.6% decline in pages but only a 32.6% decline in revenues. The two measures have a decent relationship, but publishers do what they can to shore up revenues when ad pages decline. The authors’ 1:1 assumption therefore exaggerated the decline in ad revenues by roughly 20%. Added to their counting errors, it’s no wonder their revenue estimates for non-Canadian journals deviate wildly from reality.
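The roughly 20% figure follows directly from the two decline rates quoted above; a quick check:

```python
# Declines between 2011 and 2012 for the multispecialty journals,
# as quoted in the text above.
page_decline = 0.396     # 39.6% fewer ad pages
revenue_decline = 0.326  # 32.6% less ad revenue

# A 1:1 pages-to-revenue assumption predicts a 39.6% revenue decline;
# the actual decline was 32.6%. The relative overstatement:
exaggeration = (page_decline - revenue_decline) / revenue_decline
print(f"decline exaggerated by {exaggeration:.1%}")  # about 21.5%
```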
When calculating the contribution of advertising to the value derived by readers through cost-savings, the authors failed to account for controlled circulation copies, the free copies delivered to physicians at no cost as part of an advertising revenue strategy. Many thousands of physicians, in the US especially, receive some of these journals at no cost. For them, the advertising offset to the individual subscription price is 100%. So, instead of saving a few cents per issue, many thousands of physicians save more than $100 per year because of advertising. Because the authors treated the entire subscription file as a homogeneous group, these major variations within the file, and these major beneficiaries of ad support, were never counted.
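A back-of-the-envelope comparison makes the gap plain. The subscription price and per-issue savings below are hypothetical figures chosen for illustration, not numbers from the study:

```python
# Hypothetical figures (illustrative only).
annual_price = 160.00        # assumed individual print subscription, USD
issues_per_year = 52
savings_per_issue = 0.10     # assumed ad offset of ~10 cents per issue

# A paid subscriber, under the study's model: advertising shaves
# a few cents off each issue's cost.
paid_savings = savings_per_issue * issues_per_year

# A controlled-circulation recipient: advertising covers the
# entire subscription, a 100% offset.
controlled_savings = annual_price

print(f"paid subscriber saves:      ${paid_savings:.2f}/year")
print(f"controlled recipient saves: ${controlled_savings:.2f}/year")
```

Under these assumptions the paid subscriber saves a few dollars a year while the controlled-circulation recipient saves the full subscription price, a difference the study's homogeneous-file model cannot see.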
There are other problems — their subscription assessments come from simplistic assumptions about the pricing and sale of subscriptions; their understanding about how member subscriptions are funded is incomplete; the authors are clearly uninformed about how site licenses are sold; and they make an assumption that library budgets could or would be increased if publishers eschewed advertising dollars and made up for the lost revenues via libraries.
There are presentation problems in the paper. Table 1 is incomplete and contains vague information and simple errors. Figure 2 doesn’t match the data presented in the text that references it, so you can’t square what the authors write with the figure. The authors state their advertising revenue estimates as point data, without reflecting the tremendous shifts that advertising revenues can experience from year to year (for instance, in 2012, the last year of their study, advertising in many of these journals dropped precipitously from 2011 levels, posting declines of 35-50%; this is not reflected in their data or estimates).
In short, this study has a questionable premise and fatal flaws, and shouldn’t be quoted as a reliable source of information about print advertising, associated revenues, or business model options for journals of any kind. From a PLOS ONE peer-review perspective, I have to say that this analysis isn’t even “methodologically sound.”
This paper is also yet another example of the “armchair quarterbacking” publishers continue to see all around them — academics and others who believe that because they’re smart, they can quickly comprehend and judge the businesses and functions of journals and their publishers. Again and again, these publishing businesses turn out to be more complex, sophisticated, well-managed, and interdependent than these academics ever imagined.