In the early days of my career, I worked on the marketing side of the publishing industry. My first job focused on identifying, assessing, and acquiring names and outlets for our marketing materials. The business of advertising and marketing revolves around a core but relatively simple metric: the cost per thousand (CPM). This ratio describes the amount one would have to spend to reach a thousand prospects in a given medium. The relative quality, recency, or specificity of a particular marketing vehicle would determine the cost to use the list, print an ad, or place a radio or TV spot. This wasn't the entire cost of a marketing campaign, only the channel cost; the total would also include production, distribution, processing, artwork, and various other overhead costs. Once the results were in, you could weigh them against how much you had spent to determine an ROI. Notionally, the added cost of a higher-value vehicle would be overcome by better results.
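The CPM arithmetic is simple enough to sketch in a few lines of Python. The figures below are purely illustrative, not drawn from any actual campaign:

```python
def cpm(channel_cost: float, audience_size: int) -> float:
    """Cost per thousand (CPM): what it costs to reach 1,000 prospects."""
    return channel_cost / (audience_size / 1000)

def channel_cost(cpm_rate: float, audience_size: int) -> float:
    """Invert CPM to get the channel cost of reaching a given audience."""
    return cpm_rate * audience_size / 1000

# Hypothetical example: renting a 200,000-name mailing list for $10,000
print(cpm(10_000, 200_000))       # → 50.0 (dollars per thousand names)
print(channel_cost(50, 200_000))  # → 10000.0 (dollars in channel cost)
```

Note that this captures only the channel cost; the full campaign ROI would also have to account for production, distribution, and overhead, as described above.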
Marketing books today is (or perhaps better stated, should be) very different than it was some twenty-five years ago, when I was plying that trade. The primary driver of book discovery is a recommendation from someone the reader knows. Getting from that awareness to having a book in a reader's hands increasingly involves online discovery, not traditional marketing. Searching for a book you already know about has never been easier. Even a few snippets of information, say part of the title, some of the author's name, a subject, or a cover image, may be sufficient to get you to the title you are seeking.
Finding things that you aren’t aware of, that you don’t know anything about, or that are tangentially related to what you do care about is much, much harder online. Serendipity on the internet has always been a challenge. You can’t browse the stacks of an online library, or peruse the shelves in an online store. That book shelved next to the one you are interested in doesn’t exist online in quite the same way. While systems to support unintended discovery do exist, unfortunately most publishers aren’t taking advantage of their capabilities. The key to how many of these systems work is in the metadata about the object.
Enriching metadata should be seen as the marketing investment of the digital age.
I have made this point repeatedly in presentations over the past several years. As a marketer, you don't know what you are missing when a reader doesn't discover your books. If your metadata is absent or poorly created, you can't quantify all of the lost sales when people virtually walk by because your product is invisible to them in the online systems they are navigating. Several ongoing community initiatives aim to help people improve their metadata, or even to raise awareness of metadata's value in the first place. The CrossRef initiative Metadata2020 is one example. My own organization, NISO, and the Book Industry Study Group (BISG) regularly run educational programs and other efforts to promote metadata quality and identifiers.
There hasn't been much publicly available hard data that could be used to make the case, especially to executive leadership, that companies should focus on improving metadata. Most executives, editors, and even authors are probably skeptical about investments in metadata, if they consider it at all. Metadata doesn't look as nice as a printed ad, but digital discovery is much more likely to yield tangible sales results, even if it isn't as eye-catching.
This gap in quantifiable data on metadata's impact is being filled by a research project undertaken by Firebrand Technologies and Kadaxis, which released interim results earlier this fall. The project is testing the impact of keyword-enriched metadata on online sales over time. The longitudinal study involved four publishers, Dover Publications, Kaplan, Yale University Press, and Andrews McMeel Publishing, each supplying twenty titles to the pilot. Those titles were assigned a list of automatically generated keywords, and the impact on views and sales was measured over a year's time. While the automatic generation of keywords using Kadaxis' tool was highlighted as a way to bring the effort to scale, the results are worth strong consideration across the book trade, regardless of how the metadata is enriched. The report described selecting backlist titles with modest sales activity over the previous several years and adding only new keywords to their ONIX data. Dover Publications saw the most impressive result, a roughly 20% year-over-year increase in sales from 2015 to 2016 across the twenty titles tested. Kaplan saw more mixed results, with several titles increasing in views and sales, but not every title. Yale University Press's titles saw consistent sales over the period, so it wasn't clear from the first year's worth of data whether the keywords had an impact. The report highlighted only one of the Andrews McMeel titles, which did see a spike in sales during the test period; presumably, if more than a single title had been successful, those results would have been highlighted as well.
Improving metadata and keyword use isn't a new concept. In 2014, BISG published its "Best Practices for Keywords in Metadata" to guide publishers on choosing effective keywords and improving keyword quality. But this hasn't filtered into widespread practice. In 2016, Bowker analyzed keyword usage in ONIX files from more than 150,000 publishers and found that roughly 23,000 (15.3 percent) had added keywords to at least one book. And while keyword usage had increased over the preceding ten years, with the number of titles carrying keywords in their ONIX metadata rising from about 25,000 to 114,000 in 2015, that is still remarkably low given the total number of books published each year. Even within that set, keyword quality is rather poor, with descriptive terms like "audiobook" included in more than 10% of those titles. The community obviously has a lot of work to do when it comes to enriching the underlying metadata for books.
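For readers unfamiliar with what keyword enrichment actually looks like in practice: in ONIX 3.0, keywords travel in a Subject composite using subject scheme code 20 ("Keywords" in ONIX Code List 27), typically as a single semicolon-separated list per the BISG best practices. Here is a minimal sketch using Python's standard library; the example keywords are invented for illustration and are not from any of the publishers in the study:

```python
import xml.etree.ElementTree as ET

def keyword_subject(keywords: list[str]) -> ET.Element:
    """Build an ONIX 3.0 Subject composite carrying keywords.

    Scheme code 20 ("Keywords") comes from ONIX Code List 27; the BISG
    best practices recommend one semicolon-separated keyword list.
    """
    subject = ET.Element("Subject")
    ET.SubElement(subject, "SubjectSchemeIdentifier").text = "20"
    ET.SubElement(subject, "SubjectHeadingText").text = "; ".join(keywords)
    return subject

# Invented keywords for a hypothetical backlist craft title
elem = keyword_subject(["origami", "paper crafts", "beginner projects"])
print(ET.tostring(elem, encoding="unicode"))
```

This composite would sit inside a title's DescriptiveDetail block alongside its BISAC or Thema subject codes; the point of the study discussed below is that adding well-chosen terms here, and nothing else, was enough to move views and sales for some titles.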
Overall, the results of this particular study showed a modestly positive impact from investments in improving keywords, with some standouts. This is certainly positive, and when margins are already tight on monographs, especially scholarly monographs, every little bit helps. The results are not a slam dunk if one thinks of the investment from a traditional marketing perspective. However, I would argue, as the report's authors do, that investments in metadata are not like investments in traditional marketing. Enriched metadata lasts in a way that traditional marketing simply does not: it is a marketing investment that will pay dividends for as long as the product is available, since the product will remain discoverable long after its initial release. Yes, results will vary by product, category, or community. It also makes sense that early movers will gain more of an advantage than later adopters, as the overall quality of metadata improves across the marketplace.
Another interesting angle on this approach to marketing is how the traditional success metrics get turned on their head. You aren't measuring, or even paying for, the number of eyeballs you are potentially putting your product in front of. Your investment is in the thing itself and in that object's potential to be discovered. How do you measure that visibility potential? That makes the investment a bit harder to pitch conceptually than paying to reach potential customers in the traditional sense. In the end, though, you can measure results, comparing sales before and after the metadata was enriched. Improving metadata may never be an easy sell, but reports such as this one by Firebrand and Kadaxis make the case more strongly that investing in metadata should be a component of a publisher's marketing strategy, not simply a production output.