Let’s suppose you’re an information provider who has decided to diversify revenue opportunities by creating platforms and tools to monetize data. Perhaps your organization has received a mandate from the Obama Administration to support the Open Government Initiative. Perhaps your strategic plan compels you to move away from text and pictures. Either way, you are facing a paradigm shift that brings with it any number of complex implications for your business. One of these will be that, almost overnight, you will need a new framework for peering into your own business because it has become exponentially more complex to track.
Say goodbye to the days of shipping a hardback in a paper wrapper, cashing a check, and heading to the pub. Today, when content is parsed and customized for dissemination in various formats, to numerous devices, and through disparate partner channels, business analytics, assessment, and competitive analysis are (or should be) the modus operandi.
For clarity, in this context “data” refers to a range of content types, not only to numeric data sets. In some cases, this involves the licensing of content and its associated tags, which will enable it to be more searchable and parsable so that it can be exported in customizable subsets via APIs. Or, it could involve such data sets accompanied by charts and text discussion.
The Dataverse Network Project, housed at the Institute for Quantitative Social Science at Harvard University, provides an open source structure for maintaining data and making it available for re-use via user-customized collections. The concept is to help maintain a future-proofed repository, providing tools that let people select, download, and re-organize public data sets in structured environments:
The Dataverse Network Project includes integrated developments in web application software, networking, data citation standards, and statistical methods designed to put some of the universe of data and data sharing practices on firmer ground.
Whether content is linguistic, numeric, visual, or mathematical, using a standardized mechanism to disseminate data leaves you facing some key business dimensions around what you distribute:
- Evaluative information - Mechanisms for capturing information about the impact of your content once it leaves the nest, which support standardized analysis and reporting back to business stakeholders
- Legal information - Contract language in licensing, partnership, and resale agreements that secures your right to receive consistent usage data back from partners and licensees
A growing number of SaaS companies are specializing in the creation and support of business intelligence (BI) dashboards. Companies such as RJMetrics pull data via API from NetSuite, Google Analytics, and Twitter, as well as directly from relational databases like MySQL, PostgreSQL, Oracle, or SQL Server. PivotLink offers prepackaged connectors to over 70 different systems, accessing data or pushing it to a SaaS platform via API.
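The mechanics behind such dashboards are simple in principle: query the source system, aggregate, and hand the result to a reporting layer. A minimal sketch, using Python's built-in sqlite3 in place of a production database like MySQL or PostgreSQL (the table, columns, and sample rows are hypothetical):

```python
import sqlite3

# Hypothetical source data: one row per new user account.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (user_id INTEGER, created TEXT)")
conn.executemany(
    "INSERT INTO accounts VALUES (?, ?)",
    [(1, "2009-06-01"), (2, "2009-07-15"), (3, "2009-07-20"), (4, "2009-12-05")],
)

# The kind of aggregate a BI dashboard would chart:
# new accounts per month.
rows = conn.execute(
    "SELECT substr(created, 1, 7) AS month, COUNT(*) AS new_accounts "
    "FROM accounts GROUP BY month ORDER BY month"
).fetchall()

for month, count in rows:
    print(month, count)
```

The value the SaaS vendors add is not this query but the plumbing around it: scheduled pulls, connector maintenance, and presentation.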
The Metric System, an RJMetrics-sponsored blog, uses publicly available customer data from Twitter to analyze trends. Their data-based inferences, as of January 2010, included the following:
- The monthly rate of new user accounts peaked in July 2009; it is currently around 6.2 million new accounts per month (2-3 per second), about 20% below that peak.
- A large percentage of Twitter accounts are inactive, with about 25% of accounts having no followers and about 40% of accounts having never sent a single tweet.
- About 80% of all Twitter users have tweeted fewer than ten times.
- Only about 17% of registered Twitter accounts sent a tweet in December 2009, an all-time low.
Regardless of your take on Twitter, these examples provide insight into customer behaviors that can be highly valuable to strategic business leads, marketers, and product development teams.
Measuring and assessing customer activity farther afield remains elusive, particularly in connection with content resale or licensing. In addition to crunching data emanating from customer activity at a central site or server location, information businesses will also require mechanisms for pulling bespoke usage data back to the main repository. This may include information about device-based user activity and, ideally, will draw upon standardized data APIs from channel partners and licensees.
Which brings me to the “Legal information” bullet above.
If a business is focused on having visibility into the use of its content, wherever it occurs — with the purpose of creating a feedback loop for the business — this needs to be articulated from the outset in the language of its licensing, partnership, and resale agreements. Publishers should have the foresight to include provisions in their agreements that allow them (or their specified vendor partners) to pull consistent, normalized data feeds, via standard systems or APIs, from licensees’ platforms for re-aggregation and collective analysis.
Without such mechanisms, a 360-degree view of the business becomes a 180-degree view.
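In practice, "re-aggregation" means mapping each partner's feed onto one internal schema before analysis. A minimal sketch, with the feed formats and field names invented purely for illustration:

```python
# Hypothetical usage records as two licensees might report them,
# each in its own format.
partner_a_feed = [
    {"isbn": "978-0-00-000000-2", "views": 14, "period": "2010-01"},
]
partner_b_feed = [
    {"identifier": "978-0-00-000000-2", "usage_count": 9, "month": "2010-01"},
]

# One normalizer per partner maps onto a common internal schema.
def normalize_a(record):
    return {"content_id": record["isbn"], "uses": record["views"],
            "month": record["period"]}

def normalize_b(record):
    return {"content_id": record["identifier"], "uses": record["usage_count"],
            "month": record["month"]}

# Re-aggregate into a single view of usage per title per month.
combined = {}
normalized = [normalize_a(r) for r in partner_a_feed] + \
             [normalize_b(r) for r in partner_b_feed]
for rec in normalized:
    key = (rec["content_id"], rec["month"])
    combined[key] = combined.get(key, 0) + rec["uses"]

print(combined)  # one usage total per (title, month) across both channels
```

The hard part is not the code but the contracts: without agreement language guaranteeing the feeds, there is nothing to normalize.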
As a colleague aptly summarized, “once we had readers, now we have users.” The digital publishing business is quickly becoming a content + tools = service business. This brings with it a new requirement to more completely comprehend end-user behaviors, wants, and needs.
To level set: transparency has not always been the norm in publishing. Publishers who conduct significant business through book wholesalers or subscription agents have sorely lacked information about purchasers and end users. They have had access to volume sales data, but not much else. In most cases, they have known only that their customers are “libraries” — and that’s where the customer knowledge trail ends.
Without this knowledge of who their ultimate consumers are, they have lacked insight about brand authority, market share, purchasing preferences, and the relative utility of different sub-types of content.
In the realm of e-journals, there has been more reliance on analytics because impact factor and citation/linking measurements are the backbone of scholarship and tenure processes. Common DTDs and tagging and linking standards were implemented earlier for e-journals than for e-books, and this has provided a basis for comparative analysis.
There are compelling reasons for publishers to focus on analytics:
- In the library sales market, budgets are shrinking and the competition for scarce dollars can be cutthroat. Content types compete with one another and with other university infrastructure and service investments. When developing products and making compelling use cases, business intelligence can provide a measurable advantage.
- Publishers are migrating from a content/product model to more varied and complex, data- and multimedia-centric service models, which involve diverse repackaging structures, device formats, and distribution channels. This trend will continue as publishers mine content repositories for new services and extend their reach to global audiences.
Given the volume of experimentation in the information industry, it’s prime time to ground your business operations by establishing extensible processes that allow for routine evaluation of trends in information output, input, and throughput — not indiscriminately, but closely tied to the particulars of your content and mission.
If you’re new to the service arena, it’s important to invest in understanding who your customers are (and will be in the future), what is essential versus “nice to have,” and how actual behavior aligns with stated preferences in order to establish credible use cases.
If seeking new ways to monetize your content, it’s not too soon to peer around corners and establish internal and external expectations that foster clarity and reinforce strategic capabilities despite an increase in business complexity.
Having the capacity to centralize information from diverse platforms and channels is of greater value than ever to digital information businesses — and will set the stage for the next phase of growth.
For those interested in joining a live discussion, Ann Michael and I will be conducting a session with Chris Beckett, VP of Business Development at Atypon; Mike Sweet, CEO of Credo Reference; and Marc Segers from iFactory about digital reference monetization, networked data models, device use, and the underpinnings of a 360-degree customer view at the SSP Annual Meeting, June 2-4 in San Francisco.