It’s becoming increasingly clear that two new and major phenomena now define the modern information age — the ubiquity and utility of software, and what Brian Solis of the Altimeter Group has dubbed “the audience of audiences.”
Publishing’s shift from the world of physical goods to the world of software has followed a staircase up from production systems to consumer systems, climbing step-by-step through technology platforms that were at first expensive (and therefore required the capital investments of publishers to flourish) and eventually became inexpensive and commonly accessible (e.g., you can buy a wireless “books as software” reading device for less than $200). First came digital typesetting, then digital layout and design, and now we have full digital workflows: authors submitting manuscripts electronically, peer-review systems built entirely on software, and publishing systems that rely on software and are mostly realized through it (Web browsers, iPads, and the like). The time most books and journals spend in physical form prior to being printed is now measured in minutes, rather than days or even hours.
This journey toward a world dominated by software was captured wonderfully in a recent essay by Marc Andreessen in the Wall Street Journal. Andreessen, co-developer of Mosaic (the first widely adopted graphical Web browser) and co-founder of Netscape, to go back to the roots of a career that has sprouted many more software and business branches over the years, spends a little time talking about the future and “Why Software Is Eating the World”:
Six decades into the computer revolution, four decades since the invention of the microprocessor, and two decades into the rise of the modern Internet, all of the technology required to transform industries through software finally works and can be widely delivered at global scale.
But the bulk of Andreessen’s essay reflects how software has already consumed many businesses: Blockbuster (eaten by a software company called Netflix), Borders (eaten by a software company called Amazon), and Disney (afraid of being eaten by a software company called Pixar). And entire industries are being changed by software players: LinkedIn is changing recruitment; iTunes, Pandora, and Spotify are changing music; Google has changed advertising and marketing; Craigslist and eBay changed sales; Expedia and others changed travel; and PayPal has changed commerce. Even areas of life we might view as sheltered from these effects are being transformed, Andreessen notes:
Even national defense is increasingly software-based. The modern combat soldier is embedded in a web of software that provides intelligence, communications, logistics and weapons guidance. Software-powered drones launch airstrikes without putting human pilots at risk. Intelligence agencies do large-scale data mining with software to uncover and track potential terrorist plots.
Software used across vast networks unleashes certain potentialities, many of which are now daily realities (Facebook, Twitter, Google, and so on). Yet it still feels a little surprising to find the Chief of Naval Operations, Admiral Gary Roughead, delivering a speech earlier this summer that anyone in the information business should pay attention to.
It’s been known for years that the military is one of the earliest and best adopters of digital information systems. After all, DARPA developed the precursor to the Internet, so one would assume a first-mover effect. Military blogging, data visualization, distributed software systems, software security, and cyberwar have all been tackled by the military. Yet such a disciplined, macho culture seems an unlikely place for these things to flourish, much less for true visionaries to emerge at the highest ranks.
Roughead’s talk provides a crucial framing perspective for thinking about the social and audience implications of a software-connected (aka, networked) world, epitomized by his quotation of Solis above to introduce the concept of the “audience of audiences” — that is, we are no longer publishing to isolated individuals but to connected individuals, most of whom have audiences of their own. Even in organizations as hierarchical and authority-driven as the military, this has to be acknowledged:
I submit to you that in today’s media environment, as leaders – whether we recognize it or not – we are no longer simply leading a workforce of employees or, in my case, Sailors. We are leading a workforce of communicators.
Roughead talks at length about the cost-benefit discussions the Navy engaged in about whether to adopt social media or to create a very high-walled garden for the Navy. The decision point Roughead and his fellow leaders reached will sound very familiar to anyone who has had long, probing discussions along these lines:
We made the decision to engage in social media, as many of your own organizations did, when we recognized that whether we participated or not, there was going to be an ongoing conversation about the Navy and we were not content to be absent from that conversation. . . . Leaders have to lead by example and be part of engaging a wide array of audiences, and they must approach it with eagerness – not defensiveness or trepidation. The key to success as a leader is to recognize that there is an opportunity – indeed an obligation – to listen to your people, to add another dimension to your awareness.
Taking these risks has paid off, Roughead asserts, relating a compelling story about how he was able to monitor a flood in Tennessee that affected a core Navy facility:
The power of this expanded ability to listen to my Sailors and their families first struck me here last year on our own soil, when I saw it firsthand following a massive flood that we had at our personnel headquarters in Millington, Tennessee, where all of our “Human Resources” activity takes place. All of our personnel systems were down – they were hard down – and the Sailors and families who lived on the base had to evacuate in the span of about two hours because of the rate of the flood rise. I was getting very good information from the official reporting channels that we had perfected over the years from the chain of command. But it was on the weekend. I was at home. Simply sitting at my desk, in my office at home, I went on to the command’s Facebook page in Millington, Tenn. I can tell you, for me that was the “A-ha” moment. It was as if I was there. I could see in near real-time the concerns of the people who were affected by the flood and how the command was helping. A family member would post a comment that they had left behind important medication, and shortly thereafter I’d see a doctor come up on the page and tell them, “Meet me in the gymnasium in 20 minutes, and I’ll have what you need.” People were asking questions, they were getting them answered, and you could see their anxiety ratchet down as this conversation was taking place.
Yet it seems as if major sections of scholarly publishing and the academic intellectual world are immune to these powerful forces, more so than any other major industry. We’ve written here about how culture trumps technology, at least as far as a five-year study of academic life showed in 2010. But is it implacably true that culture trumps technology? Is what exists now truly our ultimate state? Or is it a sign of passive, academic acceptance of what “survey says” and the status quo? After all, if the military can be transformed, and if similar technologies are percolating under the surface of academia, are you willing to argue that culture can also trump reality? Or that ingrained and diffuse culture is more resistant to change than concentrated hierarchy?
We even have trouble measuring things of value, and cling to outdated and outmoded systems — or worse, embed them even more deeply into how we execute our culture. Earlier this year, I wrote about an article Malcolm Gladwell published in the New Yorker about college ranking systems, and extended his analysis to include the impact factor, writing:
. . . this is all another reminder of how odd it is that highly educated and educationally ambitious people seem to seek clarity through numbers that, when you pull back the veil, are very poor proxies of quality, predictors of value, or estimates of differentiation.
Andreessen touches on this point in his essay when he mentions how relatively undervalued software businesses are in the stock market, largely because the mindset of the old guard is still not in sync with reality:
Today’s stock market actually hates technology, as shown by all-time low price/earnings ratios for major public technology companies. Apple, for example, has a P/E ratio of around 15.2—about the same as the broader stock market, despite Apple’s immense profitability and dominant market position . . . too much of the debate is still around financial valuation, as opposed to the underlying intrinsic value of the best of Silicon Valley’s new companies.
You could claim that the culture of Wall Street trumps the value of Apple, but is that going to lead you to make a wise investment? The same could be said for scholarly publishing and academia. One way to frame the answer is that culture trumps technology. Another is that established institutions (incumbents) are deaf to real change, simplistic to the point of innumeracy, and change too late to take advantage of emerging preferences when they’re still fresh. After all, as Roughead notes at the close of his wise and compelling talk, it’s people who are the agents of all the things we care about, and when they are led toward a better way of operating, a better way of accomplishing their work, a more effective way of sharing information . . . well, I’ll turn it over to him:
Many of our organizations have focused on leaders as communicators. Now, we have the chance to be leaders of communicators. If we recognize the opportunities inherent in this reality, we will be more effective as leaders . . . our organization will more skillfully inform . . . and our people will be the key to our communication success, just as they are the key to our success in all things.
Discussion
28 Thoughts on "Software and the Audience of Audiences — Is Academic Passivity Inhibiting Cultural Change?"
Kent, this is all very interesting, but it is hard to see a point here. It sounds like you are frustrated because some unspecified vision of yours is not being realized. I am pretty sure that academia in general, and scholarly publishing in particular, use many forms of digital communication. What am I missing?
We think of our audience as the individual researcher, when in fact it’s individuals with audiences of their own. If we published in that manner, and allowed feedback more robustly, we might have a better communications environment. We’re all about dissemination, not engagement, while the audience (with audiences of their own) is about engagement. That’s a big gap in thinking.
Many academics accept that culture trumps technology (or change, or innovation, or new ideas), when in fact there are many signs that innovation, change, new ideas, and technology (in this case, software) trump culture when there is leadership. My complaint is that there’s a passivity in academia that deflects leadership. But is that “culture” in the sense we seek? Don’t we strive for something greater?
Yes, the tone is probably a little frustrated, but my hope is that it spurs some thinking about how much better things could be if only we’d embrace change and new opportunities.
I’m not sure the idea of audience as communicators is anything new for academic publishing. If you’re publishing a textbook, you’re creating material for a communicator (teacher) to broadcast to an audience (students). Journal articles have always been communicated to audiences through lecture courses, academic meetings, journal clubs, lab meetings, discussions among colleagues, etc. Yes, new technologies do create new avenues for communication, but the basic concept isn’t all that different.
As for the culture/technology argument, I’m not sure how easy it is to generalize. If a new technology is incredibly useful, it can certainly change the culture of a group. If a technology has no use for a given culture, then that culture will ignore the technology. Compare the way something like email radically altered the culture of science with the way something like the Nature Network didn’t.
Needs change as situations change. The ad-hoc network of Twitter users at a meeting may not remain all that helpful to one another after the meeting is over. The Facebook page for the military facility described above was incredibly helpful for Roughead during the flooding crisis. How useful a tool is it for him on days when there’s no flood? Do you think he really spends hours reading the posts there on an average day?
One other thing to think about is that research has become an incredibly competitive activity, and if anything, it continues to grow more and more competitive as we proceed. There are more and more candidates for fewer and fewer jobs and funding gets tighter and tighter. The pressures are tremendous and only getting worse. Can you think of other highly competitive professions that share this level of information with one another? Do NFL coaches post their plays and strategies on their Facebook pages? If anything, academia is already remarkably open and communicative given the career pressures in place.
To your last point, once something is published, sharing is the norm (as you note), but I don’t think we actually use software to create content that is easily shared in ways people want to share it. The NFL may not share coaches’ plays, but fantasy football is flourishing, full of stats, and enables (nay, depends on) sharing. Academic sharing tends to be local (geographically isolated). There are boundaries to protect, but there are ways to protect them while engaging the audience using software and their incredible audience-laddering capabilities.
Do I think Roughead spends hours reading the social media tools every day? No. But I’ll bet his Sailors use them every day, and I’ll bet they’re useful every day.
The pdf is readily sent around, html versions of papers are copied and pasted, and linked to. Nearly every journal out there has some version of social media sharing buttons on every article for sending it to Twitter, Mendeley, Facebook, etc. Most journals let you download figures directly as slides for talks. What sorts of content sharing technologies are you suggesting beyond this? What are the ways that the material is being shared that isn’t being addressed?
On the other stuff, yes, that’s basically my point: each culture has its own needs. The culture of NFL coaches trumps technology that doesn’t work for their cultural needs. Those same technologies work great for fans and have trumped previous cultures, creating the new and very popular fantasy leagues. The Sailors’ needs are met by Facebook; the Admiral’s are not, except under circumstances where they are. That’s why it’s difficult to make blanket statements about one always trumping the other.
The PDF was designed to mimic print, not to be shared; email and bandwidth are what made sharing it possible. HTML copy/paste can be a nightmare, and links aren’t used in scholarly communications nearly as much as they could or should be, nor are links counted as citations. Social sharing buttons help, but often they’re implemented as an afterthought, not embedded in PDFs or followed up with counts (most shared?). Downloading figures as slides is great, but that usually supports local sharing, not collaborative slide libraries.
I think we’re seeing the possibilities impinging on the culture, but leadership from within the culture to change things would reveal more possibilities.
The scarcest commodity is time, and that is why more extensive feedback networks like open peer review may never succeed to the extent that you appear to favor, Kent.
For all the advantages of technology, of course, there are corresponding disadvantages. I understand now that hackers can unlock the doors of your car and start the engine through your GPS system. Autos have become so computerized that the average Joe can no longer do car repairs at home.
You mention peer-review systems having been computerized. I’m aware, of course, that they have become common in journal publishing. I’m not aware that they are being used extensively in book publishing. Most editors and their assistants just use basic off-the-shelf software like Excel for this purpose. Do you know of a counterpart to, say, Editorial Manager for book publishing?
Open peer review is something I don’t necessarily favor, at least as I think you’re using the term. But I’m really glad you brought this up, because it provides wonderful examples of what I’m trying to get at here. Peer review happens in many layers, many of which we assume have to exist informally and out of sight. We used to think this about friendships, acquaintanceships, and business networks; now you can see social, acquaintance, and business networks all around you. As for peer review, online commenting, comments on Facebook, RTs on Twitter, and so forth all provide feedback that journals and books aren’t accustomed to acquiring, processing, and representing to their readers or users. Now you can see many discussions and opinions that were previously hidden, and at least point people to them, thanks to software and the “audience of audiences.” But do we do that? Not very well, if at all. Why not? Because we view peer review as an internal process and have a hard time accepting the very real post-publication process. And since we’re not capturing and integrating that process now that more of it is apparent, we’re not taking advantage of the software and the “audience of audiences.” Great tangent!
There are many publishers making great efforts toward tracking and integrating post-publication peer review, so I think your characterization is perhaps no longer accurate. Even those publishers making no efforts themselves are aware of the concept and are eagerly watching to see what can be done.
Actually capturing those threads though, has proven extremely difficult. Much of this is due to the decentralized nature of the internet. Discussions happen through an enormous number of channels and I’m not sure that technology yet exists that lets one point a reader to every single mention of a paper that happens in every possible online forum.
I’d also argue that not much progress has been made, not because publishers think that peer review is an “internal process”, but because researchers feel it’s a “private” or perhaps “trusted” process. Much of the discussion that takes place does so through private, trusted channels, in ways that don’t leave a public, traceable record. I’m not sure journals have much hope in capturing these private conversations and presenting them to readers. Some things have not transitioned to “social” and in this case, there are deliberate efforts to keep things from doing so.
Can you give me an example of what you’re citing as “great efforts toward tracking and integrating post-publication peer review”?
Leadership around peer-review could harness some of the items I just now mentioned in a reply to David Wojick. Why not put a Twitter hashtag on each article so that you can track tweets about it? Why couldn’t an academic leader say, to heck with it, if you share an article and a million people (documented) encounter your sharing, you have an impact that deserves some new form of academic credit? What if running a blog that attracted thousands of links from reputable sites and other academics earned you academic prestige? A recent study of economics blogs found that blogging there increased blogger and institutional reputation, and also seems likely to affect policy. Where is the academic leadership willing to acknowledge this new form on a CV?
Well, perhaps it depends on your definition of “great”, but publishers like PLoS are doing a lot of experimentation–tracking blog mentions of papers as one example. These efforts are worth watching, but they’re not quite as simple or obvious as one would like to think.
One problem with their efforts, and with your hashtag suggestion is that it requires those discussing the paper to willfully make the effort to have their discussion tracked by you. How many times have you been to a meeting where halfway through it, people realized that there are multiple hashtags in use for that meeting? I can’t remember a meeting where this didn’t happen. Now multiply that by the number of papers published each year. For tracking blog usage, the blogger has to jump through the hoops of connecting to the actual paper through researchblogging.org, a step that many can’t be bothered to make.
As I said in the comment above, you can try to track things, but you’re still going to miss a lot. And more importantly, as said above, much of this discussion is not meant to be shared. If I call an important scientist an idiot, I probably don’t want that recorded as part of the reaction to that scientist’s paper. It’s an offhand comment I’d make to a colleague, and not something I’d want to come into play when that scientist was reviewing my grant application.
And as others have already commented, much of what you’re talking about measures quantity of reaction, not quality of impact. There’s a big difference between something being of high quality and something being popular (unless one assumes Justin Bieber is likely to be remembered as the most influential musician of our times). I can write a blog with funny pictures of cats, and then include a note about a research paper. It gets read 1 million times. Does that make it an important paper?
How do you decide what’s a “reputable site”? I’ve read science blogs by academics that are anti-religious polemics or ridiculous childish attempts to look cool. Do these count? Who gets to make that call? How much serious analysis can one do of a paper in 140 characters? Should a tweet count the same as a well-written review article?
The arsenic life paper probably received more blog and Twitter coverage than any other paper in the last year. Does that make it a high quality and important paper, or one that’s fun and easy to discuss? Should scientists choose to only focus on topics that are sexy and likely to be noticed on the internet, or topics that answer important questions?
What exactly are we trying to measure here?
Problematic though the Impact Factor is, at least it’s based on citation, which requires further work to be done based on the cited paper (or at least a review of a field noting that the paper is important). It has a much higher hurdle to clear as far as counting. I can start 100 Twitter accounts tomorrow and tweet out the magnificence of my last published paper. Will that make me an important scientist?
Citation is something people do ritualistically, based on information from a title or abstract, and many times the “most cited” paper for a journal is the most notorious, discredited, and salacious paper ever published by said journal. If I run a high-impact social hub in a specialty or domain, and I influence thinking through my words, selections, and contributions, do I need an article and citations with feet of clay to get academic credit? Today, yes. Tomorrow? Maybe not.
I think you’re tarring citation with a mighty big brush. Are you claiming that all citation is meaningless and fraudulent? Why does your journal even bother with citations in its articles then?
How do you prove your social hub is “high impact”? How do you prove your “influence”?
If I’m funding research, do I want to give money to someone who does important work that may cure a disease, or to someone who’s really popular and tells everyone about lots of cool stuff they should read?
I’m just saying that if you’re tarring social media with a big brush with a few bristles, you can do the same with citation. It’s not infallible, and in fact has lots of inherent problems. A new thing that might have the same problems isn’t necessarily worse.
If you’re funding research, you might also fund information synthesis. There’s plenty of funding for review articles and systematic reviews. Why not a systematic blog?
I made the point about time as a scarce resource because even if scholars do engage in informal peer review of various kinds, I am sure they are very selective about doing so, concentrating on just what is of most interest to them at the moment. The idea that there can be successful general open peer review, I think, runs smack up against the opportunity cost involved in scholars spending time on that activity when they can gain more rewards from using it on their own research and writing.
I agree with that. But in a networked world, one engineered with software and filled with audiences, scarce time becomes aggregate time in a way, and thousands of 1-minute reactions can indicate whether a paper is going to rise or fall. Twitter can already predict stock market fluctuations, weekend box office, and other things — and all you have to do is know how to watch it.
Kent, I have several problems with your vision. First, it is starting to sound like Nelson’s “link everyone to everyone” in the early days of the Web. It sounds like you want publishers to collect and display everything that anyone says online about a paper. That is an experiment someone might try, not an obvious end state. Moreover, if anyone did it, it would probably be an aggregator like me, not a publisher.
Second, you keep talking about leadership. The academic community does not have leaders as a whole. Nor, for that matter, does scholarly publishing. This is not the Navy. So what do you mean?
But above all, I question this myth about culture and technology, as if those were two adversaries. Macroscopically, change happens when it works, usually slowly because there is a lot to change. Resistance happens when it does not work. If it really works then change happens relatively quickly. There may not be a Facebook for scientific publishing, or if there is we have not found it. In any case, it is a matter of experimentation, not top down leadership.
But leadership can also happen by modeling the best behavior or desired traits. Is the head of the Navy not leading outside the Navy by modeling interesting new opportunities and sharing his insights and stories? I’m not talking command-and-control stuff, but leadership in the softer, inspirational, experimental mode.
There may not be a “Facebook for scientific publishing,” but there is a Facebook upon which scientific publishing is shared, discussed, and criticized. There is a Twitter on which similar things happen. Some of the people sharing, discussing, and criticizing it on those venues and others do a good job of it, have valid opinions, and amplify materials to new audiences with their curation and broadcast abilities.
If we want this to occur, why not put a #jbc2011.1043 hash tag on the article this year on page 1043, and ask people who tweet about it to use it? And then run a listing of related tweets next to the article as they accumulate? And count the audience our audience created for us because they are an audience of audiences?
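The mechanics of that suggestion are simple enough to sketch. Here is a minimal, purely illustrative Python example of tallying tweets that carry a per-article hashtag and estimating the aggregate audience they reached; the tweet records and field names are invented for illustration, and a real system would of course pull this data from a search API rather than a hard-coded list:

```python
# Hypothetical sketch: count tweets carrying a per-article hashtag and
# estimate the aggregate audience reached (the "audience of audiences").
# All data and field names below are invented for illustration.

def article_reach(tweets, hashtag):
    """Return (tweet_count, aggregate_followers) for one article's hashtag."""
    matching = [t for t in tweets if hashtag in t["hashtags"]]
    reach = sum(t["follower_count"] for t in matching)
    return len(matching), reach

tweets = [
    {"user": "a", "hashtags": {"#jbc2011.1043"}, "follower_count": 1200},
    {"user": "b", "hashtags": {"#jbc2011.1043"}, "follower_count": 350},
    {"user": "c", "hashtags": {"#other"}, "follower_count": 99999},
]

count, reach = article_reach(tweets, "#jbc2011.1043")
print(count, reach)  # prints: 2 1550
```

The point isn’t the code, which is trivial; it’s that once the hashtag convention exists, the “count the audience our audience created for us” step is a straightforward aggregation.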
As for academic leadership, what if Harvard started counting tweets that generated more than 1 million aggregate audience (via retweeting and social sharing) as publications? What if MIT created a new impact metric called the LinkFactor, which combined social, hypertext, and traditional linking to count the entirety of a publication’s impact in the networked world?
Audience of audiences, proliferation of software.
But isn’t such a LinkFactor subject to the same criticism of other quantitative metrics, that it simply measures quantity, not quality?
Kent, starting a new thread here as we’ve run out of space above.
I think what this boils down to is the concept of “doing science” versus the concept of “talking about science”, and where one’s priorities fall.
If you’re equating citation with social media, then you should note that researchers don’t expect to be rewarded for citing other papers. One doesn’t receive career credit for putting particularly germane references in one’s papers. If the two are equivalent, then why should one be rewarded for social media but not for citing other papers?
If you’re running a funding body dedicated to wiping out a particular disease, do you want to give your funds to someone doing research that will help understand and cure that disease, or to someone who does a good job talking about current trends in research on the disease? Yes, discussion and communication does have value, but how does that compare with doing the actual research? If you’re running the American Heart Association, are you going to give a 5 year, $5 million grant to someone for running a blog about heart disease? Probably not, though you could see a funding body spending a few thousand dollars here and there on such things. So at best, the rewards offered for talkers are going to be meager as compared with doers.
If you’re a university, are you going to offer tenure to the researcher who brings in millions of dollars in grants, comes up with patentable products and cures diseases, or to the professor who does a good job telling people about the work researchers at other universities are doing? Given the low priority that teaching is given at most research institutions, the answer to that is fairly obvious. Being a good teacher is rewarded, but at nowhere near the same level as running a strong research program.
But researchers are rewarded for publishing papers that cite other articles (scholarly form), not for writing thoughtful reviews, interpretations, or analyses that cite other articles (not scholarly form). And those citations (links) don’t count toward the impact of the publishing venue, so authors in those venues don’t get credit for highly cited (high-impact) articles. Concrete example: as a joke, I thought it might be interesting to calculate the impact factor of this blog. I haven’t gone through the trouble, but I’ll bet that if link = citation, and you accept that this is about equivalent to a review journal in the social sciences, we’d rank pretty high on the list. Is that invalid? It is right now, because of how we think about software (software systems for writing and publishing aren’t as valid as print) and audiences (audiences that have audiences aren’t as valid as publications that have audiences). But I don’t think that’s going to last, and I’m urging leaders in academia to wake up and smell the new era. Like Wall Street, they undervalue what’s right in front of their eyes because their framing concepts are wrong (print/citation vs. online/linking).
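For the record, the standard two-year impact factor is just a ratio, and treating inbound links as citations only changes the inputs, not the arithmetic. A minimal sketch (with invented counts, purely for illustration):

```python
# Sketch of the standard two-year impact factor calculation, where
# "citations" could equally mean formal citations or inbound links.
# The example counts are invented for illustration.

def impact_factor(citations_in_year_y, citable_items_prior_two_years):
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in those two years."""
    return citations_in_year_y / citable_items_prior_two_years

# E.g., if posts here drew 1,500 inbound links this year to the 300
# posts published over the previous two years:
print(impact_factor(1500, 300))  # prints: 5.0
```

The substitution link = citation changes nothing about the formula; the argument is entirely about which inputs the academy is willing to count.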
I think you’re confusing the actions that are rewarded with the mechanism for rewarding those actions. Scientists do not get rewarded for writing papers. They get rewarded for generating research results. The papers are the mechanism through which they prove/announce that they have achieved those results. The papers are not an end unto themselves.
If you’re up for tenure and all you have to show for the last 5 years is that you’ve published a lot of highly-cited and well loved review papers but with no original experimental results, then you’re likely in trouble.
New forms of communication are worth exploring and trying to integrate into the current system of priorities for researchers. But publishers aren’t the ones who set those priorities. Those are set by funding bodies and institutions. We can certainly present these ideas to the leaders in these areas but they’re the ones who will have to realign their priorities. Our job is to serve the needs of our readers and authors, not tell them what their needs are.
I think a system that offers rewards for spending time networking is a system that then devalues actual research. If I get equal career rewards for posting my status on Facebook as I do for generating experimental results, then I’m going to spend an awful lot of my time online talking about science instead of at the bench doing science. And I don’t think that fits well with the priorities of most funding agencies.
And, if I’m running a teaching university, I want the professor who interprets, analyzes, and synthesizes. If I’m running a research university, I want the grants, etc. If I’m running a hybrid (which most are), I want a little of both.
But where’s the balance? Tweeting is a lot easier than coming up with a research program and generating results. Are you offering equal rewards for both? If, as seems to be likely, the rewards for blogging and such are going to be minor as compared with pulling in grants, then why would researchers waste valuable time on something that doesn’t pay off very well?
All that said, here’s a superb example of the sort of thing your article calls for:
http://www.businessinsider.com/before-you-buy-concert-tickets-see-where-your-facebook-friends-are-sitting-2011-8
The idea here is that Ticketmaster incorporates Facebook data. When you buy tickets for an event, you can see where your friends are sitting and buy tickets near them.
Not sure how this would translate to scholarly publishing. Maybe meeting organizers could show you which of your colleagues had signed up for a meeting so you’d know if it was worth going. Or you could alert readers to new articles by authors who are in their group of colleagues….
I think the value accorded to different activities may well differ considerably by field. E.g., in law, journal articles published in law reviews are not peer reviewed, so presumably they do not carry as much weight as articles in other fields. On the other hand, I’ve been told that some senior law professors run blogs in their areas of specialization that have become very authoritative and important in those areas, and one can imagine that this influence is duly noted in performance reviews.