
Not long ago, technological distinctions seemed so clear that we digitized them — 1.0 and 2.0 and 3.0, a deterministically tinged march of inevitability. Social media was new and suspect. Mobile would create a distinct economy, paralleling an equally distinct economy of Web-based e-commerce. Each piece of technology had its own separate fate, like puzzle pieces possessing free will.

We defined technology by its differences.

Now, it seems these differences and the resulting terms have been thrown up into the air. Is a blog social media? What about text messaging? What’s the Web? What’s not? What’s more directly “of the Internet”? What’s mobile? What’s in the cloud? Where is Web 2.0 now? Is the social Web really the entire Web? Is it now the Internet? Or something more? What is the phone system now — landline, cell, Skype? All three? What about FaceTime on the iPad?

Surely we can delineate the technology frontier when it comes to devices — a cell phone is not a computer, a computer is not an iPad, and an iPad is not a laptop. They are each very different. Except that now there’s an app store for Apple laptops, Microsoft Office documents work on iPhones, you can use apps to text from tablets to phones to laptops, everything is an email device, and everything has a browser. They are all connected computing devices.

Are devices like milkshakes, distinctive only by flavor, size, and toppings?

Where are the media boundaries we keep talking about — mobile, touchscreen, networked, social? Isn’t everything all of the above now?

Isn’t the present equivalent to the imagined future? Isn’t the present a sufficiently rich proxy for what’s to come? Is there anything more to wait for?

Mitch Joel talked about this last July in a post urging everyone to become a “Presentist,” as opposed to a futurist:

It’s a classic business [problem]: brand misses the Internet wave, so instead of starting slowly now and engaging, they feel they may be best served by waiting on the sidelines for the next wave to hit. They mistake the complete shifting of our world for a fad instead of what it truly is: a fundamental shift in how people are connected, consuming and creating media. The opportunity is still here for you to be present. Become a Presentist. So, where is all of this going? Who knows and who cares.

Joel’s argument here and elsewhere boils down to this: it’s not the next thing that will make sense, but the now thing that already makes sense — there is something fundamental tying the past, the present, and the future together, a theme deeper than the iPhone 3GS versus the iPhone 4 versus Android versus whatever. Being a “Presentist” and focusing on the now can clarify thinking and spur action, because figuring it out now is equivalent to figuring it out later — except that you figure it out sooner.

His point dovetails nicely with some thinking going on elsewhere about the pitfalls of treating technology as something either new or separate. Tom Slee recently wrote a very interesting post on his Whimsley blog about how “new” technologies now define themselves by what they unleash, not by what they are:

Maybe we should stop talking about “information and communication technologies” or “the Internet” or “new and social media” as a single constellation of technologies that have key characteristics in common (distinctively participatory, or distinctively intrusive, for example), and that are sufficiently different from other parts of the world that they need to be talked about separately. . . . Drawing inappropriate boundaries around “new and social media” can also exclude essential elements of a story. A week ago a BBC reporter on the radio described how, within days of seizing control of Benghazi, the Libyan opposition had set up a newspaper and two radio stations alongside a web-based radio station. An approach that focuses on “new media” would have to include the web radio and exclude the other two initiatives, but to do so would misrepresent the message of a sudden flowering of speech. The Internet is just one of many channels, and activists are using all the media at their disposal. Better, perhaps, to avoid drawing the boundary at all.

Slee goes on to talk about how defining laws and rules separately for the Internet is probably no longer sensible, explains how thinking of the network as a stereotypical network doesn’t reflect the varied networking architectures in existence, and proposes a set of alternative distinctions, including things like “displaced media” and “multi-channel outlets.”

Henry Farrell expands on Slee’s ideas in a post entitled “Against studying the Internet.” Farrell’s point is that we are likely at a juncture where technology isn’t the interesting thing. It’s too pervasive to be uniquely interesting anymore. It’s not the new, it’s the now. Because of that, what people are doing has again become the interesting thing:

Instead of wanting to study ‘the Internet’ or ‘Facebook’ or whatever, we should investigate the possible existence or relative strength of various posited mechanisms which causally connect certain explanatory factors with certain kinds of interesting outcomes. . . . The relative efficacy of these mechanisms (or, better, the circumstances under which they are likely to be more or less efficacious) should be the focus of investigation. Instead of asking ‘does Facebook help protests in authoritarian regimes?,’ one would ask questions such as ‘does social influence from peers make individuals more likely to participate in demonstrations?,’ ‘does widely spread information about protester deaths make individuals more or less likely to participate?,’ ‘does government-provided information make citizens less likely to participate in anti-regime protests?’ and so on.

Technology is no longer the thing seeping into our users’ lives. Rather, their lives are being lived with familiar and useful technologies in their midst. Technology distinctions are secondary to user intent and purpose.

Maybe the Internet, the Web, the smartphone, the blog, email, and everything else digital haven’t been about “new media” or technology at all. Maybe it’s always been about personal empowerment, socializing in a peripatetic world, keeping up with a rapidly changing schedule, eliminating hassles, learning new things, knowing things at the right time, and connecting easily with other people.

Maybe it’s been about creating trust networks and anchoring communities, creating better information prosthetics, and finding ways to feel like we’re in more places at once than physics otherwise allows.

Maybe you don’t need a technology strategy, a mobile strategy, a social media strategy, an email strategy, a Web strategy, or a digital strategy — maybe you can just have a customer strategy and accept the “now” that is no longer new.

“New” technology is ultimately being used to pave a path back toward ourselves, to fulfill our aspirations. What we’re doing today is all about helping people manage their lives more efficiently and feel more in control — rather than making something for email, something for tablet, something for mobile, something for Web. Internalizing that reality may help us make better design, marketing, and product decisions.

Kent Anderson

Kent Anderson is the CEO of RedLink and RedLink Network, a past-President of SSP, and the founder of the Scholarly Kitchen. He has worked as Publisher at AAAS/Science, CEO/Publisher of JBJS, Inc., a publishing executive at the Massachusetts Medical Society, Publishing Director of the New England Journal of Medicine, and Director of Medical Journals at the American Academy of Pediatrics. Opinions on social media or blogs are his own.

Discussion

5 Thoughts on "Slicing and Dicing — Do Distinctions Between Users' Technologies Make Sense Anymore?"

Good post, Kent. I completely agree that our actions need to be driven by user behavior, rather than by implementing technology for technology’s sake, or because someone is pushing a particular agenda/business plan.

As an example, do journals really need their own individual siloed apps? Given that for most (if not all) journals, the vast majority of users come to articles by following links from searches in PubMed and Google, or from links in e-mailed tables of contents, what use is an app that is not directly opened by these links? Apps seem like a misguided attempt to reinstate the idea of browsing print issues of individual journals, a practice long abandoned by readers facing information overload. Yet every publisher out there seems insistent that they simply must have an app. Why? What is the real-world use case for a journal app that isn’t better solved through a mobile-optimized website?

The post also brought to mind this article about China’s alleged “Jasmine Revolution,” which seems to have no real-world substance and yet has prompted great efforts from China’s security forces as they chase online phantoms rather than addressing actual activities.

I think you are onto something here, Kent, and I am reminded of digital printing technology as an example. Photocopiers began coming into widespread use in the 1950s, but it was only about fifty years later that the utility of this technology for publishing became evident, when Lightning Source used it to provide POD linked to a distribution system through Ingram, and Google then gave rise to the “long tail.” This became, for scholarly publishing, what John Thompson dubbed “the hidden revolution” in his 2005 “Books in the Digital Age.”

Yes, a good addition. Photocopying really did start a lot, but so did other “modest” technologies, like fax, mimeographs, etc. I remember running the mimeograph in junior high, a smelly but good technology for making small runs of educational materials locally. The goal? Save money, give teachers more autonomy, and give a 9th grader bored with school something innocuous to do with mild chemicals. Fax was actually bordering on becoming a bigger publishing technology when the Internet came along and trumped it, with email shoving it aside rudely — but only because email solved the duplication and distribution issues better. SMS/text messaging is now a major source of donations for many charities. Why? Because it’s a common technology and a time-saving way to donate money spontaneously. These types of stories are legion — Twitter didn’t know what it was for, and TinyURL and bit.ly didn’t know their futures until users started putting them together because they saved typing time.

Arguably, Ctrl-C/Ctrl-V is one of the most important inventions for helping solve the duplication/distribution problem in everyday workflows.

The technology and the use often meet up in unanticipated ways, but it’s always the utility that makes it work.

The flip side of this, to me, is that certain solutions are best realized using certain technologies, yet some people — simply because technology is involved — think the solution is either technology-centric or requires technology oversight. We need to get beyond this, or else we’re going to continue to think that “now” solutions are somehow weird and alien, when in fact they’re normal and current.

I’ve heard of too many organizations with divisions between technophiles and technophobes, with IT departments trying to dominate strategic decisions, and so forth. Folks, none of this matters to the user! To them, their technology is normal, and they’re trying to use it to solve real problems.
