Scientists and publishers share a deep fascination with new technologies. I spent years in a technology-driven laboratory, and we couldn’t wait to get our hands on the latest computer workstation, microscope, or piece of benchtop equipment to figure out how we could put it to work furthering our research goals. Publishers react much the same way, with excitement over seemingly every new device and a drive to see how quickly we can (and whether we should) bring our content to that device in a useful manner.
Which brings us to the flavor du jour of the technology world: wearable computing (e.g., smartwatches or Google Glass). While one of my Scholarly Kitchen colleagues has expressed great enthusiasm in this area, I remain skeptical: what exactly do these devices do better than the tools we already have?
Marco Arment recently made the point that the key factor in the rise of the smartphone is portability, the fact that the device is always with you. But then he asked whether glasses or watches add anything significant beyond this:
But why do we need “smart” watches or face-mounted computers like Google Glass? They have radically different hardware and software needs than smartphones, yet they don’t offer much more utility. They’re also always with you, but not significantly more than smartphones. They come with major costs in fashion and creepiness. They’re yet more devices that need to be bought, learned, maintained, and charged every night. Most fatally, nearly everything they do that has mass appeal and real-world utility can be done by a smartphone well enough or better. And if we’ve learned anything in the consumer-tech business, it’s that “good enough” usually wins.
I was thinking about this as I watched the video from Macmillan’s Digital Science group, “Imagining the Laboratory of the Future.” I have great admiration for Digital Science: they offer publishers a new paradigm for investing in and developing new technologies, and they have connected themselves to a really interesting group of startups.
But this video, offering a concept for the lab of the future, shows that even a group this smart hasn’t quite figured out a “killer app” for Google Glass. Take a look, and tell me in the comments if there’s a single thing in it that couldn’t be done just about as easily (and likely less expensively) with a smartphone, a tablet, or even a laptop. Even if we assume the development of the voice and gesture controls necessary for the Glass interface, what’s the great advantage here?