Scientists and publishers share a deep fascination with new technologies. I spent years in a technology-driven laboratory, and we couldn’t wait to get our hands on the latest computer workstation, microscope, or piece of benchtop equipment to figure out how we could put it to work furthering our research goals. Publishers react much the same way, with excitement over seemingly every new device and a drive to see how quickly we can (and whether we should) bring our content to that device in a useful manner.

Which brings us to the flavor du jour of the technology world, wearable computing (e.g., smartwatches or Google Glass). While one of my Scholarly Kitchen colleagues has expressed great enthusiasm in this area, I remain skeptical: what exactly is it that these devices do better than what we already have?

Marco Arment recently made the point that the key factor in the rise of the smartphone is portability, the fact that the device is always with you. But then he asked whether glasses or watches add anything significant beyond this:

But why do we need “smart” watches or face-mounted computers like Google Glass? They have radically different hardware and software needs than smartphones, yet they don’t offer much more utility. They’re also always with you, but not significantly more than smartphones. They come with major costs in fashion and creepiness. They’re yet more devices that need to be bought, learned, maintained, and charged every night. Most fatally, nearly everything they do that has mass appeal and real-world utility can be done by a smartphone well enough or better. And if we’ve learned anything in the consumer-tech business, it’s that “good enough” usually wins.

I was thinking about this as I watched Macmillan’s Digital Science group’s video on “Imagining the Laboratory of the Future”. I have great admiration for Digital Science: they provide a new paradigm for publishers looking to invest in and develop new technologies, and they have connected themselves to a really interesting group of startups.

But this video, offering a concept for the lab of the future, shows that even a group this smart hasn’t quite figured out a “killer app” for Google Glass. Take a look, and tell me in the comments if there’s a single thing in there that couldn’t be done just about as easily (and likely less expensively) by a smartphone, a tablet or even a laptop. If we assume the development of the voice and gesture controls necessary for the Glass interface, what’s the great advantage here?

David Crotty

David Crotty is a Senior Consultant at Clarke & Esposito, a boutique management consulting firm focused on strategic issues related to professional and academic publishing and information services. Previously, David was the Editorial Director, Journals Policy for Oxford University Press. He oversaw journal policy across OUP’s journals program, drove technological innovation, and served as an information officer. David acquired and managed a suite of research society-owned journals with OUP, and before that was the Executive Editor for Cold Spring Harbor Laboratory Press, where he created and edited new science books and journals, along with serving as a journal Editor-in-Chief. He has served on the Board of Directors for the STM Association, the Society for Scholarly Publishing and CHOR, Inc., as well as The AAP-PSP Executive Council. David received his PhD in Genetics from Columbia University and did developmental neuroscience research at Caltech before moving from the bench to publishing.

Discussion

39 Thoughts on "Is Google Glass Part of the Laboratory of the Future?"

Given this is aimed at post-docs/students, the phrase “without leaving your bench” will doubtless appeal to many PIs.

Thinking along those lines, there’s way too much here to distract a student from actually doing their experiments. If I’m a PI, I want them full-time at the bench doing trial after trial, not interrupting experiments to video chat or update their Facebook pages. I’m thinking that for a PI, this might be a more desirable piece of equipment to install in the lab for students:
http://www.wouldyoubelieve.com/cone.html

And that gets to the problem with much technology development done by non-scientists on behalf of scientists: tools get proposed because the technology to build them exists, rather than because they serve a need. Think of how many things publishers have come up with to help researchers find more articles to read, and contrast that with how researchers are asking for fewer articles to read. Or all the social networks for scientists that ask researchers to spend lots of time and effort on things other than doing actual research.

And that’s not even getting into the ads that are gonna run on something like this. Think about how annoying pop-up ads are on a website. Now put a pop-up ad on every single thing in your field of vision. Shudder.

Google Glass is a transitional technology. While there may be some use cases (here I am thinking medical and military, but I could also see the application in a laboratory, where a researcher could easily combine a recording of the experiment from their perspective with audio notes) where Google Glass might be immediately helpful, it otherwise falls into the category of a novelty gadget. That being said, it is an important novelty gadget because it provides a platform for Google and other software companies to work on the user interface, the databases that support visual overlays, and the connections to real-world objects (the so-called Internet of Things). This is important not because Google Glass will ever become mainstream (it is just too socially awkward and stigmatizing) but because the technology will eventually become invisible. The technology will become refined and much smaller, and will embed invisibly in prescription glasses (this is already starting to happen) and contact lenses (this is already technically feasible). Eventually there will be implants for the retina or optic nerve, at which point humans will have technology that essentially provides what we would have previously called telepathy. As Arthur C. Clarke (no relation) said, “Any sufficiently advanced technology is indistinguishable from magic.”

I’m not claiming that such embedded technology will be a good thing, just that it seems inevitable. When the technology becomes invisible and cheap is when things get interesting.

What exactly is it that these devices do better than what we already have?

I’m not sure this is really the operative question. Smartphones haven’t taken over the cellphone marketplace because they do a better job of what basic phones did. They’ve taken over because they do things that basic cellphones couldn’t do at all. (Well, and one other reason: smartphones are also taking over because phone companies don’t want us to have the option of not buying data from them, so they’re making basic phones less and less easily available on their plans.)

So I think the question we need to ask about Google Glass isn’t “does it do a better job of what our existing devices already do?” but rather “does it let us do desirable things that we can’t yet do?” I’m not sure the answer to that question is yes, but I think the distinction between the two questions is important.

That question works if you’re only thinking of smartphones as a replacement for a phone. But really, they’re small, handheld computers. What does your smartphone do that your phone and computer (and for that matter your camera and other specialized devices like barcode readers) together couldn’t do? What’s new about it is the portability, the convergence, the always-connectedness, the always having it at hand. That’s why it’s better than what we already have. Wearable computers don’t seem to offer that same shift; they just move it from your hand to your face or wrist. Is that enough of an improvement (or an improvement at all)?

I guess it all depends on whether you consider the ability to access the internet by pulling a small device out of your pocket a new capability (because before smartphones, you couldn’t do it in a cab or in line at the bank) or the extension of an old capability (because before smartphones, you could still access the internet).

That to me is the big difference–the portability of the experience you already had, not necessarily the creation of a new experience.

Having worn Glass now for 7 months, I definitely see possibilities. The screen is out of the way, sleeping but instantly accessible, hands-free. I can’t comment on the lab, but the few apps available show promise.
When we think about content, there is an app called Field Trip that alerts you (by chime; you can decide if you want to learn more) when you are in a notable location about which there is information. In Charleston, I was shown images of the streets I walked and what they looked like after Civil War damage. In London, it offered architectural pictures (with extensive audio commentary) of the churches I passed on the way to the conference. Word Lens offers a translation function that is nothing short of magic.
Can you do the same thing with your phone? Well, there are apps for that, but once you’ve experienced the inline and less obtrusive way, you can see the potential.
My two cents after 7 months of datapoints.
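
For readers curious what a Field Trip-style alert amounts to under the hood, here is a minimal sketch: a geofence check against a table of tagged locations that chimes when the wearer comes within range. This is an illustrative guess at the mechanism, not Field Trip’s actual implementation; the place names, coordinates, and blurbs are invented.

```python
import math

# Hypothetical points of interest: (name, latitude, longitude, blurb).
POINTS_OF_INTEREST = [
    ("St Philip's Church", 32.7785, -79.9306, "Rebuilt after Civil War damage."),
    ("Old Exchange Building", 32.7766, -79.9270, "Colonial-era customs house."),
]

ALERT_RADIUS_M = 150  # chime when within 150 meters of a tagged location

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    r = 6_371_000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_alerts(lat, lon):
    """Return (name, blurb) for every point of interest within the radius."""
    return [
        (name, blurb)
        for name, plat, plon, blurb in POINTS_OF_INTEREST
        if haversine_m(lat, lon, plat, plon) <= ALERT_RADIUS_M
    ]

# Simulate the wearer walking past a tagged location.
for name, blurb in nearby_alerts(32.7784, -79.9305):
    print(f"*chime* {name}: {blurb}")
```

A real service would presumably query a server-side spatial index rather than a hard-coded list, but the distance test is the heart of the experience described above.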

I can see a utility for it. Since all your QR-labeled samples are unreadable without a scanning device, it would be very helpful to just look at the sample through Glass and have the info appear… sample confirmed.
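
As a rough illustration of what that lookup might involve, here is a minimal sketch, assuming the headset can decode a code from a camera frame and that the lab keeps a local, access-controlled sample database. The database schema, sample IDs, and function names are all hypothetical.

```python
# Toy sample database; in practice this would live behind the lab's
# access controls rather than inside the script.
SAMPLE_DB = {
    "S-000417": {"assay": "ELISA", "received": "2013-11-02", "status": "confirmed"},
    "S-000418": {"assay": "qPCR", "received": "2013-11-03", "status": "pending"},
}

def overlay_for(decoded_code: str) -> str:
    """Build the text the display would overlay for a scanned sample."""
    record = SAMPLE_DB.get(decoded_code)
    if record is None:
        return f"{decoded_code}: unknown sample"
    return (f"{decoded_code}: {record['assay']}, "
            f"received {record['received']}, {record['status']}")

# In practice decoded_code would come from a QR decoder running on the
# camera frame; here we simply simulate a successful scan.
print(overlay_for("S-000417"))
```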

1) Someone actually uses QR Codes? Really?

2) Are the benefits of just glancing at it while wearing special glasses over waving your phone at it enough to justify additional costs, extra device, maintenance, charging, etc.?

1) QR or whatever, some code that’s illegible to humans. Yes, we do use them. Rules protecting patient information are becoming very strict, and coding samples allows us to uniquely identify them without compromising confidentiality.

2) Put yourself in the shoes of someone sitting at the bench with a pipette in one hand and a tube in the other. Think how much time would be saved for someone who’s doing this all day.

*this is not a moonshot idea

If confidentiality is required, then isn’t a tool that automatically allows you to identify and see all the details about a sample the exact opposite of what you’d want? Would this prevent you from performing blinded studies? Is a QR Code better than just creating a spreadsheet and labeling your tubes by hand (“sample A1”, “sample A2”, etc.)?

I think it’s an interesting use case though, but it also raises questions about security. What sorts of security measures do you have to install on your network and your devices to ensure patient/subject confidentiality? Given that Google is an advertising company, and the purpose of Google Glass is to sell more ads, I would assume that everything you do while wearing them will be broadcast to Google and available for sale to advertisers. Would this be ethically allowable in terms of the confidentiality that’s needed?

I’m surprised all of you are debating the merits of “apps” and comparisons to smartphones, while ignoring the single feature I’d consider most important for any scientist: unobtrusive, instantly available recording of what you are *actually* doing.

There is always a tension in the lab notebook between noting too much (making work prohibitively slow) and too little (risking slips of mind interfering with your work). Let’s consider a common case: Did I remember to add that drop of liquid to that tube? Pre-Glass: I must discard my work, try to reconstruct my memory of the past few minutes, or risk the integrity of my experiment. With Glass: No more wondering; just “OK Glass, rewind.”
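
“OK Glass, rewind” implies a rolling buffer rather than an ever-growing recording: keep only the last few minutes of footage so the recording stays cheap and mostly ephemeral. Here is a minimal sketch of that idea, with strings standing in for camera frames; the frame rate, retention window, and function names are illustrative assumptions.

```python
from collections import deque
import time

FPS = 30
BUFFER_SECONDS = 5 * 60  # retain roughly the last five minutes

# A deque with maxlen silently discards the oldest frame once full.
buffer = deque(maxlen=FPS * BUFFER_SECONDS)

def capture(frame):
    """Append a (timestamp, frame) pair; old frames fall off automatically."""
    buffer.append((time.time(), frame))

def rewind(seconds):
    """Return the frames recorded in the last `seconds` seconds."""
    cutoff = time.time() - seconds
    return [frame for t, frame in buffer if t >= cutoff]

# Simulated use: capture a few "frames", then ask what just happened.
for i in range(10):
    capture(f"frame-{i}")
print(f"{len(rewind(60))} frames available from the last minute")
```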

That’s an interesting use case. If you recorded all of your actions throughout the day, would this replace the need to keep a laboratory notebook? Couldn’t you just go back to the video to see what you did for a given experiment? Is this as efficient a method as careful notekeeping? Does this lead to a reduced level of thoughtfulness and attentiveness in performing one’s experiments?

I don’t think it would replace the need for a laboratory notebook, but I think it could definitely reduce the time used on record keeping. I use a notebook for two things: to know what to do, and to know what I have done. While I might be able to easily remember all the individual steps in a protocol, it’s much harder to be 100% sure that I actually am/have been following it without recording this in some way. Even if it’s just placing a checkmark at the appropriate place in a protocol pasted or hand-written in the lab notebook, this takes up a surprising amount of time and necessitates carrying said notebook around.

I will always want to keep a notebook with me in the lab (even if I had an electronic notebook), but I don’t want to have to use it if there’s a better solution for proper documentation. If I can get protocol steps on a HUD (see the sketch below for what that might look like), that’s a nice feature, but to me recording the actual process is far more important.

Of course, behavior might have to be adapted if moving away from text-based note-taking. But there would be no less reason to be attentive when using a Google Glass/other video recording setup than with a regular notebook – the recording is for confirming to yourself in the *future* (perhaps in 5 minutes) that what you are doing *now* is (was) correct. You still have to confirm for yourself *now* that what you are doing is right, no matter how you record it.

The details of a standard for multimedia note-taking would have to be worked out over time. But the basic principle is simple: if you are able to see it, then an eyepiece camera should too. Video recording might not be sufficient for all cases; in those cases voice recording might be useful, as long as it’s not disturbing co-workers. Other workers might have a legitimate expectation not to be unnecessarily recorded on camera, and workers should have a legitimate expectation not to have to record *themselves* working at all times. Ethical, legal, and social implications should not be ignored, but neither need they stop us from taking advantage of improvements in technology.
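
A minimal sketch of the protocol-steps-on-a-HUD idea raised above: each hands-free confirmation timestamps the current step, which doubles as the audit trail that notebook checkmarks provide today. The protocol text, class name, and “confirm” command are hypothetical.

```python
from datetime import datetime

PROTOCOL = [
    "Thaw samples on ice",
    "Add 50 uL lysis buffer to each tube",
    "Vortex 10 s",
    "Centrifuge 5 min at 13,000 g",
]

class StepTracker:
    def __init__(self, steps):
        self.steps = steps
        self.log = []      # (step, timestamp) pairs: the audit trail
        self.current = 0

    def display(self):
        """What the HUD would show right now."""
        if self.current >= len(self.steps):
            return "Protocol complete"
        return f"Step {self.current + 1}/{len(self.steps)}: {self.steps[self.current]}"

    def confirm(self):
        """Voice command 'next': timestamp the current step and advance."""
        if self.current < len(self.steps):
            self.log.append((self.steps[self.current], datetime.now().isoformat()))
            self.current += 1

tracker = StepTracker(PROTOCOL)
print(tracker.display())   # Step 1/4: Thaw samples on ice
tracker.confirm()
print(tracker.display())   # Step 2/4: Add 50 uL lysis buffer to each tube
```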

I think it would be novel if this technology came as an add-on to prescription glasses. Rather than buying new Google Glass glasses I could just install it into a pair of my current prescription glasses and use it. That would be very handy.

I would think that the obvious advantage is that it frees your hands. In a lab setting, hands are very important.

Could you accomplish the same thing by mounting a screen on a wall or setting your phone on a shelf?

Not in most cases. The empirical question is, for a given lab activity, how much time spent using hands on a computer could be eliminated? This is a time and motion question. The bigger question, alluded to above, is what new things can be done when hands cannot be on the computer because they are otherwise occupied, as they often are?

But this is not an argument for wearable computing devices, it’s an argument for hands-free input methods. Is there an advantage to inputting text by voice or commands by gestures into a pair of glasses rather than doing the same into your phone or tablet?

Sure. The glasses move with you and look where you look.

I imagine that this technology has evolved from the pilot’s heads up display, which is very useful.

Again, what is the specific use case where this matters? Where do a laboratory researcher’s needs overlap with those of a fighter pilot?

The heads up display function is potentially useful wherever one needs information but does not want to take one’s eyes off the work at hand.

This kind of technology assessment/mapping is something I do a lot of. I even developed a system for the Navy that did it for everything going on in exploratory development (or budget category 6.2). Systematic assessment is not simple so I am not going to do it here (I am available), but first you break the innovation down functionally, as I have done a bit of above. Then you look at the arena of use to see where those component functions apply or could apply, singly or in combination. And as I have said one also has to specify the time frame of the uses, in order to do the assessment.

A couple of simple cases come to mind initially. One is the video described above by Jarlemag, which could be a valuable accompaniment to a journal article on the procedure in question. There are entire journals devoted to procedures and language is a poor way to convey them. I can see it becoming required in 30 years.

Then one can flip that idea and have the glasses computer walk the researcher through a new procedure, including answering questions along the way (and not with canned answers). Depending on pattern recognition technology it might even kibitz during the work. There is a lot of potential for AI here.

Having the computer know where you are looking is itself a potentially useful function. Back when I was at Carnegie Mellon, a friend developed the first “eye movement” machine that tracked where people looked when they did certain tasks. It might be useful to be able to say “this does not look right” and have the computer know what “this” refers to (a toy sketch of that step follows below).

There is lots more, of course. This is, functionally, a very basic technology.
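
To make the “know what ‘this’ refers to” step concrete, here is a toy sketch: intersect the gaze point reported by an eye tracker with bounding boxes of recognized objects. The object names, coordinates, and function name are invented; a real system would get the boxes from an object recognizer rather than a hard-coded list.

```python
# Each region: (name, x_min, y_min, x_max, y_max) in scene coordinates.
SCENE_OBJECTS = [
    ("gel lane 3", 120, 40, 160, 300),
    ("tube rack", 300, 200, 520, 360),
    ("pipette", 560, 80, 600, 280),
]

def object_under_gaze(x, y):
    """Return the first object whose bounding box contains the gaze point."""
    for name, x0, y0, x1, y1 in SCENE_OBJECTS:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# "This does not look right" while staring at lane 3 of the gel:
print(object_under_gaze(140, 150))  # -> gel lane 3
```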

My question would be: what laboratory activities occur regularly that are both so intense that one cannot look away even for an instant, yet simultaneously require checking other sources of input? The former is very rare in my experience, and when one is doing something that intently, having constant popups in one’s field of vision may be more of a problem than a blessing.

Having been the Editor in Chief of a biology methods journal for several years, we did experiment with doing video protocols. These proved to be generally useful teaching tools for one viewing but very poor tools for regular use. Most protocols are step by step procedures, and most steps are routine–you don’t need a video to show you how to make up a solution of sodium chloride. And the key to putting a protocol into use is to constantly refer to specific steps which is impractical at best on a video, requiring fast-forwarding and reversing to hit the right point, then sitting through minutes of explanation for what one could find and read in a few seconds. We found that including very short videos where there was a difficult-to-explain or -diagram physical manipulation was helpful, but in general, a written protocol was a vastly more practical interface for use.

Tracking eye movements is a helpful activity in certain types of research, usability and design studies for instance. It is not a common technique of value to most people working in laboratories though. Most of the other things you propose require the development of technologies that don’t yet exist. I generally find technologies like Siri to perform poorly in basic tasks using simple commands. Asking them to drive and automatically troubleshoot advanced experimental techniques seems wishful thinking at this point.

I never said anything about not being able to look away for an instant, although that would be a good case indeed. I am thinking of convenience, not necessity. I like the procedural guidance and discussion concept because I do a lot of work with complex procedure systems. They are an entire column in my taxonomy of confusions.

I do not do lab work so I am thinking in terms of cooking, which involves complex chemical procedures. Not having to keep running to the cookbook would be a big help. That is a near term ability. I presume this is true in the lab as well.

But consider the common case where there are multiple recipes for the same dish. Here one could ask about the variations, or even discuss them, because there is a lot of discussion on the web. I have my favorite experts, so I could ask how they do it. The computer could also know what I have in stock, as in “do we have any walnuts, have they been opened, and how old are they?” (Here I am reminded of the old Monty Python joke that the reason to get a PC was to know how many hats you have. That was before the web.) Of course these are longer-term prospects, but we are well along with question-answering technology. (You have not said which time frame you want to discuss. See below.)

I never said anything about not being able to look away for an instant, although that would be a good case indeed.

I assumed that was what you meant by, “The heads up display function is potentially useful wherever one needs information but does not want to take one’s eyes off the work at hand.” Usually what one does is print out a copy of the protocol onto a piece of paper and set it on the bench next to you (or tape it to the wall) for easy access. The upside of using paper is that it is disposable, as one is often working with radioactive materials, caustic chemicals, or messy animals/tissue samples. It’s a lot cheaper than an expensive pair of connected goggles, and that concept brings up another issue here: are these goggles meant to double as eye protection? Because they’ll either need to be able to withstand splashes of nasty things or require the experimenter to wear a second set of goggles on top of them.

Doing a laboratory experiment requires a good amount of preparation. Reagents are expensive, specimens and tissue samples rare and valuable. If you’re just figuring out which protocol to use, and whether you have the reagents to do the experiment while you’re in the act of doing it, something has gone terribly wrong. All of the things you mention are useful, but again, there’s no use case described where doing it via Google Glass instead of a laptop offers anything superior. Those are things done in advance, not in the moment.

And the long term prospects questions should be less about time than about the necessity to develop other dependent technologies. If X somehow comes into existence, then Y might be useful. If X doesn’t yet exist, it’s hard to make a case for Y.

Well, that is what basic technologies are all about. The car did not take you where a horse could not; quite the contrary, actually. And the car required the development of a lot of attendant technologies, from roads to tires and refineries. Still, the car prospered. The functionality (and cases) I have described makes computer glasses a very promising technology.

Why are the things you’ve described superior using a wearable computing interface? I won’t argue that it’s not helpful to receive expert advice and troubleshooting, but what is the advantage of having it on your face rather than in your hand or on a screen directly in front of your face?

I have already answered these questions several times. Maybe we have exhausted the topic (or me). Also the nesting of comments seems to be off. Perhaps that is my fault.

I must have missed them, as the only thing I see in your answers where the Glass interface would make a difference is an instance where one cannot look away from one’s experiments, which you later said you didn’t say. I’m not asking about the concepts, I’m asking about interface design.

By the way David, it is entirely possible that the kind of case you describe, where one stands in one place and unhurriedly reads the settled instructions from a piece of paper, is not a place where these glasses would help. I am not claiming otherwise, merely trying to frame the space of functional possibilities. I have to think that there are lots of different existing lab practices and possibilities for new ones.

Put another way you seem to be describing cases where the lab work is cut and dried while I am thinking of cases where getting the lab work to work is part of the challenge. It might even be the whole challenge.

Understood. I’m just asking for specific examples, particularly specific examples that are do-able in the present moment. Let’s say you’re a salesman for this product, and I’m a lab head. Tell me why I should buy it rather than the exact same product from another company that’s delivered on the tablets and smartphones everyone in my lab already owns.

There is an important distinction to consider here. My first published paper (1970s, ASCE) bore the strange title “The engineering computer revolution, over or just beginning?” The point was that we had pretty well computerized most of the standard engineering analytical procedures, but now we were looking at computations that humans could not do. My example was finite element analysis using matrix algebra which required that our engineers go back to school.

So the really interesting question is not how this Glass technology will help us do what we now do, but rather what might we do that we presently cannot? Following complex instructions while not taking one’s eyes off the work sounds like a good candidate. So do cases where a lot of realtime sensor data is involved. (But you have to be careful not to be jammed by the input. I have seen that happen in military contexts.)

Assessing emerging technologies is only wishful thinking if you insist that it has to happen. I am not doing that. However and as I said, AI has a big role to play here but AI is something I track.

Whenever speculating about a new technology it is important to specify the timeframe. The one year, five year and thirty year possibilities are very different but they are frequently confused together.

I think the hands-free aspect has some merit in the lab. Tasks often require two hands, and sterile technique. It would be useful to have written methods accessible hands-free through Google Glass, or lists of samples to prepare. This would still be specific to information that can’t fit on a single sheet of paper, or perhaps can’t be carried about with you as you walk around the laboratory. I also think it would be very useful for quick photographic documentation of field or glasshouse samples. Most experiments don’t require huge high-res photos; in those cases Glass might provide a very fast and simple camera function. Thirdly, if it can browse simple websites hands-free, it would be a boon. I am always wanting some extra information mid-experiment that can easily be found online. I think it is all about being hands-free and always on.

I believe it is a lab of the present, for the VERY NEAR future.
It’s the natural evolution of the hardware. Potential to radically transform industries like healthcare and education!
See my links.

Natural evolution up to Google Glass:
http://youtu.be/Psq-T2O0LDs

#GoogleGlass vs. #Healthcare http://t.co/V46j9XTCQc via @ZGJR (a new post with my vision for the healthcare equation and #MedEd as well)

Two #TEDx talks on wearable #GoogleGlass #mHealth:
“OK Glass: I need a surgeon” http://youtu.be/fo3RsealvGI
“OK Glass: Disrupt HC” http://youtu.be/DVzkw7y4_u4

#GoogleGlass in #Surgery http://t.co/W0EJQy9U8s
#MedEd: “OK Glass: Teach me Medicine!” http://t.co/0vYPZcrzKk

My blog, RGROSSSZ.wordpress.com, covers #Tech #Innovation in #Health #mHealth and #MedEd.

Thanks,
@ZGJR
