On June 27th of this year, Google delivered what is currently the state of the art in keynote demonstrations. At the Google I/O conference, Sergey Brin interrupted Google Senior Vice President Vic Gundotra's talk about the latest developments with Google+ in order to demonstrate the Google Glass project. What happened next was spectacular. A live six-person "hangout" on Google+ turned out to be taking place in a blimp one mile above the Moscone Center in San Francisco. Furthermore, the hangout video was being delivered via Google Glass prototypes, and, as you will see in the video below, jumping out of the blimp turned out not to be an impediment to the communal video experience. Neither did bike stunts, or, for that matter, abseiling down the side of the building. All of this went out live, without a glitch, streamed across the globe. Impressive, most impressive.
Google Glass is not the only thing Google has been up to this year. Their latest release of the Android operating system has their answer to Apple’s Siri assistant (Google Voice Search), as well as a digital butler called Google Now. They also unveiled a major shake-up of their flagship search interface with something they call Google Knowledge Graph. That’s quite a bit to explain and unpack, and perhaps you are wondering why these things are the subject of a blog for scholarly publishers and other interested parties.
To try to answer that, let me take you back to October 2011 and that month's "Ask the Chefs" feature. The question was "What do you think is the most important trend in publishing today?", and Kent's thoughts were what sprang to mind as I was trying to digest what Google was up to. Here they are again:
The most important trend for scholarly publishers is the integration of information into displays utilized at a point much closer to where the action is — in medicine, it’s the bedside or ward; in science, the lab or bench; in education, the classroom or virtual classroom. While we continue to churn out articles, synthesized information providers are taking the salient parts, integrating them into other systems, and generating value . . .
In addition, I was musing on the theme of the 34th SSP Annual Meeting, "Social, Mobile, Agile, Global: Are You Ready?", which took place before the Extreme Social Networking! stunt recorded above.
Now, before we unpack the Google stuff, let's take a quick look around at the rest of the players battling for control of the Internet. Apple are a hardware company, more or less tightly coupled to some strong design principles that define their user interface. That approach has served them well in the five years since they launched the first iPhone. This year, the focus was on a better display; nice, but not exactly earthshaking. It's also worth calling attention to what appears to have been an act of hubris — in order to end a relationship with Google early, they decided to try to go it alone with geolocation/mapping functionality that has had considerable teething problems due to issues with the underlying data.
Microsoft are gearing up to launch Windows 8, featuring "the user interface formerly known as Metro." They are trying to bridge the desktop and tablet user interfaces (there's that word again) with one operating system and set of design metaphors. It's not clear how successful that is going to be. They are also getting into the hardware business with the forthcoming "Surface" tablet. In short, they are playing catch-up. Again. As a sidebar, Microsoft do some amazing things. Kinect is a genuinely impressive piece of kit that has sadly struggled to make wider inroads, despite some fascinating proof-of-concept uses of the tech. They also have some brilliant image manipulation programs (Photosynth and Microsoft ICE are outstanding tools). I think these haven't been better integrated because Microsoft is still scarred by the EU antitrust case from the last decade.
Facebook has been busy proving that the old adage about a fool and his money being soon parted still holds true (even if you give out all the info needed to figure out the true market value of your offering). And! Yahoo! Are! Still! Alive! Amazingly! But it's been a while since we bothered to note what their plans would mean for us, hasn't it? Back to Google.
Now, an important point to bear in mind as I try to summarise all the Google things I listed above (with a couple of extras thrown in just for fun) is that they are all interlinked. This makes things rather complex, so I’ve drawn a diagram to help you. Feel free to print it out and scribble on it, or even to wade in and add to it for the benefit of the rest of the Kitchen readers. It’s a Google Doc of course. [What Google Does Next Mind Map (click this link to open it)]
If you have the diagram open (it should open in a new window), the top layer shows the current Google interfaces; some are still at the experimental stage. Then we have a selection of Google data tools and properties. Below that is the Knowledge Graph. Then we have various data and content sources. Surrounding all of it is the Google Index and the computing power they can bring to bear on any issue they feel like solving. The arrows highlight some of the data flows; the green ones are more speculative than the black ones. Assume that any Google interface you use feeds back all sorts of info to Google via the "You are the product being sold" box at the bottom.
Let's start with Google Glass. This is more than a video camera mounted in the frame of some spectacles. It looks like it will be your personal heads-up display (HUD), delivering and accepting information to/from you in a context-aware manner. If they can pull this off, it's going to radically reconfigure our notions of what being connected to the Internet actually means. If you are a doctor, how useful is it going to be to pull up comparison images or recommended treatments whilst you are with a patient? Perhaps you could use a hand with a diagnosis when looking at the medical chart — a quick bit of OCR, and you are running the vital signs through an expert system. If you think that last one is waaay out there, here's an exercise for you. Get a business card. Fire up the Google Goggles application. See what it does with the business card. (Now you can go and scan all those cards you've never got around to sorting into your contacts the easy way.) Google Goggles is also rather good at identifying images: major landmarks, that sort of thing. Combining the HUD of Google Glass with the rest of the ecosystem brings up all sorts of possibilities here. And Google is very serious about it. It looks as though Glass will interface with whatever mobile devices you have on you (I assume that means Android, but let's not get bogged down there), thus decoupling the interface from the information device.
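(For the curious: Google hasn't published how Goggles does the business card trick, but the basic OCR-then-parse idea is easy to sketch with open-source tools. The snippet below is a minimal illustration using the Tesseract engine via pytesseract; the file name and the crude regex field extraction are my own assumptions, not anything Google has described.)

```python
# A minimal sketch of the "scan a business card into contacts" idea.
# This is NOT how Google Goggles works internally; it just illustrates
# the OCR-then-parse step using the open-source Tesseract engine.
import re

import pytesseract          # pip install pytesseract (plus the Tesseract binary)
from PIL import Image

def card_to_contact(image_path):
    """Pull rough contact details out of a photographed business card."""
    text = pytesseract.image_to_string(Image.open(image_path))

    # Very naive field extraction -- real systems use layout analysis too.
    email = re.search(r"[\w.+-]+@[\w-]+\.[\w.-]+", text)
    phone = re.search(r"\+?\d[\d\s().-]{7,}\d", text)

    return {
        "raw_text": text,
        "email": email.group(0) if email else None,
        "phone": phone.group(0) if phone else None,
    }

if __name__ == "__main__":
    print(card_to_contact("business_card.jpg"))  # hypothetical file name
```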
In thinking about the information device, it's worth considering that, so long as you are connected, you have at your fingertips one of the world's most powerful supercomputers. It isn't your device doing the voice processing or the image recognition; it's Google's servers. Take another look at that demo. Google did an epic stress test on their protocols for shifting information from Glass, through their infrastructure, and out to the rest of us, and they did it live, accelerating towards the planet at 9.8 m/s².
Next up, we have Google's self-driving cars and Google Maps. The reason these are worth paying attention to can be seen in the fact that Apple thought maps were things that just needed to look pretty, instead of understanding that if you have a device that can resolve your location to better than 10 m², you'd better have a dataset that can deliver to that. Here Google have done a variety of things. They've licensed datasets, bought companies with data (and appropriate technology), and then realised that the Google Street View cars can deliver a massive amount of further information to the dataset. There's a brilliant article on this in The Atlantic. Do take the time to read it, but for now I'll pull out a couple of key points for you:
- “Ground truth” isn’t just an idea that GIS (geographic information systems) geeks and cartographers obsess over. It really, really matters.
- They have some amazing data capture tools to help them with the ground truth stuff.
- You can’t do this all with machines. You must use people (I bet that one took some swallowing at Google HQ).
And here’s my favourite quote of the piece, from Manik Gupta, the Senior Product Manager for Google Maps:
If you look at the offline world, the real world in which we live, that information is not entirely online. Increasingly as we go about our lives, we are trying to bridge that gap between what we see in the real world and [the online world], and Maps really plays that part.
Reading this article was when it hit me — Google understands what mobile means on a truly fundamental level. It's not actually about form factors, the patentability of rectangles, the pixels on the screen, or what combinations of finger gestures you can or cannot use. It's about you, and the information you want, and the information you need, and the information you don't know you need until it's there in front of you, contextualised for your specific location in space and time. Once you've arrived at this conclusion, building the tech for self-driving cars is a fairly logical step. Just think of all the monetisation opportunities.
Of course, context is far more than just space and time. It's a unique thing that revolves around your particular set of needs, wants, and desires (here you may wish to take a moment to consider John Battelle's Database of Intentions argument, still a great read all these years later). Well, Google is rather good at that, which explains the other two developments Google have been working on.
If you are the owner of a Nexus 7 tablet, you may have already been exposed to Google Now. It looks at the stuff in your calendar (plane flights, for example), or where you are now in relation to where you want to be, and gives you information that should be of use to you. I've seen it tell a colleague and me the latest time we could set off from the meeting we were in, in order to get to our next appointment. It did this by working out where we were driving to next and factoring in the data it had on traffic conditions (using all that lovely maps data, of course). I have to say it was rather impressive. Google Now also makes use of the all-new and improved Google Voice Search, which is capable not only of understanding "What's the weather this weekend?" but of giving you the answer for where you are going to be (if you've given Google that data, of course) rather than where you are. It does these things through a combination of natural language processing and the latest of Google's developments — the Knowledge Graph.
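(The "latest time to set off" nudge is conceptually trivial once you have a traffic-aware travel-time estimate, which is, of course, where all that lovely maps data earns its keep. Here is a toy sketch; the travel time is simply passed in as a number of minutes, standing in for the routing and traffic data Google Now actually holds.)

```python
# Toy sketch of a "latest time to leave" reminder, in the spirit of Google Now.
# The hard part in the real system is the traffic-aware travel-time estimate;
# here it is just an argument, an assumption made purely for illustration.
from datetime import datetime, timedelta

def latest_departure(appointment_start, travel_minutes, buffer_minutes=5):
    """Return the latest time you can set off and still arrive on time."""
    return appointment_start - timedelta(minutes=travel_minutes + buffer_minutes)

if __name__ == "__main__":
    next_meeting = datetime(2012, 10, 8, 14, 30)   # hypothetical 2:30pm appointment
    leave_by = latest_departure(next_meeting, travel_minutes=42)
    print("Leave by", leave_by.strftime("%H:%M"))  # -> Leave by 13:43
```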
Knowledge Graph is a "Very Big Thing" for Google. Up until now, Google has only dabbled at the edges of the semantic web. Their motto has always been "simple algorithms, and lots of data." In some ways, Google's search is very dumb; it doesn't "know" in any meaningful sense what somebody is searching for, it just has a mind-bogglingly massive database of statistically meaningful correlations to look through, along with some clever ways of going through that database with many parallel queries. It also does some clever-but-dumb work to pattern-match against previous statistical correlations you clicked on (that's your search history). The Knowledge Graph changes that. For the first time, Google is starting to build on the foundations laid by others (open linked data sets), as well as its own efforts, to systematically codify what things actually are. You can see this in the results it now brings back if you search for people or organisations. This is more than just ingesting Wikipedia, by the way. If you want a real-life example of what all this semantic linked data stuff can actually do, the Knowledge Graph is your go-to example. So is Google Now. You don't have to know anything about the underlying technology; it just gives you information in context, when you need it, not just when you ask. With the Knowledge Graph, Google is parsing your query, delivering a set of search results, and, as part of that delivery process, choosing to give you what it considers to be facts pertinent to your search. This is Google as a destination. This is Google using its tech to leverage your search query (however it arrives) so that it can apply its tried and tested methods of filtering what data is important for that query, and then . . . funnelling the most visited things into the process of building more concepts for the machine to feed back to us. It's exactly the same workflow as they use for the maps project. Scroll back up and read that amusing quote about real things not being online. Turns out you can fix that "problem."
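(If "codify what things actually are" sounds abstract, here is the shape of the idea in miniature: facts stored as subject-predicate-object triples, queried by pattern. This is a toy sketch of linked data in general; the handful of facts and the query function below are purely illustrative, not a description of Google's actual infrastructure.)

```python
# Toy illustration of the linked-data idea behind a knowledge graph:
# facts stored as (subject, predicate, object) triples, queried by pattern.
TRIPLES = {
    ("Marie Curie", "type", "Person"),
    ("Marie Curie", "field", "Physics"),
    ("Marie Curie", "field", "Chemistry"),
    ("Marie Curie", "spouse", "Pierre Curie"),
    ("Pierre Curie", "type", "Person"),
    ("Physics", "type", "Discipline"),
}

def query(subject=None, predicate=None, obj=None):
    """Return every triple matching the given pattern (None acts as a wildcard)."""
    return [
        (s, p, o)
        for (s, p, o) in TRIPLES
        if (subject is None or s == subject)
        and (predicate is None or p == predicate)
        and (obj is None or o == obj)
    ]

# "Tell me about Marie Curie" -- roughly the fact panel beside the search results.
for fact in sorted(query(subject="Marie Curie")):
    print(fact)
```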
Google As Destination. Google As Information Provider. There’s quite a bit to come to terms with in those two thoughts. My first thought was, How do we know they’ve fed the machine with facts they had the rights to use in that way? My second thought was, How on earth would we go about answering that, given that Google is probably the single largest holder of indexed facts and concepts on the planet? Then I started to wonder about the ramifications of turning facts, transcribed into text, back into facts so that they can be combined into complex statistical models in order to feed facts back to people who want to know about things. They have a motto, so that’s all right . . .
So where are we? Mobile isn’t about app stores and gestures, pixels and design motifs, pokes and likes and +1s. It’s about you, and where you are in time and space and interest. Google has decided that to serve these new needs, they need to be in the hardware business and also to actively build a very large scale semantic web. But even Google can’t build all of it (for reasons to do with the speed of light, if you really want to know), which is good news for us. The other thing that’s good news for us is that there’s nothing to stop us doing the same basic things. We can build knowledge domains. We can build contextual information systems for busy scholars with multiple needs.
But we'd better start seriously thinking about this: The future of information isn't about refinements in search and retrieval sprinkled with dustings of presentation magic. It's about active systems; predictive systems; things that model and monitor what a person is doing and accessing (watch as Google Goggles reads the academic paper at the same time as you do), and how we'll build and deliver to those systems. I think we may want to pick up on ideas such as Conrad Wolfram's Computable Document Format (interesting but flawed), and go beyond that — helping scholars to truly publish their knowledge in a manner that better enables the context- and need-aware, hyper-localised, hyper-personalised future that is galloping down the road towards us. And then we'll want to build the value tools that really set the information to work. Because if we don't . . .
Discussion
20 Thoughts on "What Google Did Next"
I think you have an incorrect read on the Maps situation. John Gruber covers it well here:
http://daringfireball.net/2012/09/timing_of_apples_map_switch
Basically, the contract between Google and Apple was set to expire. Google had repeatedly refused to update the iPhone version of Maps to include features it includes on the Android version, things like Turn-by-Turn navigation, and Google had demanded more advertising space and more user data from Apple. Apple had long wanted control of their own key systems and made the leap sooner, rather than later, because, as you note, it takes user input to build a better maps system. Apple followed the exact strategy you outline above, licensing datasets and buying companies for several years. To catch up to Google, they had to bite the bullet at some point, and unfortunately that came sooner rather than later. Apple’s big mistake here was hyping Maps, rather than their usual under-promise, over-deliver approach. It should be noted that in the deal, Google has now lost an enormous portion of their Maps userbase, thus harming their ability to continually refine their product.
I also find it contradictory that you fault Apple for their strategy as a hardware company, yet laud Google, who seem to be moving into that area themselves. They’ve purchased Motorola to make their own phones, and are keeping a tighter and tighter rein on their Android partners (http://arstechnica.com/gadgets/2012/09/report-google-threatened-acer-forced-it-to-dump-rival-os/). And while you paint a rosy picture of Google’s invasion into every aspect of our lives, there’s no mention of the business model. How are all of these things going to bring profits to Google’s shareholders? They are, at heart, an advertising company. And I’m not sure I want an advertising company running my life and sticking its nose into my business.
One other confusing point in your post: can you explain why Google Glasses are more than just a gimmick? In the doctor scenario you list, couldn't everything you're suggesting be done with a tablet or a phone, rather than through a pair of glasses? What is the advantage in separating out the interface from the controls for that interface? If we're moving from a desktop with a separate monitor, keyboard, and mouse, to a laptop where they're integrated, to a tablet/phone where the display is the interface, isn't this a step backwards?
Ok, right. I'm not faulting Apple for their strategy per se, just noting that right now they seem to be in incremental update mode. I've done some reading around on the whole Apple-gets-into-maps thing, and it seems from those in the know (a few GIS consultants out there who seem to have some knowledge of what Apple was up to) that they badly underestimated the amount of data you need. I think Apple's hardware-first approach is different to Google's data-first approach. I'm only focussing on Google right now because I think they are the ones doing interesting things (as Apple did when they moved smartphones out of the enterprise and into the consumer space). It remains to be seen what Apple has up its sleeve in the 'magical and revolutionary device' department. Steve Jobs said something prescient at the launch of the iPhone: "We're 5 years ahead of everybody else." It's 5 years since that statement. I wonder what's next.
Google's business model is adverts, AKA providing advertisers with data about you so they can target you more precisely. I think Google see adverts as just another class of data to be presented to the user at the appropriate time, so all this stuff helps them do that. Yes, all your points about privacy hold true, and no doubt we could both hold forth at length on the subject. But here's a truth I find depressing: nobody else seems to care (pop quiz – let us know in the comments if you do!). Not at scale. Except the Germans, who have some interesting legislation in this area.
The business model for the self-driving cars is the one that really has me stumped. Perhaps they'll have some sort of API for all their data that car manufacturers can just tap into.
By the way, whilst I'm impressed at Google's potential to insert itself into our lives, it does take us to some rather interesting places. Cory Doctorow has an excellent short story on this (http://www.scroogle.org/doctorow.html) which is well worth a read. For this article I thought I'd do one side of the argument: the potential benefits. In James Gleick's "The Information" he writes at length about how the thought leaders of the day lamented the arrival of the printing press because it would allow any old person to be able to read someone's thoughts… So how big are the downsides? That's a function of society, isn't it? Society and the ability of our governing systems to effectively manage these things – oh wait.
Google Glasses could be a gimmick. But leave the particular instance of the idea and think about the concept (after all, Apple had two goes at the iPad – remember the Newton?). My point about Google Glass is that, as an interface, its aim seems to be to get out of the way. Google Glass would take us away from that annoying modern phenomenon of walking down a street dodging all the people staring intently at a small rectangular box held in their hands. If you've played with any of the AR astronomy apps, you can imagine Glass telling you what's in the sky, what buses run from a bus stop, where the train is going, and so on. Just there, when you need it. IF it works (and you are right, there's a big IF there), you wouldn't want to go back to having to whip out your phone, fire up an app, input a query….
Have I answered your questions?
Sort of. Glad to hear an acknowledgment of the downsides to all this, as the post reads a bit, “Yay Google!”
First, I think Apple's iterative nature is one of the keys to their success, and is how the company has operated for a long time. Glenn Fleishman puts it well here:
http://tidbits.com/article/12856
Apple makes its money over the long term not just by introducing disruption, which would mean flash-in-the-pan products that spark and then fizzle, but by seeing disruption through into stable releases, each with significant improvements that appear to be incremental to a product’s design and capabilities.
Google has a serious problem in that area. They continue to improve things like Android, but because of their business model, they are unable to introduce those iterative improvements to the majority of their customers. Three months after the release of their latest operating system, only 1.8% of users have access to it (http://mobilesyrup.com/2012/10/02/ice-cream-sandwich-and-jelly-bean-combine-for-25-of-total-android-users-google/). Actual real-world delivery is an important part of doing business.
And other Google products seem to be in decline as they iterate. Google's search, their bread and butter, has been watered down and overrun by advertisers so much that it's becoming unusable. Google Chrome, which started as a fast, lean alternative to other browsers, has become ridiculously bloated over time. Google has always had great engineering but really poor design. When you're talking about mass-market products, they need to be understandable and usable by people other than alpha-geeks.
Google seems to be in a wildly flailing mode, running off in as many directions as they can imagine, then throwing it all up against the wall to see if any of it sticks. How many revolutionary products have they introduced and then slowly discontinued? They strike me as being where Microsoft was for the last decade. MS made two incredibly successful products, Office and Windows. Lightning strikes like those are hard to come by, yet stockholders demand that you keep producing them. So MS invested wildly in TV set-top boxes, watches, anything they could think of that might stick. Google is in the same place. They have one amazingly successful product, search, and one amazingly successful business model, ad sales. Can they beat the odds (like Apple) and find a second revolution?
The Maps debacle was a misstep by Apple, to be sure. But it was done for solid business reasons, not out of "hubris". And frankly, the new Maps has worked just fine for me every time I've used it so far (really happy to finally have turn-by-turn), and Google Maps let me down all the time.
But beyond the business side of things and looking at the big picture, yes, it is sad how willingly people accept this invasion of privacy, how little they value themselves that they’re willing to give everything away for a shiny toy. I still maintain that “privacy is the new luxury”. Those who can afford it will keep their lives more and more sequestered.
And no, I don’t want a constant virtual Google layer between me and the real world. I’ve made a conscious effort to limit my screen time. I’ve come to greatly value interactions with real people, and interactions with real objects and places much more than virtual ones. I would much rather have to make an effort to remove myself from reality than have that as the default state. I realize this makes me a luddite refusing to move forward into our Aspergian/WALL-E future, but I can live with that.
So this was originally a much longer article…
Google does seem to have an issue with implementing things. Your point about Android is well taken. I deliberately didn't mention the whole Walled Garden business, just because it gets people frothing at the mouth. I think Google's vision is breath-taking, not in the 'yay Google' sense, but in the 'only they would be so bold as to just go and do it' sense. The consequences of that are also breath-taking (both good and bad). My point here is that the ideas and concepts are solid, and if you think (as I do) that publishing is about getting people to stuff, then you should be sitting down and thinking hard about what they are up to.
Apple – reader alert – here is an example of two chefs disagreeing with each other (a bit). My UK network has cursed the Maps app loudly. I haven't upgraded my iPad. My observations of Apple boil down to "Steve Jobs = good and successful. No Steve Jobs = bad and nearly went bust". And Steve's no longer with us. Time will tell. I'm guessing here that my quick sketch around the other players is what's sticking? The article you linked to is a good one, I think, and it doesn't contradict my argument – Apple are hardware first, with things to support that; Google are data first, with things to support that. This is probably why they cannot do design very well. Hard to break down aesthetics into numbers.
To go back to your previous comment and the doctors. I vividly remember, when I was at what became BioMed Central, noting the massive penetration of Palm PDAs (remember them?) in the medical community. I was curious as to why. It was explained to me thus: doctor pulls out PDA to look at drug interaction info, recommendation info, or treatment protocols – patients and relatives are impressed and 'happy', as the doctor is obviously a cutting-edge sort of person. Doctor pulls out a well-thumbed book with the same information, and everybody worries that they've got the person who doesn't know what they are doing. Anecdotally, medics at the cutting edge have a fine eye for tech that helps them get things done.
Fair enough. There’s a killer app though, behind the Docs using Palm Pilots (now evolved to iPhones and iPads), and that’s the instant access to the information, the portability and the ability to instantly update a patient’s records. The glasses, at least so far, don’t seem to provide a similar killer app, other than overlaying advertisements on everything the doctor sees (“this fellow looks glum, how about some Zoloft?”). They’re in many ways a solution looking for a problem, and very often that’s not a successful approach (see the Segway as an example). I could be wrong though, Lord knows I’ve been wrong before.
I think the whole Jobs = God thing is a bit oversimplified and absurd. Jobs oversaw all sorts of massive failures, including the overpriced NeXT, the cracked Cube, the miserable MobileMe, and of course, everyone's favorite social network, Ping. No company has a perfect record, and you just have to mix in a massive win every now and again to make people forget about a lot of failures.
And I’ll conclude with a quote on walled gardens from science fiction writer Warren Ellis:
“Several people have asked me if I’ll post links to the columns on Tumblr, because they actually kind of only read Tumblr. On the other hand, engagement levels on Tumblr are terrible — it’s like LiveJournal back in the ’00’s, where a lot of people were really resistant to clicking any link that took them out of LiveJournal. People like walled gardens. It’s why people invented actual walled gardens in the first place. Using that as a pejorative, in digital culture, misses a huge sociological point. It also misses Facebook.”
There’s a killer app though, behind the Docs using Palm Pilots (now evolved to iPhones and iPads), and that’s the instant access to the information, the portability and the ability to instantly update a patient’s records. The glasses, at least so far, don’t seem to provide a similar killer app, other than overlaying advertisements on everything the doctor sees (“this fellow looks glum, how about some Zoloft?”).
It seems to me that, for doctors in particular, the killer innovation behind Google Glass is the fact that you don't hold it in your hands – the better to get information while ministering to your patient. If there's a quantum difference between phones/tablets and glasses, that's it. It's not the content or the service or the speed – it's the 50% increase in availability of opposable thumbs.
But that’s not how they work, as far as I can tell. There’s a limited set of controls on the arms of the glasses, then, as David notes in the post above, you’re still using your hands either directly on the device or on some other device where the actual interface lies. You’ve just separated that from the display. Presumably there will be some point where things like eye movements and voice commands would let a device be completely hands free, but I’m not sure I’d hold my breath.
I don't think the interaction system has been finalised. My understanding is that the first release of Google Glass will be a developer test version. David Pogue at the New York Times indicated that a limited set of eye-tracking 'gestures' were up and running. Currently there are some buttons and touch surfaces on the side of the glasses. Voice commands are up and running, I think. What the final UI will be… Well, given the public timescale, I reckon they are banking on Moore's law to enable some things that they can't do right now. My point about using the additional device is more about the processing power on the phone, and then on the cloud, rather than having to also use the phone/tablet as well.
Rick, yes an increase in usable thumbs is spot on, that’s why this is all about the devices getting out of the way.
David, if Siri works for a device that’s held in your hand, why can’t it work for a device that’s perched on your nose?
Rick – I suppose it depends on how you define "works". I'm not all that impressed with Siri, and like most users, I played around with it for a few days when it came out, then pretty much stopped using it once I realized that it was less accurate and efficient than other interface methods. I can click a button or a link a lot faster and a lot less awkwardly than I can ask an imaginary person to click that link for me.
But even with an improved Siri, or the sorts of interfaces David describes in his responses, I worry that you're creating a disconnect between doctor and patient. There needs to be a human relationship between the two, and much of a doctor's job is to listen to the patient and make them feel that someone cares and is taking care of them. If you add artificial layers between the two parties, that detracts from the interaction. Is your doctor really listening to you, or is he looking up his stock prices on his glasses?
I think there’s value in the doctor having to make an acknowledged break from that interaction to look something up. If you’ve ever had dinner with someone who texted throughout the meal, you get a sense of what I mean.
David C: If a doctor is, for example, performing surgery on me, I want him to have the most immediate and complete access possible to any information that will help him do it better; ideally, I’d like him to be able to access that information while still holding on with both hands to whatever part(s) of me he needs to be manipulating. That capability is much more important to me than the quality of our personal relationship. As for Siri’s capabilities: I agree that they are currently fairly rudimentary. Siri is also very young. I see no reason not to expect that voice recognition will improve very quickly.
A doctor performing surgery is a different matter, as the patient is unconscious, so there’s less worry about creating a personal disconnect with them. But frankly, I’d rather have someone else in the room looking up the reference material for the person with the knife in their hands. But I can see the value in having an overlay of an MRI or CAT scan projected onto glasses as one operates.
That’s a bit of a niche market though. Not sure how many ads Google would have to project onto that overlay to make back their investment…
For what it’s worth, David Chartier has put together a list of things that happened on Steve Jobs’s watch:
http://davidchartier.com/an-unordered-list-of-things-that-happened-on-steve-jobss-watch
@ D Crotty – I was lucky enough to go to the Eyeo conference this year and see John Underkoffler (creator of the Minority Report UI). He was quick in his talk to suggest that the tablet touch interface was actually a backwards step from the keyboard. The use of gesture, voice, and eye movement is the advancement of the user interface, not touch, and heads-up displays (HUDs) are the next logical display.
However, putting aside the use of technology, @ D Smith in his original article above touches on an important point: the contextualising of information to the exact requirements of the user. What information providers will have to be careful about is that just because I am using a mobile device does not mean that 100% of the time I want "local" information. Karen McGrane wrote a lovely piece on this earlier this year: http://karenmcgrane.com/2012/07/10/mobile-local/
Charlie Rapple, in her ALPSP 2012 presentation, was also keen to point out that integrating both the device and the information into the workflow of the scientist will be key to the adoption of the technology.
Personally, I cannot wait until I have a pseudo-integrated device so that I can go about my daily routine without having to pull out a phone and look at a screen. For it to be on a HUD would match what I already have available in my snowboarding goggles. I don't wear glasses normally, but would happily do so if it meant carrying fewer devices around. I am already wishing that my London Underground app – the one that tells me which carriage number and side of the carriage I should be on to speed up transfers between platforms and exits – was on a HUD.
All that being said, although I am happy to utilise location services and the like, I still hesitate to provide location data back to any service. For everything to work appropriately, are people's reservations about privacy going to have to change?
I would think that the touch interface, with its reliance on specialized gestures, would be more of a stepping stone toward the sorts of Minority Report interfaces, helping make the transition from actual touch to touching and moving virtual objects.
But I think you bring up an area where Google’s approach may be unwelcome in the research community. Much of what Google is doing is attempting to be predictive. As you note, when you connect via mobile device, Google tries to give you what it thinks you want. Similarly, their search engine trains itself over time to offer you results that are similar to what you’ve clicked on in the past. That “Filter Bubble” can be the exact opposite of what a researcher wants when they’re trying to break new ground.
It’s a reason I’ve switched from Google to DuckDuckGo (http://www.duckduckgo.com) as my default search engine. They don’t track, and instead offer results based on best possible match for my search terms, rather than weighting things toward what Google thinks I want to see.
More on privacy and selling your information for a pittance here:
http://blogs.hbr.org/cs/2012/10/a_penny_for_your_privacy.html
Google's mission is to monetize knowledge; advertising is just one way of accomplishing it. They offer an attractive teaser, free services, but in return they gain access to the intimate details of our activities. Apple's moves on maps, the publishers' and authors' lawsuits, and the general resistance to Google TV on the part of the entertainment industry all represent push-back from industries that do not want to lose control of the family jewels.
One lesson from the history of technology: "How the mighty fall." Remember when IBM was the monopoly that was going to stifle innovation by dominating all of information technology? Then there was this start-up called Microsoft that grew to become another world-dominating, innovation-stifling monster; one that is rarely even mentioned as a trendsetter in computing these days. Apple is now the world's most valuable company, but the maps fiasco suggests that they too may be losing their vision and breaking the bond with their customers that was essential to their growth. Facebook's and Zynga's stunning ascents and descents suggest that they were largely Wall Street hype to fleece the unsuspecting. What is the intrinsic value in helping people to waste time? Where is Google on this curve? They do offer real value to many users, but the push-back is also real. Time will tell whether they achieve a sustainable balance or flame out.
Google Glass offers a compelling vision of linking content intimately with application in the real world. The future of academic publishing is not in attempting to link science to real-world application; that has never been a strength of academics. Rather, the core value of the academic literature is to maintain the integrity of knowledge. With all of the pseudoscience cults, industry-led disinformation campaigns, and other assaults on knowledge, this is a nontrivial mission. The proliferation of low-impact publishing venues is a step backward that will hurt the entire industry. The open access versus proprietary fight is a distraction. Both sides need to focus on defending quality in an increasingly hostile environment.
With reference to part of your final para: one of the things I've seen recently is Anita de Waard's work at Elsevier on text mining the scholarly corpus. Following the slides (as best I could!), it seemed to me that there are some interesting possibilities in being able to augment the scholarly literature beyond some simple keywords and concepts. If it's all describable, the data-driven approach I'm trying to describe could help you read a paper outside of your field (or in a foreign language). The idea of assertions driving queries to assist in verification is an attractive one. That would be a real-world application in an academic context, I think? http://www.slideshare.net/anitawaard/how-scientists-read-how-computers-read-and-what-we-should-do
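(To give a flavour of what "assertions driving queries" might look like in practice, here is a deliberately crude sketch that pulls candidate claim sentences out of an abstract by spotting common claim verbs. The real work referenced above uses proper discourse analysis; the verb list and the sample abstract below are just my own illustrative assumptions.)

```python
# Crude sketch of pulling candidate assertions out of a paper abstract.
# Real systems use proper NLP and discourse analysis; this just spots
# sentences containing common claim/evidence verbs.
import re

CLAIM_VERBS = ("suggest", "demonstrate", "show", "indicate", "conclude")

def candidate_assertions(abstract):
    """Return sentences that look like they are making a claim."""
    sentences = re.split(r"(?<=[.!?])\s+", abstract.strip())
    return [s for s in sentences if any(verb in s.lower() for verb in CLAIM_VERBS)]

if __name__ == "__main__":
    sample = (
        "We measured expression levels in 40 samples. "
        "Our results suggest that gene X regulates pathway Y. "
        "Further work is needed on dosage effects."
    )
    for claim in candidate_assertions(sample):
        print(claim)   # -> "Our results suggest that gene X regulates pathway Y."
```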