For many of us, augmented reality is primarily associated with gaming and other forms of online entertainment. But it is also increasingly being used in scholarly publishing — in expected and unexpected ways. Springer Nature has been experimenting with this, and anyone who visited their booth at this year’s Frankfurt Book Fair (FBF) had the opportunity to experience the results firsthand. In this interview, their Senior Manager of Semantic Data, Markus Kaindl, and Head of Innovation, Martijn Roelandse, answer some of my questions about mixed realities, virtual reality, and augmented reality in scholarly publishing and tell us about some of their work in this area.
Let’s start with the basics! What exactly are mixed realities, virtual reality, and augmented reality and how do they differ from each other?
MK: The key differentiating factor is the level of immersion a user experiences when engaging with these technologies. Let’s start with virtual reality (VR), which fully immerses the user in an entirely artificial digital environment. Adding reality into the mix, you get augmented reality (AR), which overlays virtual objects onto a real-world environment. Finally, mixed reality (MR) doesn’t just overlay the real world, it also anchors virtual objects in it, so that you can move them around in the real world.
MR: Each one of these has distinct use cases. For our own experiment at FBF we used VR to create a virtual world where you can read your book. For AR, think of heads-up displays in cars; probably the most well-known example of mixed reality is Pokémon Go.
You recently participated in a hackathon on Data Visualization & Mixed Reality in Research Publishing at the Microsoft Reactor. Data visualization is already somewhat well established in scholarly communications — are these other forms of technology essentially an extension of this?
MK: Yes, they can be an extension of established data visualization methods, but in a much more intuitive, dynamic, and intriguing way that is not possible with regular two-dimensional approaches (be it the visualization of a molecule on paper or even one rotating on a screen). But more than that, by adding dimensionality and virtuality, this technology can actually enable applications that wouldn’t have been possible previously — like walkable knowledge graphs or a fully holistic experience when reading an article or chapter on paper.
MR: Just imagine you’re a neuroscientist back in the day (like myself) and you’re trying to figure out the three-dimensional shape of a neuronal structure. Virtually impossible! The solution then was serial sections: tracing the outlines and reconstructing the vectors in a wireframe. Some people went rogue and turned to crowdsourcing the tedious work of outlining all the synapses, which became Eyewire, a game to map the brain. But in the end it was still just projecting something 3D onto a screen. Now, by using, for example, ConfocalVR, you can look at a structure in VR together with a colleague while discussing your research question.
Given what I assume are the high costs of developing, implementing, and using this technology, what do you see as the greatest benefit of using this technology in scholarly publishing?
MK: The collaborative environment that allows multiple researchers to work together in the lab, the enriched reading experience for authors doing their research, and the delightful visualization experience when combining multiple data sources in a truly new and unique way — all are examples of why scholarly publishers should engage with this emerging technology to figure out where their customers can benefit most.
MR: And actually the costs are dropping very quickly. All of these new technologies are becoming mainstream. For example, some supermarkets now give you a VR headset if you spend enough money, so you can watch your favorite talent show in VR. The same is true for our industry. Warp Industries has created an easy-to-use storyboard application that lets you script your own e-learning or training on the fly, and after a day’s recording with a 3D camera they’ll have your own VR training module ready at a very reasonable price.
Are there particular disciplines or fields that are more likely than others to adopt and benefit from this technology? Why (or why not)?
MK: In the lab or a clinical trial scenario, where researchers need both hands free but still rely on relevant information being displayed, this technology is going to be crucial. Practicing doctors might be another target group that could hugely benefit from it. The technology will most likely see more use in the STM disciplines, which are heavily dependent on the visualization of data, and less in some humanities disciplines such as history and philosophy. However, we’ve seen some fantastic results in education, such as “Mathland: Play with Math in Mixed Reality” from the MIT Media Lab, and a very simple demo of how atoms combine into compounds, which could help pupils learn chemistry in an intriguing way. Psychology is an important area of focus as well, e.g., trying to understand how our memory works with the aforementioned “neuroBook” or the MIT Media Lab using AR for memorization.
MR: Indeed, “the lab” is the most likely place for adoption of this technology. The ‘hacks’ created during our Hackathon ranged from a real-time augmented reality overlay that aids in the discovery of scientific papers related to objects in the world around us, to VR visualization of and interaction with protein structures, to a mixed reality globe that highlights data about the world’s coral reefs, and more. However, I would think that applications in the psychology and behavior space would also work well, as to me a VR training like the one described in the previous question is far more “in your face” than e-learning on a screen.
Can you share some examples of how this technology is already being used — at Springer Nature and beyond?
MK: At Springer Nature we have implemented a first prototype of spatial reading, allowing users to browse and read a book in virtual reality, as demonstrated at this year’s FBF. While visualization of proteins, targets, and hormones in lab environments is starting to get established, we also see other disciplines like geology and earth sciences building applications to explore soil layers in virtual cave setups, for example.
What are the main barriers to adoption — both by publishers and by researchers?
MK: Initially I would have said the price, as it was quite expensive to get started. But with more and more suppliers entering the market, and big companies like Microsoft, Facebook, and Google making their hardware, software, and experience available, this technology is getting affordable. Now the biggest challenge really seems to be applying it to convincing use cases, and delivering the right data to the right users at the right time.
MR: Price indeed, but probably also psychology. Do I feel comfortable wearing such a headset and does it bring me enough added value? This is probably where Google Glass failed as there wasn’t much to do with it other than to spy on others…
And what are the opportunities, for example, in terms of making research results more accessible?
MK: The biggest opportunity is the interactivity and responsiveness of applications when applied to the grand challenges of humankind (like humanitarian aid, environment, health, and an inclusive world). During the Hackathon we saw prototypes being developed that visualized Springer Nature publications about dying coral reefs on the globe; imagine an overlay with their health, funding, and protection status as well.
MR: In my opinion the biggest opportunity is twofold. First is adding a layer of information in mixed reality. This has huge potential in applied spaces like complex mechanics, surgery, etc. The opportunity for scholarly publishing is probably more in providing the content for that layer of information. Second is education, where I think we have only just started to scratch the surface. This ranges from pre-school education to highly interactive training for medical doctors, with remote training and support for distance learning also within scope. Finally, this technology also offers students the possibility to “touch” and manipulate objects, generating a greater understanding of them, as well as the ability to interact with data sets, complex formulae, and abstract concepts.
Realistically, how well established do you think this technology will be in scholarly publishing within, say, five years’ time?
MK: I believe that in five years’ time it will be increasingly common to read a paper while additional information is displayed as part of a mixed reality experience, simply because the technology will be cheaper and simpler to use. Furthermore, in the area of teaching and learning, the effects of virtual and augmented reality will be widespread and well established. In scholarly publishing overall, I don’t think traditional ways of writing and consuming will be substantially challenged, but the experiences around them will be made a lot more delightful and immersive.
MR: I fully agree with Markus: as said, mostly in enhancing the content and opening up new opportunities in education.
Disclaimer: opinions expressed by MK and MR are their own and do not necessarily reflect those of their employer, Springer Nature.