For many of us, augmented reality is primarily associated with gaming and other forms of online entertainment. But it is also increasingly being used in scholarly publishing — in expected and unexpected ways. Springer Nature has been experimenting with this, and anyone who visited their booth at this year’s Frankfurt Book Fair (FBF) had the opportunity to experience the results firsthand. In this interview, their Senior Manager of Semantic Data, Markus Kaindl, and Head of Innovation, Martijn Roelandse, answer some of my questions about mixed realities, virtual reality, and augmented reality in scholarly publishing and tell us about some of their work in this area.

[Image: Virtual reality glasses in a laboratory]

Let’s start with the basics! What exactly are mixed realities, virtual reality, and augmented reality and how do they differ from each other?

MK: The key differentiating factor is the level of immersion a user experiences when engaging with these technologies. Let’s start with virtual reality (VR), which fully immerses the user in an entirely artificial digital environment. Adding reality into the mix, you get augmented reality (AR), which overlays virtual objects on a real-world environment. Finally, mixed reality (MR) doesn’t just overlay virtual objects on the real world; it anchors them there, so you can move them around in physical space.

MR: Each one of these has distinct use cases. For our own experiment at FBF we used VR to create a virtual world where you can read your book. For AR, you can think of heads-up displays in cars, and probably the most well-known example of mixed reality is Pokémon Go.

You recently participated in a hackathon at the Microsoft Reactor, on Data Visualization & Mixed Reality in Research Publishing. Data visualization is already somewhat well established in scholarly communications — are these other forms of technology essentially an extension of this?

MK: Yes, they can be an extension of established data visualization methods, but in a much more intuitive, dynamic, and intriguing way that is not possible with regular two-dimensional approaches (be it the visualization of a molecule on paper or even rotating one on a screen). But more than that, by adding dimensionality and virtuality, this technology can actually enable applications that wouldn’t have been possible previously — like walkable knowledge graphs or a fully holistic experience when reading an article or chapter on paper.

MR: Just imagine you’re a neuroscientist back in the day (like myself) and you’re trying to figure out the three-dimensional shape of a neuronal structure. Virtually impossible! The solution then was serial sections: tracing the outlines and reconstructing the vectors in a wireframe. Some people went rogue and appealed for a crowdsourcing approach to the tedious work of outlining all the synapses, turning that into Eyewire, a game to map the brain. But in the end it was still just projecting something 3D onto a screen. Now, using ConfocalVR, for example, you can look at a structure in VR together with a colleague while discussing your research question.
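To picture the older serial-section workflow Martijn describes, here is a minimal, purely illustrative Python sketch (not from the interview; the contours are synthetic stand-ins for hand-traced outlines) that stacks 2D section outlines into a rough 3D wireframe with matplotlib:

```python
# Illustrative sketch only: synthetic circular contours stand in for
# hand-traced serial-section outlines of a neuronal structure.
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # registers the 3D projection on older matplotlib

n_sections, n_points = 20, 60                 # number of sections and points per traced outline
theta = np.linspace(0, 2 * np.pi, n_points)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")

for z in range(n_sections):
    radius = 1.0 + 0.3 * np.sin(z / 3.0)      # synthetic variation in outline size across sections
    x, y = radius * np.cos(theta), radius * np.sin(theta)
    ax.plot(x, y, zs=z, color="steelblue", linewidth=0.8)  # stack each outline at its section depth

ax.set_title("Wireframe from stacked serial-section outlines (synthetic)")
plt.show()
```

The point of the sketch is simply that the result is still a 3D shape projected onto a flat screen, which is the limitation VR tools such as ConfocalVR are meant to overcome.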

Given what I assume are the high costs of developing, implementing, and using this technology, what do you see as its greatest benefit for scholarly publishing?

MK: The collaborative environment that allows multiple researchers to work together in the lab, the enriched reading experience for authors when doing their research, and the delightful visualization experience when combining multiple data sources in a truly new and unique way — all are examples of why scholarly publishers should engage with this emerging technology and figure out where their customers can benefit most.

MR: And actually the costs are dropping very quickly. All of these new technologies are becoming mainstream. For example, you can get a VR headset just by spending enough money at a supermarket, so you can watch your favorite talent show in VR. The same is true for our industry. Warp Industries has created an easy-to-use storyboard application where you can script your own e-learning or training on the fly, and after a day’s recording with a 3D camera they’ll have your own VR training module ready for a very acceptable price.

Are there particular disciplines or fields that are more likely than others to adopt and benefit from this technology? Why (or why not)?

MK: In the lab or a clinical trial scenario, where researchers need both hands free but still rely on relevant information being displayed, this technology is going to be crucial. Practicing doctors might be another target group that could hugely benefit from it. The technology will most likely see more use in the STM disciplines, which are heavily dependent on the visualization of data, and less in some humanities disciplines such as history and philosophy. However, we’ve seen some fantastic results in education, such as “Mathland: Play with Math in Mixed Reality” from the MIT Media Lab, and this very simple demo of how atoms combine into compounds, which could help pupils learn chemistry in an intriguing way. Psychology is an important area of focus as well, e.g., trying to understand how our memory works with “neuroBook”, or the MIT Media Lab’s use of AR for memorization.

MR: Indeed, “the lab” is the most likely place for adoption of this technology. The ‘hacks’ created during our Hackathon also ranged from a real-time augmented reality overlay that aids in the discovery of scientific papers related to objects in the world around us, to VR visualization of and interaction with protein structures, to a mixed reality globe that highlights data about the world’s coral reefs, and more. However, I would think that applications within the psychology and behavior space would also work well, as to me a VR training like the one described in the previous question is far more “in your face” than e-learning on a screen.

Can you share some examples of how this technology is already being used — at Springer Nature and beyond?

MK: At Springer Nature we have implemented a first prototype of spatial reading, allowing users to browse and read a book in virtual reality, as demonstrated at this year’s FBF. While visualization of proteins, targets, and hormones in lab environments is starting to get established, we also see other disciplines like geology and earth sciences building applications to explore soil layers in virtual cave setups, for example.

What are the main barriers to adoption — both by publishers and by researchers?

MK: Initially I would have said the price, as it was quite expensive to get started. But with more and more suppliers entering the market, and big companies like Microsoft, Facebook, and Google making their hardware, software, and experience available, this technology is getting affordable. Now the biggest challenge really seems to be applying it to convincing use cases, and delivering the right data to the right users at the right time.

MR: Price indeed, but probably also psychology. Do I feel comfortable wearing such a headset and does it bring me enough added value? This is probably where Google Glass failed as there wasn’t much to do with it other than to spy on others…

And what are the opportunities, for example, in terms of making research results more accessible?

MK: The biggest opportunity is the interactivity and reactiveness of applications when applied to the grand challenges of humankind (like humanitarian aid, environment, health, and an inclusive world). During the Hackathon we saw prototypes being developed that visualized Springer Nature publications about dying coral reefs on the globe; imagine an overlay with their health, funding, and protection status as well.

MR: In my opinion the biggest opportunity is twofold. First is adding a layer of information in mixed reality. This has huge potential in the applied space, like complex mechanics, surgery, etc. The opportunity for scholarly publishing is probably more in providing the content for that layer. Second is education, where I think we have only just started to scratch the surface. This ranges from pre-school education to high-level, interactive training for medical doctors, with remote training and support for distance learning also within scope. Finally, this technology also offers students the possibility to “touch” and manipulate objects, generating a greater understanding of them, as well as the ability to interact with data sets, complex formulae, and abstract concepts.

Realistically, how well established do you think this technology will be in scholarly publishing within, say, five years’ time?

MK: I believe that in five years’ time it will be increasingly common to read a paper while additional information is displayed as part of a mixed reality experience, simply because the technology will be cheaper and simpler to use. Furthermore, in the area of teaching and learning, the effects of virtual and augmented reality will be widespread and well established. In scholarly publishing overall, I don’t think traditional ways of writing and consuming content will be substantially challenged, but the experiences around them will be made a lot more delightful and immersive.

MR: I fully agree with Markus; as said, mostly in enhancing the content and opening up new opportunities in education.

Disclaimer: The opinions expressed by MK and MR are their own and do not necessarily reflect those of their employer, Springer Nature.

Alice Meadows

I am a Co-Founder of the MoreBrains Cooperative, a scholarly communications consultancy with a focus on open research and research infrastructure. I have many years’ experience of both scholarly publishing (including at Blackwell Publishing and Wiley) and research infrastructure (at ORCID and, most recently, NISO, where I was Director of Community Engagement). I’m actively involved in the information community, and served as SSP President in 2021-22. I was honored to receive the SSP Distinguished Service Award in 2018, the ALPSP Award for Contribution to Scholarly Publishing in 2016, and the ISMTE Recognition Award in 2013. I’m passionate about improving trust in scholarly communications, and about addressing inequities in our community (and beyond!). Note: The opinions expressed here are my own.

Discussion

12 Thoughts on "Mixed Realities, Virtual Reality, and Augmented Reality in Scholarly Publishing: An Interview with Markus Kaindl and Martijn Roelandse"

Thank you for this informative interview.
Some years ago, Springer Nature displayed Smartbooks. Whatever happened to them?
Is there a reason why some publishers are good at displaying pilots but never really come up with a full-fledged product implementation? Would this be the fate of these AR/VR/MR ‘projects’ too?
In my opinion, STM publishers are stuck with a rigid business model into which they pack all possible innovative products and expect them to work out. It is time to have business model specialization based on the product, and not the other way round.

In any case, wishing for the project’s success.

This AR/VR example is more analogous to a car manufacturer producing a concept vehicle. These never see production, but the idea is to experiment with new concepts, to test capabilities, and to provoke discussion. Even when you are truly ‘piloting’ something (the difference being that a pilot is usually intended to morph into a long-term product), the aim is always to test viability. It’s not too surprising that most pilots never go forward. Failure is an integral part of advancement.

Correct. However, our industry has yet to see a truly radical advance. Our end product is still textual and has to be paginated. Which pilot, done by publishers (not startups), has actually moved forward at all?

We did indeed offer smartbooks (our name for books enriched with some multimedia tools and content) for a few years, but the market never really accepted them. Sales were very low, so we stopped the project. Maybe we were too early, maybe we didn’t have the perfect business model for it, but in the meantime we’ve introduced other technologies that we are using for the same purpose. Actually, we have many more multimedia elements in our books now; we just don’t call them “smartbooks” anymore.

Thanks Alice, Martijn, and Markus, excellent post; it will be interesting to watch how this space develops. Is there a video of the virtual reading room experience that can be shared?

Btw, this development was driven by Niels Peter Thomas, back then Chief Book Strategist and now Managing Director Books at Springer Nature. He had also presented this at FutureBooks in London last November…

Thanks, Alice, Martijn, and Markus. Very interesting article. We’ve been developing AR & VR experiences for the last couple of years. The hardware has really improved and the costs have come down. Same for the development process. More developers have the necessary experience to produce amazing experiences. Beyond the ‘wow’ experiences that we create in VR we see AR having the most commercial use since the number of devices – our phones – is in the billions. MR will be the big winner over time since it will become ubiquitous in the IIOT 4.0 – the Industrial Internet of Things. Workforce development, maintenance, design, facility management, and more.

Great piece. Thanks Alice, Martijn and Markus. Great to hear your insights into VR in scholarly publishing. I work for SAGE Ocean, an initiative from SAGE Publishing to help social scientists navigate vast data sets and work with new technologies. https://ocean.sagepub.com/. We’re running an event next month exploring how VR is changing the research landscape in the social sciences. More info available via the link for anyone interested in joining: https://www.eventbrite.co.uk/e/future-or-fad-vr-in-social-science-research-tickets-53739996777

Regarding Google Glass only being useful for spying on others, this is completely false. Applications like Fieldtrip used geo-location to provide images and information, delivering an environment much like the Mixed Reality described in this post. I agree that Google flubbed the initial release. The software was glitchy and not yet ready for consumer release, but Glass has continued to flourish in certain settings such as warehouses (where workers need hands-free access to information) and in the medical environment (for bringing distant colleagues a surgeon’s eye view). There are also projects to enable those on the autism spectrum to better recognize facial expressions that denote emotional responses.
I’m disappointed that this type of casual throw-away remark wouldn’t be challenged more in the course of the editing of this post.
More on the Enterprise edition can be found here: https://developers.google.com/glass/distribute/glass-enterprise
More on the Autism Glass Project here: http://autismglass.stanford.edu/

Thanks for the comments, Heather, and admittedly there should have been a little 😉 at the end of the line on spying. Mea culpa.

But don’t get me wrong, I think Google Glass was a great invention, truly mesmerising and pushing the boundaries, but the time wasn’t right for it yet. As said, do I feel comfortable wearing such a headset and does it bring me enough added value? I think the answer at that time was “no” for most people. I am pleased to hear that it did find a niche market in settings such as warehouses and medical environments, but as an innovation it failed, as it fell short of the results Google expected and desired.

Mind you, HoloLens will have the exact same challenge: will people feel comfortable enough to wear it? On the plus side, it is a bit more rigid than the GG, but wearing such a thing, I think I did look quite a bit like a dork 🙂

I think the intersection of GG and HoloLens will appear in 2 or 3 years. Everyone will be wearing thick-framed, semi-dorky glasses that hide their considerable AR computing power.
