Augmented Reality: Merging the Virtual and Real Worlds
You just signed a lease on a new apartment. You’re standing in the middle of an empty living room, hands at your sides, wondering what would go where. It’s 2019, so you whip out your phone and, if somebody has tipped you off to it, open IKEA’s augmented reality app, IKEA Place. You scan your room-to-be as if you were taking a panorama, and all of a sudden you can drop anything from the IKEA catalog into your living room through your camera’s image feed. Your phone’s camera essentially grants you access to an augmented reality where virtual objects, scaled to full size, exist and can be manipulated at will. You can now visualize exactly how a LACK or a FJÄLLBO looks in that corner you have no idea what to do with.
The concept of “augmented reality” as it is now known first hit popular media in 2012, when Google released a promotional video for its newest product at the time, Glass. The glasses would take in information from the outside world, primarily your movements and gestures, and adapt their interface to display information as if it were laid over reality. Nowadays, AR technology has a much deeper understanding not only of gestures but of the user’s surroundings, and adapts itself to them in real time. Essentially, then, it must perform a two-step dance: first make the world it sees analyzable, then adapt to that world and impose its content upon it. IKEA Place digests your living room’s layout and places its products within it. This allows you to experience, whether through a visor or a screen, a reality that seamlessly blends what the technology takes in with what it creates.
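That two-step dance maps directly onto code. Below is a minimal, hypothetical sketch using Apple's ARKit, the framework IKEA Place was built on; the view controller and the "chair.scn" asset are placeholders for illustration, not IKEA's actual implementation.

```swift
import UIKit
import ARKit
import SceneKit

// Hypothetical sketch of the two-step pattern described above.
final class FurniturePreviewController: UIViewController, ARSCNViewDelegate {
    private let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        sceneView.delegate = self
        view.addSubview(sceneView)
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        // Step 1: make the world analyzable -- track the device's pose and
        // detect horizontal surfaces (floors, tabletops) in the camera feed.
        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        sceneView.session.run(configuration)
    }

    // Step 2: impose content on the analyzed world -- once a plane is
    // reported, attach a full-scale virtual object to its anchor.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard anchor is ARPlaneAnchor,
              let furniture = SCNScene(named: "chair.scn") else { return } // placeholder asset
        for child in furniture.rootNode.childNodes {
            node.addChildNode(child) // the object now appears fixed to the real floor
        }
    }
}
```

Everything after the plane is detected is ordinary 3D rendering; the hard part, understanding the room, is handled entirely by the tracking session.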
The question in the industry then becomes exactly which parts of the environment are to be analyzed, and what content is to be produced for them. Ah, freedom of creation, one might say. Sure, but this seemingly endless list of possible applications renders augmented reality’s impact on meaningful research rather uncertain. International law firm Perkins Coie LLP predicted that in 2018, 59 percent of all investment in the 108-billion-dollar AR and VR market would be devoted to gaming endeavors, with educational and medical applications on the lower steps of the podium by a considerable margin. It is no coincidence, then, that the most popular AR app on the consumer market remains Niantic’s Pokémon Go, which took the world by storm in 2016. Given the obvious appeal of mixed-reality entertainment and these clear market indications, we run the risk of gearing our development efforts towards creating a product rather than a tool.
Now, that is not to say that more highbrow applications of augmented reality are unfeasible or nonexistent. Led by Max Ortiz Catalan, researchers at Chalmers University of Technology developed, for example, an augmented reality treatment for amputees with phantom limb pain, a largely untreatable condition. Patients whose brains struggled to control a non-existent limb found “immense” relief when, with sensors connected to their stumps, they could visualize their intended movements on a virtual limb produced by AR. In the same vein, AR is already being used to refine complex neurosurgical practices. Optics giant Leica produces the GLOW augmented reality system, which, mounted on a surgical microscope, tracks blood flow in real time to recognize, localize, highlight, and provide precise information about the tiny junctions a neurosurgeon must operate on.
The threat of over-reliance, however, casts a cloud of uncertainty over the debate. Even the educational field, where developers have perfected applications centered on the visualization of difficult concepts, is not immune. AR Chemistry, an app developed by Paradox, helps students visualize atomic structures and understand chemical interactions. The response has been clearly positive; indeed, in multiple surveys collected by the Human Interface Technology Lab, students reported that they would not want to return to traditional teaching materials. In the same way, architectural and industrial applications of the technology help architects, assemblers, and planners alike visualize their individual goals.
CityScope, an MIT Media Lab creation, takes real-time video of the urban landscape and processes it into massive, three-dimensional virtual simulations of traffic congestion in specific places, while showing the researcher different paths to a solution. Despite the threat of over-reliance, applications like CityScope cannot help but leave the world in awe of the enticing tool augmented reality can be for society.
So much so that even the Mars exploration team at NASA has incorporated AR into its InSight Mars lander. The robotic lander is equipped with precise cameras, sensors, and digging instruments to measure and understand how the structure of a planet evolves through time. InSight will measure the thickness and density of the planet’s layers, and how easily it loses heat, and bring these metrics together in a real-time, virtual model of the planet’s structure. Essentially, InSight’s AR will weave together all of the information it collects from its surroundings and, like every application of AR, impose its contents upon them. The result: a live, real-time simulation of Mars’ deeper structures, which we can study from the comfort of a living room.
The computational and creative power of the technology is undeniable. Yet we decide how and what the tech “sees”, and what it must do with what it sees. In the age of machine learning, when the majority of our habits are already recorded, AR’s data must be properly managed. Personal or handheld applications would create a live feed of personal and contextual data. Take Vuforia’s AR, which recognizes and labels objects around you. Without a doubt, immediate object recognition would add a whole new layer of cognition to our understanding of the world, and with a large enough user base we might pool the data and ultimately scratch at new concepts of cognition. On the other hand, as with most digital services today, the constant analysis of our personal world will, as it already does, raise massive concerns regarding the privacy of that data and its management.
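To make concrete what such a live feed would contain, here is a minimal sketch of on-device object labeling. It is illustrative only: it stands in for Vuforia's SDK with Apple's Vision framework, and the confidence threshold is an arbitrary placeholder. Every frame yields a small packet of labels describing the user's surroundings, which is precisely the data whose pooling and protection is at stake.

```swift
import Vision
import CoreGraphics

// Illustrative stand-in for "recognize and label the objects around you":
// classify a single camera frame and return the confident labels.
func labelFrame(_ frame: CGImage) -> [String] {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    do {
        try handler.perform([request])
    } catch {
        return [] // classification failed; report nothing for this frame
    }
    let observations = request.results as? [VNClassificationObservation] ?? []
    // Each frame becomes a handful of strings such as ["sofa", "bookcase"] --
    // a running description of the user's private space.
    return observations.filter { $0.confidence > 0.5 }.map { $0.identifier }
}
```

Whether that stream of labels stays on the device or is pooled on a server is exactly the design choice the privacy debate turns on.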
As with any new technology, then, we must take an active role in steering the development of augmented reality down the right path. Its catalog of applications constantly expanding, augmented reality walks the line between being a product and being a tool. We can adapt its use to the customs of industries like gaming and entertainment, crafting a product, or we can adapt it to research and education, crafting a proper tool. Luckily, we do not have to choose. We can conjure our new couch out of thin air, sit on it, and from it visualize distant “marsquakes” as if we stood on the red planet’s surface. Quite a leap, indeed, but one we must make with caution and responsibility.