Over the past several decades, the relationship between humans and computers has been thoroughly explored; it even has a widely studied discipline, human-computer interaction (HCI). Human input happens through a variety of means, including keyboards, mice, touch, ink, voice, and even Kinect skeletal tracking.
Advancements in sensors and processing are giving rise to a new area of computer input: input from environments. The interaction between computers and environments is effectively environmental understanding, or perception; hence, the Windows APIs that reveal environmental information are called the perception APIs. Environmental input captures things like a person’s position in the world (e.g. head tracking), surfaces and boundaries (e.g. spatial mapping and spatial understanding), ambient lighting, environmental sound, object recognition, and location.
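To make the categories of environmental input concrete, here is a minimal sketch in Python. All names are hypothetical and illustrative only; they do not correspond to any real Windows perception API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical grouping of the environmental-input categories described
# above. These types are illustrative, not a real perception API.

@dataclass
class HeadPose:
    position: tuple[float, float, float]  # user's position in the world (meters)
    forward: tuple[float, float, float]   # gaze/forward direction (unit vector)

@dataclass
class EnvironmentalInput:
    head_pose: HeadPose                   # head tracking
    surfaces: list = field(default_factory=list)  # spatial-mapping mesh data
    ambient_light_lux: float = 0.0        # ambient lighting estimate
    recognized_objects: list = field(default_factory=list)  # object labels
    location: Optional[tuple[float, float]] = None  # latitude/longitude, if known

# Example snapshot of what an app might receive each frame:
env = EnvironmentalInput(
    head_pose=HeadPose(position=(0.0, 1.6, 0.0), forward=(0.0, 0.0, -1.0)),
    ambient_light_lux=250.0,
    recognized_objects=["chair", "table"],
)
print(env.head_pose.position)  # → (0.0, 1.6, 0.0)
```

An application would typically poll or subscribe to such a snapshot every frame and use it to anchor digital content to the physical world.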
Now, the combination of all three – computer processing, human input, and environmental input – creates the opportunity to build true mixed reality experiences. Movement through the physical world can translate to movement in the digital world. Boundaries in the physical world can influence application experiences, such as game play, in the digital world. Without environmental input, experiences cannot blend between the physical and digital realities.
What is Mixed Reality?
Mixed reality is the result of blending the physical world with the digital world. Mixed reality is the next evolution in human, computer, and environment interaction and unlocks possibilities that before now were restricted to our imaginations. It is made possible by advancements in computer vision, graphical processing power, display technology, and input systems.
The term mixed reality was originally introduced in a 1994 paper by Paul Milgram and Fumio Kishino, “A Taxonomy of Mixed Reality Visual Displays.” Their paper introduced the concept of the virtuality continuum and focused on how their taxonomy applied to displays. Since then, the application of mixed reality has expanded beyond displays to include environmental input, spatial sound, and location.
Types of Mixed Reality
Mixed Reality (Continuum)
This spectrum (i.e. the mixed reality continuum) covers all possible variations and compositions of real and virtual objects. At the far left of the spectrum is the natural world, where nothing is computer generated. At the far right is the virtual environment, where everything is computer generated. Below, we explore the various types of reality technologies that make up this spectrum:
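One simple way to reason about the continuum is to treat an experience's position on it as the fraction of the scene that is computer generated, from 0.0 (entirely real) to 1.0 (entirely virtual). The sketch below does exactly that; the 0.5 threshold is an illustrative simplification of ours, not part of Milgram and Kishino's taxonomy.

```python
def classify_on_continuum(virtual_fraction: float) -> str:
    """Classify an experience by the fraction of the scene that is
    computer generated: 0.0 = entirely real, 1.0 = entirely virtual.
    The 0.5 threshold is illustrative only."""
    if not 0.0 <= virtual_fraction <= 1.0:
        raise ValueError("virtual_fraction must be between 0.0 and 1.0")
    if virtual_fraction == 0.0:
        return "real environment"
    if virtual_fraction == 1.0:
        return "virtual environment"
    if virtual_fraction < 0.5:
        return "augmented reality"       # mostly real, with virtual overlays
    return "augmented virtuality"        # mostly virtual, with real objects

print(classify_on_continuum(0.2))  # → augmented reality
print(classify_on_continuum(0.8))  # → augmented virtuality
```

In practice the continuum is continuous rather than categorical, which is exactly why a single scalar (rather than four discrete labels) is the more faithful mental model.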
Mixed Reality (Independent)
Mixed reality, whether as a standalone concept or as a term for the entire spectrum of situations between actual reality (i.e. the real world) and virtual reality, attempts to combine the best of both virtual reality and augmented reality. When real and virtual worlds are merged, new environments and visualizations become possible in which physical and digital objects coexist and interact in real time.
Real environment (also called “natural environment”) refers to the natural world we inhabit every day. This natural environment encompasses all living and non-living things occurring naturally on Earth. Consequently, most virtual environments are modeled after real environments. Re-creating virtual representations of real-environment objects (e.g. people, natural landscapes) allows for deepened levels of immersion in virtual worlds.
Augmented reality brings aspects of the virtual world into the real world. On the spectrum of reality technologies, it sits closer to the real environment than to virtual environments. This is because augmented reality users remain in the real world (i.e. the natural environment) while experiencing virtually created visuals, sounds, and sensations. Augmented reality does this by layering virtual information and/or graphics on top of a user’s view of a real-world scene.
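The layering that augmented reality performs can be illustrated with a simple alpha composite, sketched here with single RGB tuples standing in for camera and overlay pixels. A real AR pipeline would do this per pixel, per frame, on the GPU, but the blending arithmetic is the same.

```python
def composite_pixel(real, virtual, alpha):
    """Blend one virtual RGB pixel over one real (camera) RGB pixel.
    alpha = 0.0 keeps only the real scene; alpha = 1.0 shows only the overlay."""
    return tuple(alpha * v + (1.0 - alpha) * r for r, v in zip(real, virtual))

camera_pixel = (120, 110, 100)   # from the user's view of the real world
overlay_pixel = (0, 200, 255)    # computer-generated annotation color

# A half-transparent overlay leaves the real scene visible underneath.
print(composite_pixel(camera_pixel, overlay_pixel, 0.5))  # → (60.0, 155.0, 177.5)
```

Varying `alpha` per pixel is what lets an AR annotation appear solid in its interior yet fade smoothly into the user's real-world view at its edges.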
Augmented virtuality describes the environment in which real objects are inserted into computer-generated virtual environments. It is best described as the inverse of augmented reality: real-world objects are layered over virtual environments. An example of augmented virtuality is a kitchen remodeling scenario: using augmented virtuality technology, a homeowner could visualize and interact with virtual appliances and easily try out different layouts in a digital representation of their current kitchen.
Virtual reality seeks to provide users with the greatest level of immersion: total immersion. This deepened level of immersion is distinct from other types of reality technologies. The total immersion experienced in virtual reality requires stimulation of all of the user’s senses in a fully immersive virtual experience, to the extent that the brain accepts the virtual environment as a real environment. In a virtual reality environment, users inhabit a completely synthetic world that may or may not mimic the properties of a real-world environment.
Why Mixed Reality?
A topic of much research, MR has found its way into a number of applications, most visibly in the arts and entertainment industries. However, MR is also branching out into the business, manufacturing, and education worlds with systems such as these:
- IPCM – Interactive Product Content Management
Moving from static product catalogs to interactive 3D smart digital replicas. The solution consists of application software products with a scalable license model.
- SBL – Simulation Based Learning
Moving from e-learning to s-learning, the state of the art in knowledge transfer for education. Simulation/VR-based training and interactive experiential learning, delivered as software and display solutions with a scalable licensed curriculum-development model.
- Military Training
Combat reality is simulated and represented as complex layered data through head-mounted displays (HMDs).
- Real Asset Virtualization Environment (RAVE)
3D Models of Manufacturing Assets (for example process manufacturing machinery) are incorporated into a virtual environment and then linked to real-time data associated with that asset. Avatars allow for multidisciplinary collaboration and decision making based on the data presented in the virtual environment.
- Remote working
Mixed reality allows a global workforce of remote teams to work together and tackle an organization’s business challenges. No matter where they are physically located, an employee can strap on a headset and noise-canceling headphones and enter a collaborative, immersive virtual environment. Language barriers may become irrelevant as AR applications become able to translate accurately in real time. It also means a more flexible workforce. While many employers still use inflexible models of fixed working time and location, there is evidence that employees are more productive when they have greater autonomy over where, when, and how they work. Some employees prefer loud workspaces, others need silence; some work best in the morning, others at night. Employees also benefit from autonomy in how they work because everyone processes information differently. The classic VAK model for learning styles, for example, differentiates Visual, Auditory, and Kinesthetic learners.
- Healthcare training
Surgical and ultrasound simulations are used as training exercises for healthcare professionals. Medical mannequins are brought to life to generate unlimited training scenarios and to teach empathy to healthcare professionals.
- Virtual prototyping
Virtual models allow scientists and engineers to interact with a possible future creation before it touches the factory floor. These models provide an intuitive understanding of the exact product, including real size and construction details, and allow closer inspection of interior parts. They are also used to find hidden problems, reducing time and cost.
- Functional mock-up
Mixed reality is applied effectively in the industrial field to build mock-ups that combine physical and digital elements.