Mastering the Human Element of Immersive Training

(Photo by John L. Williams)

By Dr. Gregory Welch, Dr. Arjun Nagendran, Dr. Jeremy N. Bailenson, Dr. Charles E. Hughes,
Pete Muller, and Dr. Peter Squire

“I’VE DONE THIS BEFORE”

A natural disaster has struck a country located in the Pacific region. The U.S. Navy and Marine Corps are on their way, charged with rendering aid and security to a population facing desperate circumstances. In the midst of the chaos, the Marines will have to interact with civilians who are in shock and even angry about their situation. The environment, the people, and the chaotic circumstances would normally be completely unfamiliar to, for instance, a young Marine from the Midwest. Yet before the Marines step off the ship, or even know about this specific disaster, they will have experienced similar settings and interacted with humans in comparable chaotic and emotionally charged situations—all through a range of immersive training.

The physical terrain cannot be replicated exactly, but a similar atmosphere can be created using the environmental stimuli (sights, sounds, smells, etc.) that Marines will encounter. Although an entire culture cannot be imported into military training, surrogates (technological or human substitutes) can replicate human interactions under varying cultural and emotional situations. And while one cannot predict every situation Marines may face, numerous scenarios can be constructed for them to experience. From thousands of miles away, before the engagement ever occurs, all a lance corporal may need to do is put on a special pair of glasses and an earpiece, walk into the village, and face the civilians.

Training is a critical part of preparing for any operation such as this one.

Over the past two decades, the Office of Naval Research (ONR) has been at the forefront of developing immersive training capabilities that seek to provide a sense of “presence” for warfighters. Immersion refers to an objective level of fidelity in an environment or with a human surrogate. Presence refers to a user’s psychological sense of being in that environment or with that human surrogate, and is often measured by the trainee’s verbal, physiological, and behavioral responses.

One example is the Infantry Immersion Trainer (IIT) facility at Camp Pendleton, California, where ONR’s TechSolutions group transformed a former tomato packing plant into a state-of-the-art training facility. The IIT replicates a Middle Eastern village composed of life-sized physical structures such as apartments, alleys, and a marketplace. It is inhabited by a mix of real human actors, animatronic (robotic) humans, and projected virtual (digital) humans.

The Future Immersive Training Environment (FITE) Joint Capability Technology Demonstration was a three-year, ONR-led initiative aimed at demonstrating the value of advanced small-unit immersive infantry training systems. FITE included demonstrations of animatronic humans and projected virtual humans, as well as visually immersive head-worn displays.

THE HUMAN ELEMENT

There is a wide gulf between simulating machines and simulating humans. Today’s flight simulators, for example, are so effective that in some cases pilots can do 100 percent of their training in simulators and be certified to fly the real aircraft with real passengers. One reason flight simulators can be so effective is that they simulate the behavior of a machine made by humans.

Unfortunately, the detailed “processing” (thinking) and behavior of humans are much more difficult to model. In this article, we focus on practical and effective human surrogates—simulated humans to be used in immersive training of human-human interactions. Human surrogates can be virtual, physical, physical-virtual, and even real.

Virtual human surrogates are realized using computer graphics models of humans and displayed on a large flat panel display, on a projection screen, or in a head-worn display. The purely computer-generated nature of such virtual humans offers the flexibility to change their apparent sizes, skin tones, personalities, genders, or other qualities. In addition, they can be realized using off-the-shelf computers and display systems and are relatively simple to maintain.

Virtual human surrogates can be used in a stand-alone training scenario. For example, in collaboration with the Defense Equal Opportunity Management Institute at Patrick Air Force Base, researchers at the University of Central Florida’s Synthetic Reality Lab are developing mediated experiences for equal opportunity training and for addressing military sexual trauma. Using our Avatar Mediated Interactive Training and Individualized Experience System (AMITIES) infrastructure, coordinator candidates can interact with virtual military personnel, helping these trainees develop the knowledge and human-to-human skills required to address the needs of potential victims. The ultimate goal is to provide a wide variety of experiences without involving actual victims, who could be severely harmed by interacting with an inexperienced coordinator.

Virtual human surrogates also can be embedded or integrated into a physical environment designed to mimic a real location. This can be accomplished by embedding displays and screens into the physical structure, as is done at the IIT facility in Camp Pendleton. “Immersive” head-worn displays can be used to replace a user’s view with the dynamic imagery and sounds of virtual environments and virtual humans, and “see-through” head-worn displays can visually overlay virtual humans onto a real scene.

Physical human surrogates include role players (such as paid actors) and human-shaped animatronic robots that have rubber “skin” and are clothed to look and move like specific humans. Compared to purely virtual human surrogates, physical human surrogates occupy a space with a realistic human form. From a training perspective this is interesting, because there is some evidence that people in close proximity typically engage more with physical surrogates than with virtual ones. On the other hand, a purely physical human surrogate such as a Disney-type animatronic will look like the same person until its rubber face “mask” is changed. In addition, the fidelity of facial and body movement is limited by the mechanical design, which cannot be easily altered after manufacture.

Physical-virtual human surrogates can be realized by combining dynamic computer graphics with human-shaped (and potentially dynamic) physical forms. A physical-virtual surrogate combines a realistic shape with a realistic appearance (color and texture). Such surrogates share features of both virtual and physical surrogates: they have realistic physical human shape and size, and they can appear with different races, genders, personalities, and other qualities.

The behavior or “soul” of the human surrogate can be supplied by a computer, a remote human, or some combination. When controlled autonomously by a computer, the surrogate often is referred to as an embodied agent. With support from ONR and the Department of the Army, researchers at the University of Southern California’s Institute for Creative Technologies have been pushing the boundaries of what’s possible with such agents. When controlled by a human, or “inhabiter,” the surrogate is often referred to as an avatar. The AMITIES infrastructure supports a blend of autonomous and human agency, which provides the fidelity and flexibility of a human while minimizing the cognitive and physical demands on the human operator.
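
The control-sharing idea can be illustrated with a short sketch. This is not the actual AMITIES implementation; the class names and the simple override rule below are our own illustrative assumptions: an autonomous agent continuously proposes behaviors, and a human inhabiter can preempt any of them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Behavior:
    gesture: str     # e.g., "nod" or "point"
    utterance: str   # what the surrogate says

class BlendedController:
    """Routes surrogate behavior from an autonomous agent unless a
    human inhabiter has injected a command (hypothetical sketch)."""

    def __init__(self, agent):
        self.agent = agent                        # autonomous behavior source
        self.pending: Optional[Behavior] = None   # inhabiter's queued command

    def inhabiter_override(self, cmd: Behavior) -> None:
        # The human operator's command takes priority over the agent.
        self.pending = cmd

    def next_behavior(self, trainee_state: dict) -> Behavior:
        if self.pending is not None:
            cmd, self.pending = self.pending, None
            return cmd
        # No human input pending: fall back to the autonomous agent.
        return self.agent.propose(trainee_state)
```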

Marines from 3rd Battalion, 1st Marines, confront avatars, or virtual humans, while clearing a room at the Infantry Immersion Trainer located at the I Marine Expeditionary Force Battle Simulation Center at Camp Pendleton, California. (Photo by John L. Williams)

GETTING THE HUMAN SURROGATE RIGHT

There is a widespread and understandable desire to use technology-based human surrogates rather than live role players. Real human surrogates are typically very good at what they do, but they are expensive and not necessarily as controllable (or as consistent and reliable) as one might like. There is, however, some evidence in the literature, as well as anecdotal accounts from the IIT, that subjects often treat technology-based human surrogates (virtual, physical, or physical-virtual) differently from humans. In fact, subjects often seem to treat technology-based surrogates not as humans but as tasks or games that must be mastered via a formulaic interaction. The problem is that humans are complex cognitive and emotional beings, and for most training scenarios such rote behavior is likely undesirable. These sub-human perceptions of technology-based surrogates are not understood in any systematic way, so there is little or no guidance on what factors are important for the design and use of technology-based human surrogates.

As a step toward developing formal knowledge guiding the design and use of technology-based human surrogates, we are undertaking a strategic effort to assess the effectiveness of alternative manifestations under different circumstances. We are carrying out studies in which we manipulate the characteristics of the human surrogates in a controlled manner and measure the effects on the human subject (the trainee).

There are three broad characteristics of human surrogates we can manipulate: cognitive characteristics, such as the surrogate’s apparent ability to “think” (e.g., to be reactive or proactive); perceptual characteristics, such as the fidelity of the surrogate’s size and shape, visual appearance, voice, and movement; and social/cultural characteristics, such as personality, gender, socio-economic status, age, and ethnicity. The chosen surrogate’s characteristics affect the interacting human’s apparent beliefs and illusions, behavior, physiology, thoughts, and trust.
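
These three dimensions can be treated as a structured parameter space for experiments. As a hypothetical illustration (the field names and levels are ours, not taken from any specific study), a single surrogate condition might be encoded like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SurrogateCondition:
    # Cognitive characteristics
    reactivity: str       # "reactive" or "proactive"
    # Perceptual characteristics
    physicality: str      # "virtual" or "physical-virtual"
    voice_fidelity: str   # "low" or "high"
    gaze_following: bool  # attends to objects the subject points at?
    # Social/cultural characteristics
    gender: str
    age: int
    ethnicity: str

# Example condition for one study cell:
condition = SurrogateCondition("reactive", "physical-virtual", "high",
                               True, "female", 35, "unspecified")
```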

To facilitate the evaluation of the effects, we have created a laboratory-based test bed comprising various changeable human surrogate forms; an underlying software framework supporting consistent control; and various mechanisms for measuring the effects on human subjects. Examples of human surrogate manifestations in our test bed include virtual surrogates appearing on projection displays; virtual surrogates appearing in see-through head-worn displays; animatronic surrogates; a custom-built physical-virtual surrogate with a realistic physical body and a dynamic computer graphics face; and a commercial physical-virtual surrogate called the RoboThespian.
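
One way to achieve “consistent control” across such different manifestations is a common software interface that every surrogate form implements, so the same scenario script can drive a projected virtual human, an animatronic figure, or the RoboThespian. The sketch below is our assumption of what such a layer could look like; the method names are illustrative.

```python
from abc import ABC, abstractmethod

class SurrogateBackend(ABC):
    """Uniform control surface over different surrogate forms (illustrative)."""

    @abstractmethod
    def say(self, text: str) -> None: ...

    @abstractmethod
    def gesture(self, name: str) -> None: ...

    @abstractmethod
    def look_at(self, x: float, y: float, z: float) -> None: ...

class ProjectedVirtualSurrogate(SurrogateBackend):
    # A rendered virtual human; here the actions are simply logged.
    def say(self, text: str) -> None:
        print(f"[virtual] speak: {text}")
    def gesture(self, name: str) -> None:
        print(f"[virtual] animate: {name}")
    def look_at(self, x: float, y: float, z: float) -> None:
        print(f"[virtual] gaze -> ({x}, {y}, {z})")

# An animatronic or RoboThespian backend would implement the same
# interface with servo commands instead of rendered animation.
```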

To measure the effects on human subjects of our controlled manipulations of the human surrogates, we instrumented our laboratory with systems for wide-area body tracking, eye tracking, heart/pulse sensing, skin conductance response (sweat) sensors, and video/audio recording and analysis. We are developing a software-based framework for online, real-time monitoring and statistical analysis of the human-surrogate state and events. This allows us to observe things such as where the human subject is looking while a surrogate is talking and whether the human subject appropriately moves in response to threatening statements or movements by a surrogate.
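
Conceptually, this kind of online monitoring reduces to correlating time-stamped event streams from the sensors with the surrogate’s own state. Below is a minimal sketch, with invented event names, that estimates how much of the surrogate’s speaking time the subject spent looking at it:

```python
def gaze_while_talking(events):
    """events: time-ordered (timestamp_sec, source, value) tuples.
    Returns the fraction of the surrogate's speaking time during which
    the subject's gaze was on the surrogate (illustrative logic)."""
    talking = gazing = False
    talk_time = overlap = 0.0
    last_t = None
    for t, source, value in events:
        if last_t is not None and talking:
            talk_time += t - last_t          # surrogate was speaking
            if gazing:
                overlap += t - last_t        # ...and the subject was looking
        if source == "surrogate_speech":
            talking = (value == "start")
        elif source == "eye_tracker":
            gazing = (value == "on_surrogate")
        last_t = t
    return overlap / talk_time if talk_time else 0.0

# Example: speech from t=0 to t=4, gaze on the surrogate from t=1 to t=3.
print(gaze_while_talking([(0, "surrogate_speech", "start"),
                          (1, "eye_tracker", "on_surrogate"),
                          (3, "eye_tracker", "away"),
                          (4, "surrogate_speech", "stop")]))  # 0.5
```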

We are conducting various controlled studies where we manipulate surrogate characteristics and observe the effects on people. For example, we are examining the effects of the “physicality” of the surrogate (virtual vs. physical-virtual); whether or how gestures by the surrogate affect the subject’s perception of the surrogate; whether it matters if the surrogate visually attends to places of mutual interest (e.g., if the subject points to something, does the surrogate look there?); and how or whether the perceived location and fidelity of the surrogate’s voice affect the subject’s perception of the surrogate. Beyond our own controlled studies, we are also carrying out a formal meta-analysis of prior research related to human surrogate interactions.
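
Crossing factors like these yields a standard factorial design. As a sketch (the factor names echo the examples above; the level labels are our own), the full set of study conditions can be enumerated:

```python
from itertools import product

factors = {
    "physicality": ["virtual", "physical-virtual"],
    "gestures": ["none", "conversational"],
    "gaze_following": [False, True],
    "voice_location": ["co-located", "displaced"],
}

# One dict per cell of the 2x2x2x2 design: 16 conditions in total.
conditions = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(len(conditions))  # 16
```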

FUTURE CHALLENGES AND OPPORTUNITIES

Our strategic goals with respect to human surrogates include defining an immersive science space where characteristics and guidelines achieve desired goals, such as levels of empathy, trust, or engagement. This is not simply a matter of cost effectiveness, but also of training effectiveness. Over the next few years we will be developing and aligning methods and measures within the lab and training environments, collecting data, and beginning to develop an overview and guidelines for the design and use of human surrogates.

With respect to immersive sciences more broadly, we want to emulate or simulate future crisis environments within a training environment today. We want to understand how to replicate the scenario from the safety of a training facility or a personal training system and determine how best to use that training time to enhance skills. We need to determine how to define the immersive space that could be replicated within a training environment, determine the critical measures needed to assess the variables within that immersive space, and understand the accessibility and functionality of the methods and measures from laboratory to training environment.

The knowledge we develop in this endeavor should influence applications beyond military training. Many other disciplines rely on effective human-human interactions and could benefit from human-surrogate training. For example, schoolteachers need to communicate and interact effectively with an increasingly complex student population, and healthcare practitioners need to understand and empathize with their patients as part of effective diagnosis and treatment.

The challenge before us is to reproduce the cognitive and perceptual characteristics of a human surrogate with such fidelity and consistency that the trainee is not conscious that both the situation and the surrogate are contrived. Instead, we want the Marines and Sailors being trained to be so engaged with, and perhaps so emotionally affected by, the other “human” that they must focus on managing their own emotions while interacting with it to carry out their jobs.

About the authors:
Dr. Welch is a computer scientist and professor in the University of Central Florida’s College of Nursing, Department of Electrical Engineering and Computer Science, and Institute for Simulation and Training. Dr. Nagendran is a research assistant professor at the University of Central Florida’s Institute for Simulation and Training. Dr. Bailenson is founding director of Stanford University’s Virtual Human Interaction Laboratory, an associate professor in the Department of Communication at Stanford, and a senior fellow at Stanford’s Woods Institute for the Environment. Dr. Hughes is founding director of the University of Central Florida’s Synthetic Reality Laboratory and a professor in the Department of Electrical Engineering and Computer Science. Pete Muller is president of the Potomac Training Corporation and provides contractor support to the Office of Naval Research. Dr. Squire is a program manager in the Expeditionary Maneuver Warfare and Combating Terrorism Department at the Office of Naval Research.