Realism in Research: How Can We Build Stronger Connections to Naval Operations?

Photo by Lance Cpl. Angel D. Travis

HOW DO YOU MAKE RESEARCH MORE REAL AND RELEVANT? BRING THE LABORATORY INTO THE FIELD—OR MAKE THE FIELD YOUR LABORATORY

By Lt. Adam T. Biggs, MSC, USN

Naval science may encompass a broad range of topics, including issues at sea, in the air, and on land, but these projects have one goal in common—improving operations within the Department of the Navy. This distinction makes naval research organizations, especially the Office of Naval Research, different from other funding organizations. The goal is not to advance our understanding of the universe or to increase our general knowledge about a particular topic, but to support the mission of the Navy: “to maintain, train, and equip combat-ready Naval forces capable of winning wars, deterring aggression, and maintaining freedom of the seas.”

Emphasizing this mission is not hyperbole—there is a real cost to pursuing naval science and technology. Namely, every tax dollar spent on a research project is a dollar that did not go directly to supporting a Sailor or Marine. Rather than maintaining an aircraft or sending supplies to ships at sea, that dollar (and many more) went to a laboratory somewhere in support of naval innovation. Research is a force multiplier, and each dollar invested in research can and should yield a significant return on investment in the form of enhanced operational capabilities or improved quality of life. The value of research is real, but so is the cost. We owe it to every Sailor and Marine to ensure these resources are well utilized in support of the fleet and force.

These responsibilities bring up one of the greatest hurdles in naval research: How do we ensure the tasks conducted during studies sufficiently replicate actual naval challenges? The objective of research is translating findings into operational effects, and when the links between the laboratory and the fleet are not clear—just as when a conversation is mistranslated—the message gets garbled and could become meaningless. Our responsibility as naval researchers is as much to ensure the smoothest possible translation from the laboratory to the field as it is to produce high-quality, reliable research conclusions.

The current discussion will focus on the role of realism in research by highlighting several methods to ensure that research findings are suitable for widespread fleet use. Most of the discussion will center on an investigation into lethal force decision-making funded by the Expeditionary Maneuver Warfare and Combating Terrorism Department of the Office of Naval Research under program manager Peter Squire’s direction, although other examples will be drawn from ongoing studies at the Naval Medical Research Unit Dayton. In truth, the examples could be drawn from topics ranging from physiological episodes to undersea medicine, yet the translation issues would likely remain comparable throughout. The goal here is to provide a better understanding of three challenges to translating naval research into operational impact, as well as several methods to overcome these challenges.

Making the Tasks Real
The first challenge is the one scientists will probably recognize most easily—how do you make sure the experiment suitably tests the hypothesis? This experimental design step requires selecting the right tasks and equipment to evaluate a given hypothesis. Merely using a new task or device can revolutionize a field, particularly if it is the right task. Although devices represent a potent opportunity to leap the science forward by collecting information in a new way, altering the task can yield an equally large benefit with much less investment. Significant translational hurdles disappear when the tasks assessed in the experiment are the same tasks operators must perform in the field.

Therein lies the real challenge when picking experimental tasks: How do you know what the best tasks are for evaluation? For example, let’s assume we want to examine lethal force decision-making within an operational scenario such as room clearing in an urban setting. The task could be very realistic if it involves putting up pictures of hostiles on bullet traps, gearing up a squad of Marines, and having them kick in one door at a time to decide whether or not to fire on that target. The scenario gets high marks for realism, especially if the Marines are using live ammunition. However, this particular task raises a host of experimental problems:

• Every Marine who follows the first one through the door will likely notice bullet holes in any hostile target, which provides extremely influential contextual cues. Participants may not be making lethal force assessments but instead merely looking for bullet holes.

• The target image on the bullet trap may not realistically depict the size of a human target. Participants may fire on the hostile, but their points of aim will be off because the physical size of the target is inaccurate for that distance.

• Specific images may have too much of an effect on the results. Some images may seem more hostile, or at least more obviously hostile, based on the expressed emotion or the position of the gun, and any conclusions about lethal force decision-making might then reflect these particular stimuli rather than the decision process in general. The results may be too narrow to be useful.

Other issues could include a trial count too small to produce reliable information, peripheral stimuli in the room exerting a latent effect on results, or even the lighting conditions having an undue influence. The list continues, but the point becomes clear: sometimes the most realistic scenario does not produce the most reliable experimental design or the most actionable results. Sometimes more artificial tasks have to be included for specific assessments.

One example is how our lethal force decision-making research uses classic go/no-go tasks from cognitive psychology. In a go/no-go task, the stimulus could be as simple as a single square appearing on-screen, colored either blue or orange. Participants are told to press the space bar for blue and to withhold any response for orange (or vice versa). The goal is to assess and quantify specific aspects of inhibitory control more precisely than could be done while kicking in a door and shooting (or not shooting) at a target. Simple stimuli give experimenters control over the decision-making components of the response time, which lets them differentiate individual inhibitory control abilities from the processes involved in making the decision.
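To make the paradigm concrete, here is a minimal sketch in Python of how a go/no-go trial sequence might be built and scored. It is illustrative only: the trial count, the 80/20 go ratio, the 500-millisecond response window, and the simulated participant are assumptions for the example, not details of the actual study software.

```python
import random

# Illustrative parameters; the actual study's values are not specified here.
N_TRIALS = 100
GO_PROPORTION = 0.8          # "go" (blue) trials typically outnumber "no-go"
RESPONSE_DEADLINE_MS = 500   # window in which a space-bar press counts

def build_trial_sequence(n_trials=N_TRIALS, p_go=GO_PROPORTION, seed=42):
    """Return a randomized sequence of 'go' and 'no-go' trials."""
    rng = random.Random(seed)
    n_go = int(n_trials * p_go)
    trials = ["go"] * n_go + ["no-go"] * (n_trials - n_go)
    rng.shuffle(trials)
    return trials

def score(trials, responses):
    """Tally outcomes; `responses` maps trial index -> reaction time in ms."""
    tally = {"hits": 0, "misses": 0, "false_alarms": 0, "correct_rejections": 0}
    go_rts = []
    for i, trial in enumerate(trials):
        pressed = i in responses and responses[i] <= RESPONSE_DEADLINE_MS
        if trial == "go":
            if pressed:
                tally["hits"] += 1
                go_rts.append(responses[i])
            else:
                tally["misses"] += 1
        else:  # no-go: any press is a commission error (failed inhibition)
            if pressed:
                tally["false_alarms"] += 1
            else:
                tally["correct_rejections"] += 1
    tally["mean_go_rt_ms"] = sum(go_rts) / len(go_rts) if go_rts else None
    return tally

# Simulated participant: fast on go trials, occasionally fails to inhibit.
rng = random.Random(0)
trials = build_trial_sequence()
responses = {}
for i, trial in enumerate(trials):
    if trial == "go" and rng.random() < 0.97:       # rare lapses become misses
        responses[i] = rng.gauss(350, 50)           # plausible reaction time, ms
    elif trial == "no-go" and rng.random() < 0.15:  # commission errors
        responses[i] = rng.gauss(320, 40)

print(score(trials, responses))
```

The false-alarm count is the measure of interest: each one is a failure of inhibitory control, the laboratory equivalent of firing on a target that should not have been engaged.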

The key translational issue is then finding the right task for the situation. Well-controlled, computer-based experiments have their place when used at the right time and for the right purpose. When quantifying cognitive abilities, for example, there is no good way to get a reliable assessment of attentional capacity or working memory without going back to controlled, computer-based experiments—not yet, anyway. There needs to be caution, however, when extrapolating from a computer-based task to the room-clearing scenario. There is no keyboard button equivalent to the motor mechanics involved in kicking in a door, and that step could dramatically affect cognitive biases and expectations in ways the computer task might not predict—not to mention the anxiety someone feels when kicking in a door knowing there could be someone on the other side waiting to shoot.

There are two good solutions to this conundrum. First, use the controlled tasks only when examining a very specific question. Want to examine reaction speed when pilots are incapacitated by hypoxia? You could have them initiate emergency procedures, but perhaps the hypothesis calls for an assessment of speed and motor control, which requires more experimental finesse than having people pull green rings while wearing a mask. Limit the controlled tasks to specific hypotheses, but incorporate realistic procedures (e.g., initiating the emergency procedure by pulling a ring rather than slapping a space bar on a keyboard) whenever possible. There could very well be a difference in how hypoxic aviators reach for rings versus press buttons that affects the conclusion. Second, if you are having trouble thinking of a good method, find an operator from the community of interest and bring them into the room. Too many scientific designs suffer from a lack of real-world expertise among the scientists. Room-clearing procedures are not a typical part of a cognitive psychology Ph.D., even if attention and decision-making studies are. Find a few Marines with close quarters combat experience, though, and they can help fill in the blanks. ONR can—and should—be in the business of facilitating these relationships. The combined expertise of the scientist and the operator working together is likely to yield the best design for widespread operational impact.

Finding the right population to test is important: college students can match the age demographics of many military populations, but not necessarily their training. Photo by MC1 Amanda S. Kitchner

Finding the Real Population
The next challenge seems a bit more obvious, although it can be a significant issue when translating university research to an operational setting. Specifically, military populations may have or require specialized training that is difficult to find elsewhere. This caveat in turn makes the individuals participating in the research especially important. For example, if the concern is cognitive processing speed of individuals aged 18-22 years, then either new service members or college students could suffice as a test population. There will likely be other factors to consider, especially education, although the base translation remains accurate. If the study is investigating lethal force decision-making, however, there are likely to be many critical differences between these two populations. College students may never have fired a weapon, whereas military service members are much more likely to have had firearms training. Any study involving a weapon must address this fundamental difference in training.

So, the critical issue is whether the population tested during the experiment is comparable to the military operators for whom the study is intended. A few approaches could be beneficial here. First, use a readily available population (e.g., college students for university-based research) for any study that does not require specialized training. This sounds logical, although many military communities require so many different forms of training that the approach would likely prove a significant hindrance to translational value. Such easy-access populations, however, can be useful for piloting experiments or refining procedures. Service member time and access tend to be more limited, and preparing for those critical data collections is a prudent step.

Two additional methods deserve mention, and each raises its own issues. The first involves turning an easy-access population into your trained population. This approach is not feasible for special forces training, but for some simpler skills it can be viable. For example, most college students will probably have no idea how to perform a “FOD” (foreign object damage) walk, a standard aviation procedure to ensure landing areas are free of debris that could damage aircraft. The task is largely visual search, and some very simple training could thus turn a laboratory search for target “Ts” among distractor “Ls” into something with far greater operational relevance (a sketch of the laboratory version follows below). The second method is difficult, although not technically complicated—test the population intended to benefit from the research. If the study is supposed to benefit pilots, test pilots. If the study is supposed to benefit special forces, test special forces. The challenge is access, but the value is immense. Many translational issues disappear when the population you sample is the group who would benefit from your research.
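For readers unfamiliar with the laboratory version of such a search task, the sketch below shows how simply a trial can be constructed. This is a hypothetical Python illustration, not the unit's actual stimulus software; the grid size and distractor count are arbitrary assumptions.

```python
import random

def make_search_display(grid_size=8, n_distractors=15, seed=None):
    """Place one target 'T' among distractor 'L's at random grid cells.

    Returns the display rows plus the target's (row, col) -- the ground
    truth an experimenter needs to score each search trial.
    """
    rng = random.Random(seed)
    cells = [(r, c) for r in range(grid_size) for c in range(grid_size)]
    picks = rng.sample(cells, n_distractors + 1)  # unique cells, no overlap
    target, distractors = picks[0], picks[1:]
    rows = [["." for _ in range(grid_size)] for _ in range(grid_size)]
    for r, c in distractors:
        rows[r][c] = "L"
    rows[target[0]][target[1]] = "T"
    return ["".join(row) for row in rows], target

display, target_pos = make_search_display(seed=1)
print("\n".join(display))
print("target at", target_pos)
```

Swapping the letters for photographs of bolts, rivets, and other flight-line debris keeps the experimental control of the laboratory task while borrowing the operational content of the FOD walk.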

Making It Feel Real
Human performance research, like all psychology research, must be carefully controlled to avoid producing a misleading result. For example, participants might change their behavior simply because they know they are being observed—something known as the Hawthorne Effect. Military research only amplifies this existing challenge by requiring the findings be applied to some of the most difficult, dangerous, and stressful scenarios known to mankind. So, the perennial challenge becomes whether the research team can make a laboratory setting feel like the real deal.

Lethal force decision-making is an especially difficult area in this regard. With hypoxia and many other physiological phenomena, the way to make tasks feel real is to safely induce the corresponding symptoms in the laboratory. Fatigue and sleep deprivation have similar solutions, in which participants are safely and carefully fatigued under the watchful eye of experimenters. For combat conditions, however, there is no way to truly replicate the anxiety and weight of making lethal force decisions. In these cases, the research challenge lies in isolating key parts of the behavior so that certain elements can be made as realistic as ethically possible.

The anxiety in particular is a key concern with an interesting solution. Specifically, simulated ammunition allows operators to use their service-issued weapons during training. Specialized equipment is inserted into the pistol or rifle, and the weapon can then fire simulated rounds that are effectively lightweight ammunition with a paint or wax tip. These rounds are still fired from real weapons, and their impact creates a strong physiological sensation that allows people to train safely without lethal consequences. Still, there is no mistaking the simulated round for anything else—you know you have been shot. In turn, the anxiety created serves as the most realistic analogue we have for force-on-force training. These simulated rounds are already in widespread use among military and law enforcement, although comprehensive medical evidence does not yet exist regarding the superficial ballistic trauma they inflict. That gap is something we are exploring from a medical angle as well as a human performance angle, which brings the discussion back to the main point: there may be ways to induce some of these effects and make a task feel realistic, but we as researchers always have a responsibility to ensure the safety of our participants in addition to the translational value of our research.

Summary
These three challenges—making the task real, finding the real population, and making it feel real—are only a few examples of the many challenges faced in transitioning research and development to operational use. Lethal force decision-making has various components, such as how best to use virtual environments for training, that make it more relevant to this topic than other research areas, yet the challenges are by no means unique to it. As such, the solutions discussed here provide a reasonable overview of the available opportunities and of the importance of making any transition as seamless as possible.

If there is to be one ultimate takeaway from this discussion, here it is: Naval science and technology is unlike other forms of research—our studies have an end goal as well as end users whose lives may depend on our findings. Every dollar we spend on research is a dollar not spent on a Sailor or Marine directly, which is why we must ensure that the products and conclusions of our research apply outside the laboratory.

About the author:
Lt. Biggs is a research psychologist at the Naval Medical Research Unit Dayton.