Conference 11759

Virtual, Augmented, and Mixed Reality (XR) Technology for Multi-Domain Operations II

Digital Forum: On-demand now
Sessions:
  • All-Symposium Plenary Session I
  • All-Symposium Plenary Session II
  • All-Symposium Plenary Session III
  • Next Generation Sensor Systems and Applications Track Plenary Session
  • Welcome and Introduction
  • DVE Sensing
  • Human Performance
  • Information Presentation
  • Systems
  • Poster Session
  • Front Matter: Volume 11759
All-Symposium Plenary Session I
11741-402
Author(s): Timothy P. Grayson, Defense Advanced Research Projects Agency (United States)
On-demand | Presented Live 12 April 2021
The Defense Advanced Research Projects Agency (DARPA) is developing the technologies to conduct Mosaic Warfare: the tools and infrastructure to enable dynamic composition and operation of adaptive, disaggregated systems-of-systems architectures. Applied to sensing, the tools of Mosaic enable sensing to be conducted as a “team sport,” moving away from expensive, complex, exquisite, multi-function monolithic sensors to highly distributed, hyper-specialized sensors, each of which addresses only a small part of an overall function. This specialization enables deployment of sensors in greater numbers on smaller, cheaper platforms. The presentation will discuss how DARPA is implementing Mosaic, the implications for sensing, and potential dual-use applications in the commercial sector.
All-Symposium Plenary Session II
11741-403
Author(s): Donald A. Reago, CCDC C5ISR (United States)
On-demand | Presented Live 12 April 2021
This talk will examine the functions of sensing (and sensors) in modern warfare in relation to the growing complexity of asymmetric threats, multi-domain operations, and command-and-control layers, and their demands on information processing, situational awareness, and networking. New sensors and SWaP-C improvements to existing sensors are increasing the number and availability of collectors, which produce ever more data; simultaneously, demand is growing as more users want ever more situational awareness. New bottlenecks are becoming evident, driving new requirements for increased data processing, automation, and intelligent processing.
All-Symposium Plenary Session III
11741-401
Author(s): Rita C. Flaherty, Lockheed Martin Corp. (United States)
On-demand | Presented Live 13 April 2021
Pandemic- and budget-related impacts to the global supply chain have driven a change in how we approach partnerships with small businesses and the manpower they provide. This presentation will focus on the importance of cultivating productive and mutually beneficial relationships with suppliers while simultaneously driving economic development in local communities. The talk will also cover the challenges of virtual recruiting, encouraging diversity in the workforce while attracting local talent, and the avenues for small businesses to connect with Lockheed Martin.
11741-400
Author(s): Jean-Charles Lede, Air Force Research Lab. (United States)
On-demand | Presented Live 13 April 2021
Autonomy and AI have made tremendous progress in recent years, to the point where operational applications of these technologies can provide decisive advantages. This presentation will discuss the recommended approach to rapidly fielding autonomy and AI capabilities at scale, including the development of a common platform, addressing trust issues, and agile methodology. Examples in sensor exploitation and business processes will be used to demonstrate the operational value of the current generation of AI. This generation has limitations, however, and the talk will conclude with the future research required to expand the safe, ethical, and effective use of these technologies.
Next Generation Sensor Systems and Applications Track Plenary Session
11746-300
Author(s): Stuart H. Young, U.S. Army Combat Capabilities Development Command (United States)
On-demand | Presented Live 14 April 2021
The self-driving car industry has made great advances in autonomy, but mostly for well-structured, highly predictable environments. In complex, militarily relevant settings, robotic vehicles have not demonstrated operationally relevant speed or reliable autonomy. While vehicle platforms that can handle difficult terrain exist, their autonomy algorithms and software often cannot process and respond to changing situations well enough to maintain the necessary speeds and keep up with soldiers on a mission. DARPA’s Robotic Autonomy in Complex Environments with Resiliency (RACER) program aims to ensure that algorithms are not the limiting part of the system and that autonomous combat vehicles can meet or exceed soldier driving abilities. RACER will demonstrate game-changing autonomous UGV mobility, focused on speed and resiliency, using a combination of simulation and advanced platforms. It tests algorithms in the field at DARPA-hosted experiments across the country on a variety of terrain. The program provides UGV platforms that research teams can use to develop autonomous software capabilities through repeated cycles of tests on unstructured off-road landscapes. Goals include not only autonomy algorithms but also the creation of simulation-based approaches and environments that will support rapid advancement of self-driving capabilities for future UGVs.
Welcome and Introduction
11759-800
Author(s): Mark S. Dennison, CCDC Army Research Lab. (United States); David M. Krum, California State Univ., Los Angeles (United States); John N. Sanders-Reed, Image & Video Exploitation Technologies, LLC (United States)
Digital Forum: On-demand
Organized by experts from across academia, industry, the Federal Labs, and SPIE, this meeting will highlight emerging capabilities in immersive technologies and degraded visual environments as critical enablers to future multi-domain operations.
DVE Sensing
11759-1
Author(s): Duncan L. Hickman, Tektonex Ltd. (United Kingdom)
Digital Forum: On-demand
A design approach is presented which maximises the signal-to-clutter ratio using broadband spectral and polarisation information in conjunction with simple, robust image and data processing. It is shown that this methodology offers a basis for introducing ATD/R functionality into lower-cost imaging systems such as those flown on drones. The concept is demonstrated using two broadband cameras operating in the visible and near infrared, with one of the sensors providing polarimetric information. A joint spectral-polarisation weight map is proposed, and the potential performance gain is demonstrated for moderate-to-high-clutter situations.
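The abstract does not spell out how the weight map is built; the following is a minimal sketch of one plausible construction, assuming per-band clutter normalisation and fixed fusion weights (all illustrative assumptions, not the authors' published design):

```python
import numpy as np

def clutter_normalised(band: np.ndarray) -> np.ndarray:
    """Scale a band by a clutter estimate (global std here for brevity)."""
    return (band - band.mean()) / (band.std() + 1e-9)

def joint_weight_map(vis: np.ndarray, nir: np.ndarray, dolp: np.ndarray,
                     weights=(0.4, 0.3, 0.3)) -> np.ndarray:
    """Fuse visible, near-infrared, and degree-of-linear-polarisation (DoLP)
    channels into one weight map that boosts pixels where channels agree,
    raising the signal-to-clutter ratio ahead of detection."""
    channels = (clutter_normalised(vis), clutter_normalised(nir), dolp)
    fused = sum(w * c for w, c in zip(weights, channels))
    return np.clip(fused, 0.0, None)  # keep only target-like (positive) responses
```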
11759-2
Author(s): Marco Ciarambino, Yung-Yu Chen, Volocopter GmbH (Germany); Niklas Peinecke, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
Digital Forum: On-demand
Game-engine technology is widely used for the development and testing of unmanned aircraft and urban air mobility vehicles. However, most existing solutions provide only a visual simulation, lacking the sensors needed to operate the vehicles in an automated environment, including millimeter-wave radar. This paper presents an implementation of a millimeter-wave radar simulation for a game engine, using the AirSim plugin in the Unreal Engine. To obtain the radar response, we take advantage of the Unreal Engine and AirSim rendering outputs: surface normal components, semantic segmentation of the objects in the scene, and depth distance from the camera. In particular, we calculate the radar cross section for each object in the scene separately, and are thus able to assign different material characteristics to different entities. Future work to improve the usability and performance of the simulator is presented.
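As a rough illustration of this rendering-buffer approach, the sketch below bins per-pixel returns into range gates using the three outputs named in the abstract; the reflectivity table, incidence-angle model, and gate count are assumptions, not AirSim's actual API or the authors' implementation:

```python
import numpy as np

# Illustrative per-class reflectivities; real values would come from material models.
REFLECTIVITY = {0: 0.05, 1: 0.9, 2: 0.3}  # e.g., vegetation, vehicle, building

def radar_profile(normals, depth, segmentation):
    """Approximate a millimeter-wave radar range profile from rendering buffers.

    normals:      HxWx3 surface normals in camera coordinates
    depth:        HxW   distance from the camera (m)
    segmentation: HxW   integer class id per pixel
    """
    boresight = np.array([0.0, 0.0, 1.0])
    cos_inc = np.clip(normals @ boresight, 0.0, 1.0)           # incidence-angle factor
    sigma = np.vectorize(lambda c: REFLECTIVITY.get(c, 0.0))(segmentation)
    power = sigma * cos_inc**2 / np.maximum(depth, 1.0)**4     # radar equation ~ 1/R^4
    gates = np.linspace(0.0, float(depth.max()), 257)          # 256 range gates
    profile, _ = np.histogram(depth, bins=gates, weights=power)
    return profile
```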
11759-4
Author(s): Brian Z. Bentz, John D. van der Laan, Andrew Glen, Christian A. Pattyn, Brian J. Redman, Andres L. Sanchez, Karl R. Westlake, Sandia National Labs. (United States); Ryan L. Hastings, Kevin J. Webb, Purdue Univ. (United States); Jeremy B. Wright, Sandia National Labs. (United States)
Digital Forum: On-demand
Degraded visual environments such as fog pose a major challenge to safety and security because light is scattered by tiny particles. We show that by interpreting the scattered light it is possible to detect, localize, and characterize objects normally hidden in fog. First, a computationally efficient light-transport model is presented that accounts for the light reflected and blocked by an opaque object. Then, statistical detection is demonstrated for a specified false-alarm rate using the Neyman-Pearson lemma. Finally, object localization and characterization are implemented using the maximum likelihood estimate. These capabilities are being tested at the Sandia National Laboratories Fog Chamber Facility. Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia LLC, a wholly owned subsidiary of Honeywell International Inc., for the U.S. Department of Energy National Nuclear Security Administration.
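The abstract's light-transport model is not reproduced here, but the Neyman-Pearson thresholding step can be sketched for the simplified textbook case of a known signal in white Gaussian noise (the template, noise model, and false-alarm rate below are illustrative):

```python
import numpy as np
from scipy.stats import norm

def np_detect(measurement, template, noise_sigma, pfa=1e-3):
    """Likelihood-ratio (matched-filter) detection of a known signal in
    white Gaussian noise, thresholded at a specified false-alarm rate.

    Under the no-target hypothesis the statistic is zero-mean Gaussian with
    standard deviation noise_sigma * ||template||, so the threshold follows
    directly from the desired pfa rather than from any prior on targets."""
    statistic = float(measurement @ template)            # matched-filter output
    threshold = noise_sigma * np.linalg.norm(template) * norm.isf(pfa)
    return statistic > threshold, statistic, threshold
```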
Human Performance
11759-5
Author(s): Kharananda Sharma, U.S. Army Aeromedical Research Lab. (United States); Jon F. Vogl, Thomas H. Harding, U.S. Army Aeromedical Research Lab. (United States), Goldbelt Frontier, LLC, 5500 Cherokee, Ave # 100 (United States)
Digital Forum: On-demand
Army Aviation mission scenarios carry inherent risks, and none appears greater than degraded visual environment (DVE) flight conditions. Approximately 25% of Army rotary-wing accidents and approximately 50% of fatalities are due to DVE. As part of the Aviator Risk Assessment Model (AvRAM), DVE obscurants (i.e., rain, fog, dust, smoke, and snow) have been modeled using published sensor-penetration data for visible, infrared, and lower-frequency bands. Visibility calculations, as well as time and distance calculations for target detection, were completed for several sensor configurations and sensitivities, environmental conditions, and airspeeds.
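A minimal sketch of the kind of visibility and time-to-detect arithmetic involved, using the classical Koschmieder contrast-decay relation rather than AvRAM's published sensor-penetration data (the extinction coefficient and contrast threshold are illustrative):

```python
import numpy as np

def detection_range_km(extinction_per_km: float,
                       contrast_threshold: float = 0.02,
                       intrinsic_contrast: float = 1.0) -> float:
    """Koschmieder-style detection range through a uniform obscurant.

    Apparent contrast decays as C(R) = C0 * exp(-sigma * R); the target
    remains detectable while C(R) exceeds the sensor/observer threshold."""
    return np.log(intrinsic_contrast / contrast_threshold) / extinction_per_km

def warning_time_s(range_km: float, airspeed_knots: float) -> float:
    """Seconds of warning once the target enters detection range."""
    speed_km_per_s = airspeed_knots * 1.852 / 3600.0
    return range_km / speed_km_per_s

# Example: fog with sigma = 3/km and a 2% contrast threshold gives
# ~1.3 km of visibility and ~21 s of warning at 120 knots.
```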
11759-6
Author(s): Frank L. Kooi, Alexander Toet, Maarten A. Hogervorst, Piet Bijl, TNO (Netherlands)
Digital Forum: On-demand
While AR has been successfully deployed in the military air domain for decades, its use in the ground domain poses serious challenges. Some of these challenges result from technological limitations. Others, however, are more difficult or even impossible to resolve because they reflect fundamental human characteristics. The toughest-to-solve limitations are caused by our physiology, anatomy, and cognition. Eye-physiology limitations include masking, contrast, and occlusion. The anatomical shape of the human head forces optics to be mounted in front of the glare-protecting eye sockets. The brain’s problems with respect to AR are that i) it is not built to perceive transparency, ii) it has limited cognitive capacity, and iii) it intuitively uses a world-referenced system. We provide an in-depth analysis of these human-factors limitations. Conclusion: fundamental human limitations seriously constrain see-through AR systems for the infantry and should be considered in their design and deployment.
11759-7
Author(s): Will Irvin, Claire Goldie, Christopher O'Brien, Christopher J. Aura, Leonard A. Temme, Michael Wilson, U.S. Army Aeromedical Research Lab. (United States)
Digital Forum: On-demand
To ensure safe mission completion, Army aviators must be prepared to execute appropriate emergency procedures (EPs) under a range of conditions. Virtual reality (VR) technologies provide novel opportunities to study the factors that determine the quality of EP execution in a safe environment that approximates the real world. USAARL is developing an EP research simulator within the Unity Real-Time Development Platform, currently instantiated with HTC Vive Pro hardware; Pupil Labs hardware and software have been integrated with the HTC Vive Pro. To date, an engine fire and a single-engine failure have been implemented in the USAARL EP research simulator.
11759-8
Author(s): Jordan J. Garner, Michael D'Zmura, Univ. of California, Irvine (United States)
Digital Forum: On-demand
Virtual Reality (VR) technology lets users train for high-stakes situations in the safety of a virtual environment (VE). Yet user movement through such an environment can cause postural instability and motion sickness. These issues are often attributed to how the brain processes visual self-motion information in VEs. Low-contrast conditions, like those caused by dense fog, are known to affect observers’ self-motion perception, but it is not clear how posture, motion sickness, and navigation performance are affected by this kind of visual environment degradation. Ongoing work using VR focuses on three aspects of this problem. First, we verify the effects in VR of low contrast on visual speed estimates. Second, we test how contrast reduction affects posture control, motion sickness, and performance during a VR navigation task. Third, we examine whether it is useful to augment low-contrast conditions with high-contrast visual aids in the environment.
11759-9
Author(s): Bonnie Posselt, Royal Air Force (United Kingdom), Air Force Research Lab. (United States), Univ. of Birmingham (United Kingdom); Eric Seemiller, KBR (United States); Eric Palmer, Geno Imel, KBR, Inc. (United States); Marc Winterbottom, Steve Hadley, Air Force Research Lab. (United States)
Digital Forum: On-demand
Helmet-mounted displays (HMDs) are becoming a critical part of the aircraft system. Information displayed in an HMD can be presented binocularly to the aviator, with stereoscopic depth. This could improve performance, but the benefit may be linked to the stereo acuity of the user. This research investigates flying performance in a simulator while wearing an SA Photonics SA-62 HMD that displays warning alerts with various degrees of disparity, giving the perception of stereo 3D. Results may inform stereo-acuity vision standards for military aviators as well as HMD design requirements.
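For intuition about the disparities involved, a small worked calculation (the symbol depths and interpupillary distance here are illustrative, not the study's parameters):

```python
import numpy as np

def disparity_arcmin(depth_m: float, fixation_m: float, ipd_m: float = 0.063) -> float:
    """Binocular disparity (arcminutes) of a symbol rendered at depth_m while
    the aviator fixates at fixation_m, for interpupillary distance ipd_m.

    Disparity is the difference in vergence angle between the two distances;
    in small-angle form: delta ~= ipd * (1/d1 - 1/d2)."""
    delta_rad = ipd_m * (1.0 / depth_m - 1.0 / fixation_m)
    return np.degrees(delta_rad) * 60.0

# Example: a warning cue rendered at 10 m against 30 m fixation subtends
# ~14.4 arcmin of disparity, well above typical stereo acuity (~1 arcmin).
```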
Information Presentation
11759-11
Author(s): Patrick Fowler, U.S. Army Combat Capabilities Development Command, C5ISR Ctr. (United States); Michael Smith, CACI International Inc. (United States)
Digital Forum: On-demand
This paper studies head-motion profiles from twenty-seven individual Warfighters conducting operationally relevant scenarios in order to better understand the phenomenology of head motion in high-intensity environments; this understanding will improve the design of future combat systems. As the military transitions from analog to digital technology, the scientific community is confronted with new dynamic parameters of system performance, specifically frame rate, refresh rate, and latency. For helmet-mounted visual augmentation systems (VAS), the impact of these parameters is most evident during head movement. The source data were collected with small inertial measurement unit (IMU) data loggers affixed to the Warfighters’ helmets to capture each Warfighter’s observation vector. These data are analyzed to determine the unique characteristics of those head movements and thereby aid requirements generation for digital VAS, including mixed reality (MR).
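A minimal sketch of the kind of summary statistics such IMU logs support, and why they matter for digital VAS requirements (the sample layout and latency figure are illustrative assumptions):

```python
import numpy as np

def head_motion_stats(gyro_dps: np.ndarray) -> dict:
    """Summarise a helmet IMU gyroscope log (N x 3 samples, deg/s).

    High-percentile angular rates bound how display latency turns into
    angular registration error: at 300 deg/s, 20 ms of latency displaces
    a world-fixed symbol by 6 degrees."""
    speed = np.linalg.norm(gyro_dps, axis=1)  # total angular speed per sample
    return {"peak_dps": float(speed.max()),
            "p95_dps": float(np.percentile(speed, 95)),
            "peak_error_deg_at_20ms": float(speed.max() * 0.020)}
```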
11759-13
Author(s): Christian Walko, Malte-Jörn Maibach, Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
Digital Forum: On-demand
This paper describes the integration process and flight testing of the Microsoft HoloLens 2 as a head-mounted display in DLR’s research helicopter FHS. In previous work, the HoloLens was integrated into a helicopter simulator. In migrating the HoloLens to a real helicopter, the main challenge was head tracking, because the device is not designed to operate on moving vehicles. Tracking errors due to vehicle acceleration are reduced with a system-identification approach, and an external tracker compensates for drift. A Kalman filter with nonlinear weighting is used for the fusion, and the calibration is automated using an optimization approach. All pre-tests were carried out in a car. Flight tests have shown that the overall quality of this head-mounted display solution is very good, without perceptible jitter or latency. With the provided methods, the HoloLens 2 can be used very well in such a rapid development pipeline for research on augmented reality assistance.
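The paper's filter details are not given in the abstract; the toy update below only illustrates the idea of a nonlinearly weighted correction, where the gain rises with the size of the disagreement between the internal and external trackers (all parameters are assumptions):

```python
def fuse_yaw(internal_yaw: float, external_yaw: float, sigma_deg: float = 2.0) -> float:
    """One heading-fusion step (degrees): a drift-free external tracker
    corrects the HMD's internal estimate. The gain grows nonlinearly with
    the innovation, so sensor noise is largely ignored while large drift
    or acceleration-induced errors are corrected quickly."""
    innovation = (external_yaw - internal_yaw + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
    gain = innovation**2 / (innovation**2 + sigma_deg**2)               # nonlinear weight in [0, 1)
    return internal_yaw + gain * innovation
```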
Systems
11759-15
Author(s): Paul L. Wisely, Aardvark Aerospace Ltd. (United Kingdom)
Digital Forum: On-demand
We have designed a visual flight-guidance system that enables manual control of aircraft operations in degraded visual environments down to CAT III, for both take-off and landing. This is achieved by means of visual guidance cues displayed on a head-up display (HUD); while that is not in itself novel, we believe our development methods and our approach to verifying the system's operation are. To certify the system as airworthy, compliance with the relevant airworthiness standards defined in 14 CFR Part 21 and other related guidance material must be demonstrated. The challenge in this case was to harmonize existing flight-guidance algorithms with a faithful aero model of a new airframe and to demonstrate their effectiveness in a manner that was economically viable. To achieve this, we constructed a Digital Simulation and Evaluation Environment (DSVE) hosting a Digital Twin (DT) of the system, such that its operation in real conditions could be accurately predicted.
11759-17
Author(s): Christopher Reardon, Univ. of Denver (United States); Jason M. Gregory, Carlos P. Nieto-Granda, John G. Rogers, U.S. Army Research Lab. (United States)
Digital Forum: On-demand
Robots, equipped with powerful modern sensors and perception algorithms, have enormous potential to use what they perceive to provide enhanced situational awareness to their human teammates. One such type of information is the changes the robot detects in the environment since a previous observation. A major challenge in sharing this information with the human is the interface: how to properly aggregate change-detection data, present it succinctly for the human to interpret, and allow the human to interact with the detected changes, e.g., to label them, discard them, or even task the robot to investigate. In this work we address this challenge through the design of a mixed reality interface for displaying and interacting with changes detected by an autonomous robot teammate. We believe the outcomes of this work have significant applications for Soldiers interacting with any type of high-volume, autonomously generated information in multi-domain operations (MDO).
11759-18
Author(s): Brian Reily, Colorado School of Mines (United States); Christopher Reardon, Univ. of Denver (United States); Hao Zhang, Colorado School of Mines (United States)
Digital Forum: On-demand
11759-20
Author(s): Theron T. Trout, Stormfish Scientific Corp. (United States); Erik Risinger, Univ. of Massachusetts Amherst (United States); Tiffany Raber, Mark S. Dennison, U.S. Army Combat Capabilities Development Command (United States)
Digital Forum: On-demand
Efforts are underway across the defense and commercial industries to develop cross-reality (XR), multi-user operation centers in which human users can perform their work while aided by intelligent systems. At their core is the objective of accelerating decision-making and improving efficiency and accuracy. However, presenting data to users in an XR, multi-dimensional environment dramatically increases extraneous information density. Intelligent systems offer a potential mechanism for mitigating information overload while ensuring that critical and anomalous data are brought to the attention of human users in an immersive interface. This paper describes such a prototype system, which combines real and synthetic motion sensors that, upon detecting an event, send a captured image to a YOLO cluster for processing. Finally, we describe how a future system could integrate a decision-making component to determine which data to present to human users in an XR environment.
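A sketch of the glue logic such a pipeline implies; the queue layout, confidence threshold, and the run_yolo() stand-in are hypothetical, not the prototype's actual interfaces:

```python
import queue
import threading

events = queue.Queue()      # (sensor_id, frame) pushed by motion sensors
detections = queue.Queue()  # (sensor_id, label, confidence, box) for the XR display

def run_yolo(frame):
    """Stand-in for a request to the YOLO inference cluster; replace with a
    real client call. Returns a list of (label, confidence, box) tuples."""
    return []

def worker():
    while True:
        sensor_id, frame = events.get()
        for label, confidence, box in run_yolo(frame):
            if confidence > 0.5:  # drop low-confidence clutter before display
                detections.put((sensor_id, label, confidence, box))

threading.Thread(target=worker, daemon=True).start()
```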
Poster Session
11759-21
Author(s): Joshua H. Girard, Dylan C. Burns, Dryver R. Huston, Tian Xia, The Univ. of Vermont (United States)
Digital Forum: On-demand
This research combines penetrating radar with 3D computer vision for real-time, augmented-reality-enabled target sensing. Positioning systems for smaller radar systems are inaccurate, non-portable, and challenged by poor GPS signals. Adding computer vision to penetrating-radar technology expands the 2D imaging plane to six degrees of freedom. Each radar scan is a vector whose length corresponds to depth from the transmit and receive antennae; combined, these technologies generate an accurate 3D model of the internal structure of any material the radar can penetrate. The computer-vision device can also drive an augmented reality system, with applications in threat detection as well as civil buried-object detection. The project goal is to create a data-registration pipeline and visually display the data in a 3D environment using localization from a computer-vision tracking device.
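A minimal sketch of the registration step, assuming a 6-DoF pose from the vision tracker and a fixed beam direction per scan (the data layout is an illustrative assumption):

```python
import numpy as np

def register_ascan(pose_position, pose_rotation, samples, meters_per_sample):
    """Place one radar A-scan into world coordinates using a 6-DoF pose
    from the computer-vision tracker.

    pose_position: (3,)   antenna position in the world frame
    pose_rotation: (3,3)  rotation matrix, world from antenna frame
    samples:       (N,)   radar amplitudes along the beam
    """
    boresight = pose_rotation @ np.array([0.0, 0.0, 1.0])  # beam direction in world frame
    depths = np.arange(len(samples)) * meters_per_sample
    points = pose_position + np.outer(depths, boresight)   # (N, 3) voxel centers
    return np.column_stack([points, samples])              # x, y, z, amplitude
```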
Front Matter: Volume 11759
Conference Chair
Mark S. Dennison
U.S. Army Research Lab. (United States)
Conference Chair
David M. Krum
California State Univ., Los Angeles (United States)
Conference Chair
John N. Sanders-Reed
Image & Video Exploitation Technologies, LLC (United States)
Conference Co-Chair
NASA Langley Research Ctr. (United States)
Program Committee
Northrop Grumman Corp. (United States)
Program Committee
SA Photonics, Inc. (United States)
Program Committee
U.S. Army Research Lab. (United States)
Program Committee
Michael D'Zmura
Univ. of California, Irvine (United States)
Program Committee
U.S. Army Research Lab. (United States)
Program Committee
U.S. Army Research Lab. (United States)
Program Committee
Western Carolina Univ. (United States)
Program Committee
NanoQuantum Sciences, Inc. (United States)
Program Committee
Michael N. Geuss
U.S. Army Research Lab. (United States)
Program Committee
MOVES Institute at the Naval Postgraduate School (United States)
Program Committee
Sandia National Labs. (United States)
Program Committee
U.S. Army Research Lab. (United States)
Program Committee
Naval Information Warfare Ctr. Pacific (United States)
Program Committee
Thales Visionix, Inc. (United States)
Program Committee
Deutsches Zentrum für Luft- und Raumfahrt e.V. (Germany)
Program Committee
Kim Pollard
U.S. Army Research Lab. (United States)
Program Committee
CCDC Data & Analysis Ctr. (United States)
Program Committee
Adrienne J. Raglin
U.S. Army Research Lab. (United States)
Program Committee
Univ. of Denver (United States)
Program Committee
U.S. Army Research Lab. (United States)
Program Committee
Evan Suma Rosenberg
Univ. of Minnesota, Twin Cities (United States)
Program Committee
Stormfish Scientific Corp. (United States)
Program Committee
Suya You
U.S. Army Research Lab. (United States)