
Proceedings Paper

Occluded object reconstruction for first responders with augmented reality glasses using conditional generative adversarial networks
Author(s): Kyongsik Yun; Thomas Lu; Edward Chow

Paper Abstract

Firefighters face a variety of life-threatening risks, including line-of-duty deaths, injuries, and exposure to hazardous substances; technology that helps reduce these risks is therefore important. We built a partially occluded object reconstruction method that runs on augmented reality glasses for first responders. We used deep learning based on conditional generative adversarial networks to train associations between images of flammable and hazardous objects and their partially occluded counterparts. Given a new, partially occluded flammable object, our system reconstructed the complete object image. Finally, the reconstructed image was superimposed on the input image to provide "transparency". By learning the shapes of flammable objects and the characteristics of flames, the system imitates the way humans learn the laws of physics through experience.
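
The abstract describes an image-to-image conditional GAN trained on pairs of occluded and complete object images, with the reconstruction blended back onto the input. The following is a minimal PyTorch sketch of that kind of pipeline; the specific architecture (a small encoder-decoder generator, a PatchGAN-style discriminator conditioned on the occluded input, an L1 reconstruction term, and the overlay() blending step) is an illustrative assumption, not the authors' published implementation.

# Sketch of a conditional GAN for occluded-object reconstruction.
# Layer sizes, loss weights, and the overlay step are assumptions for illustration.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a partially occluded RGB image to a reconstructed RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )
    def forward(self, occluded):
        return self.net(occluded)

class Discriminator(nn.Module):
    """Scores (occluded, candidate) pairs, i.e. it is conditioned on the input image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # PatchGAN-style logits
        )
    def forward(self, occluded, candidate):
        return self.net(torch.cat([occluded, candidate], dim=1))

def training_step(G, D, opt_g, opt_d, occluded, complete, l1_weight=100.0):
    """One update on a batch of (occluded, complete) training pairs."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    fake = G(occluded)
    # Discriminator: real pairs should score high, generated pairs low.
    d_real = D(occluded, complete)
    d_fake = D(occluded, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # Generator: fool the discriminator while staying close to the complete image.
    d_fake = D(occluded, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, complete)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

def overlay(occluded, reconstructed, alpha=0.5):
    """Superimpose the reconstruction on the input to give a 'transparency' effect."""
    return alpha * reconstructed + (1 - alpha) * occluded

At inference time only the generator is needed: its output for a new occluded object is blended onto the camera frame with overlay(), which is what would produce the "transparency" effect shown in the AR glasses.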

Paper Details

Date Published: 30 April 2018
PDF: 7 pages
Proc. SPIE 10649, Pattern Recognition and Tracking XXIX, 106490T (30 April 2018); doi: 10.1117/12.2305151
Author Affiliations:
Kyongsik Yun, Jet Propulsion Lab. (United States)
Thomas Lu, Jet Propulsion Lab. (United States)
Edward Chow, Jet Propulsion Lab. (United States)


Published in SPIE Proceedings Vol. 10649:
Pattern Recognition and Tracking XXIX
Mohammad S. Alam, Editor(s)

© SPIE