
Proceedings Paper

Visualization central to sensor fusion in security systems
Author(s): C. Gaertner

Paper Abstract

When human lives are at stake, it is vital that the man-machine interface be as informative and accurate as possible. Multiple but non-integrated sensors, such as two-dimensional displays that attempt to depict the contents of packages, simply do not convey sufficient data to security personnel. In questionable circumstances, officials are forced to rely on intuition to decide whether or not a particular package or individual should be detained for further investigation. This intuitive process is adequate in most situations, but it depends on a particular individual's level of training, experience, and emotional and physical state. The key to airport security success is to understand how an experienced official charged with safeguarding a particular area fuses the data presented to him or her. Once this is understood, the cognitive process could be significantly automated and the shortcomings associated with the human element could be substantially eliminated. Simply fusing the output of multiple sensors into a central system and then applying an algorithm does not solve the problem: the speed and accuracy of current sensor fusion and AI techniques lag significantly behind what is available in the human mind. That is not to say the technology is unavailable; rather, its appropriate application has not yet been determined. An approach that first identifies which cues (visual and audible) are most important and useful to security personnel is essential. One answer incorporates a head-mounted display (HMD), preferably capable of displaying three-dimensional graphics, worn by personnel charged with protecting a particular port of entry. Depicted in the wide-angle HMD would be all available information from standard as well as test sensors. Data could be displayed in multiple formats, using a wide array of presentations, to provide the maximum amount of information for both the officer and the researcher.
To limit clutter, the unit would incorporate two features. The first is scaling, so that the display could be dynamically adjusted by a particular user. For instance, television cameras that pan an area may be considered less important than the X-ray images, so their windows could be made smaller; conversely, if a particular individual wants to zoom in on a camera image to get a closer look at a suspect, that "window" would be enlarged. The second is a head tracker, so that the display appears continuous: as users turn their heads, the image continues. This feature would be useful in instances where supporting sensor information was desired because of a cue shown in a previous sensor "window". Configuration tests must first be conducted using this system; preliminary studies would show what information to include and what else would be desirable. Central to this concept is the notion that the end user, the security official, is intimately involved in the development loop. Once it is determined what information is used, as well as how it is used, automation of the cognitive process could commence, yielding an efficient, automated, and fully integrated sensor system.
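The scaling feature described above amounts to allocating HMD screen area among sensor windows in proportion to user-assigned importance. As a minimal sketch (the paper does not specify an algorithm; all names here, such as `SensorWindow` and `scale_windows`, are illustrative assumptions), proportional allocation could look like this:

```python
# Hypothetical sketch of the abstract's "scaling" feature: each sensor feed is a
# window in the HMD whose on-screen size tracks a user-adjustable priority.
from dataclasses import dataclass


@dataclass
class SensorWindow:
    name: str        # e.g. "x-ray", "pan camera"
    priority: float  # user-assigned importance, must be > 0


def scale_windows(windows, total_area=1.0):
    """Allocate HMD display area to each window in proportion to its priority."""
    total_priority = sum(w.priority for w in windows)
    return {w.name: total_area * w.priority / total_priority for w in windows}


# A user deems the X-ray feed three times as important as the panning camera,
# so it receives three quarters of the available display area:
layout = scale_windows([SensorWindow("x-ray", 3.0), SensorWindow("camera", 1.0)])
```

Under this scheme, "zooming in" on the camera window to inspect a suspect is simply a priority change followed by re-scaling, which keeps the total display area fixed and the layout free of clutter.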

Paper Details

Date Published: 1 February 1994
PDF: 10 pages
Proc. SPIE 2093, Substance Identification Analytics, (1 February 1994); doi: 10.1117/12.172535
Author Affiliations:
C. Gaertner, GEC Marconi Avionics Inc. (United States)

Published in SPIE Proceedings Vol. 2093:
Substance Identification Analytics
James L. Flanagan; Richard J. Mammone; Albert E. Brandenstein; Edward Roy Pike M.D.; Stelios C. A. Thomopoulos; Marie-Paule Boyer; H. K. Huang; Osman M. Ratib, Editor(s)

© SPIE.