- Front Matter: Volume 6971
- Signal/Image Processing for Tracking
- Hardware Implementation
- System Applications
- Poster Session
Front Matter: Volume 6971
Front Matter: Volume 6971
This PDF file contains the front matter associated with SPIE
Proceedings Volume 6971, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
Signal/Image Processing for Tracking
A hardware neural network for target tracking
The Zero Instruction Set Computer (ZISC) is an integrated circuit devised by IBM to realize a restricted Coulomb
energy neural network. In our application, it functions as a parallel computer that calculates the correlation
coefficients between an input pattern and patterns stored in its neurons. We explored the possibility of using the
ZISC in a target tracking system by devising algorithms to take advantage of the ZISC's parallelism and testing
them on real video sequences. Our experiments indicate that the ZISC appreciably reduces computing
time compared with a sequential version of the algorithm.
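In software, the ZISC's parallel pattern match can be sketched sequentially as follows; the function names are ours, and the real chip evaluates all stored patterns simultaneously rather than in a loop.

```python
import math

def correlation(a, b):
    """Pearson correlation coefficient between two equal-length patterns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def best_match(input_pattern, stored_patterns):
    """Return the index and score of the stored pattern most correlated
    with the input -- the operation the ZISC performs across all its
    neurons in parallel."""
    scores = [correlation(input_pattern, p) for p in stored_patterns]
    idx = max(range(len(scores)), key=scores.__getitem__)
    return idx, scores[idx]

patterns = [[1, 2, 3, 4], [4, 3, 2, 1], [2, 2, 3, 5]]
idx, score = best_match([1, 2, 3, 5], patterns)
```

The sequential loop over `stored_patterns` is exactly what the hardware collapses into one parallel step, which is where the reported speed-up comes from.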
A unified framework for capturing facial images in video surveillance systems using cooperative camera system
Low-resolution, unsharp facial images are commonly captured from surveillance videos because of the long human-camera
distance and human movement. Previous works addressed this problem by using an active camera to capture close-up
facial images, without considering human movement or the mechanical delays of the active camera. In this paper, we
propose a unified framework for capturing facial images in video surveillance systems by using one static and one active
camera in a cooperative manner. Human faces are first located by a skin-color-based real-time face detection algorithm.
A stereo camera model is then employed to approximate the face location and velocity with respect to the
active camera. Given the mechanical delays of the active camera, the position of a target face after a given delay can be
estimated using a Human-Camera Synchronization Model. By controlling the active camera with the corresponding amounts
of pan, tilt, and zoom, a clear close-up facial image of a moving human can then be captured. We built the proposed
system in an 8.4-meter indoor corridor. Results show that the proposed stereo camera configuration can locate faces with
an average error of 3%. In addition, it captures clear facial images of a walking human on the first attempt in
90% of the test cases.
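The delay-compensation step can be sketched as follows, assuming a constant-velocity motion model over the slew interval; the function and parameter names are ours for illustration, not the paper's actual Human-Camera Synchronization Model.

```python
def predict_face_position(pos, vel, mech_delay):
    """Predict where the face will be once the active camera finishes
    its pan/tilt move, assuming constant velocity over the delay.
    pos and vel are (x, y, z) in meters and m/s; mech_delay in seconds."""
    return tuple(p + v * mech_delay for p, v in zip(pos, vel))

# Illustrative numbers: a face 4 m away walking laterally at 1.2 m/s,
# with a 0.25 s mechanical delay before the camera settles.
aim = predict_face_position((0.0, 1.6, 4.0), (1.2, 0.0, 0.0), 0.25)
```

The active camera is then commanded to point at `aim` rather than at the face's currently observed position, so the pan/tilt move and the target's motion cancel out.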
Multisensor 3D tracking for counter small unmanned air vehicles (CSUAV)
A variety of unmanned air vehicles (UAVs) have been developed for both military and civilian use. Large
UAVs are typically state owned, whereas small UAVs (SUAVs) may take the form of remote-controlled aircraft that are
widely available. The potential threat of these SUAVs to both military and civilian populations has led to research
efforts to counter these assets via track, ID, and attack. Difficulties arise from the small size and low radar cross section
when attempting to detect and track these targets with a single sensor such as radar or video cameras. In addition, clutter
objects make accurate ID difficult without very high resolution data, leading to the use of an acoustic array to support
this function. This paper presents a multi-sensor architecture that exploits sensor modes including EO/IR cameras, an
acoustic array, and future inclusion of a radar. A sensor resource management concept is presented along with
preliminary results from three of the sensors.
A robust real-time object detection and tracking system
We propose a real-time vehicle detection and tracking system from an electro-optical (EO)
surveillance camera. Real time object detection remains a challenging computer vision problem
in uncontrolled environments. The state-of-the-art AdaBoost technique serves as a
robust object detector. In addition to the commonly used Haar features, we propose to include
corner features to improve vehicle detection performance. Having the objects of
interest detected, we use the detection results to initialize the object tracking module. We propose
an advanced, adaptive particle-filtering based algorithm to robustly track multiple mobile targets
by adaptively changing the appearance model of the selected targets. We use the affine
transformation to describe the motion of the object across frames. By drawing multiple particles
on the transformation parameters, our approach provides high performance while facilitating
implementation of this algorithm in hardware with parallel processing. In order to recover from
lost tracks, which may result from an object leaving the frame or being occluded, we
use prior information (height-to-width ratio) and temporal information about the objects to
estimate whether tracking is reliable. The object detector is invoked at frames where tracking
fails. We also check for occlusion by comparing hue values within the rectangular region of the
current frame with those of the previous frame. Detection is re-initialized for the next frame if an
occlusion is detected in the current frame. The system performs well in both speed and accuracy
on real surveillance video.
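A single predict/weight/estimate cycle of a particle filter can be sketched as below. This is a deliberately minimal version over 2-D position only; the paper's filter draws particles over affine transformation parameters and adapts the appearance model, neither of which is reproduced here, and all names are ours.

```python
import math
import random

def particle_filter_step(particles, weights, observation,
                         motion_std=2.0, obs_std=5.0):
    """One predict/update cycle of a basic particle filter over 2-D position.
    `observation` is the measured (x, y) of the target this frame."""
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = [(x + random.gauss(0, motion_std),
                  y + random.gauss(0, motion_std)) for x, y in particles]
    # Update: reweight each particle by a Gaussian observation likelihood.
    ox, oy = observation
    weights = [w * math.exp(-((x - ox) ** 2 + (y - oy) ** 2)
                            / (2 * obs_std ** 2))
               for (x, y), w in zip(particles, weights)]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Estimate: weighted mean of the particle cloud.
    est = (sum(w * x for (x, _), w in zip(particles, weights)),
           sum(w * y for (_, y), w in zip(particles, weights)))
    return particles, weights, est

random.seed(0)
n = 500
particles = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(n)]
weights = [1.0 / n] * n
for frame_obs in [(50, 50), (52, 51), (54, 52)]:
    particles, weights, est = particle_filter_step(particles, weights, frame_obs)
```

Because every particle is weighted independently, the per-particle work parallelizes naturally, which is the property the abstract cites as making the algorithm hardware-friendly.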
Simulation of video sequences for an accurate evaluation of tracking algorithms on complex scenes
As target tracking attracts growing interest, the ability to reliably assess tracking algorithms
under all conditions is becoming essential. The evaluation of such algorithms requires a database of sequences
representative of the whole range of conditions in which the tracking system is likely to operate, together
with its associated ground truth. However, building such a database from real sequences and collecting the
associated ground truth is hardly feasible and very time-consuming.
Therefore, synthetic sequences are increasingly generated by complex, heavyweight simulation
platforms to evaluate the performance of tracking algorithms. Some methods have also been proposed using
simple synthetic sequences generated without such complex simulation platforms. These sequences are
generated from a finite number of discriminating parameters, and are statistically representative, as regards
these parameters, of real sequences. They are very simple and not photorealistic, but can be reliably used
for evaluating low-level tracking algorithms under any operating conditions.
The aim of this paper is to assess the reliability of these non-photorealistic synthetic sequences for evaluation
of tracking systems on complex-textured objects, and to show how the number of parameters can be
increased to synthesize more elaborate scenes and handle more complex dynamics, including occlusions
and three-dimensional deformations.
Hardware Implementation
Application of network control systems for adaptive optics
The communication architecture for most pointing, tracking, and high order adaptive optics control systems has been based
on a centralized point-to-point and bus based approach. With the increased use of larger arrays and multiple sensors,
actuators and processing nodes, these evolving systems require decentralized control, modularity, flexibility, redundancy,
integrated diagnostics, dynamic resource allocation, and ease of maintenance to support a wide range of experiments.
Network control systems provide all of these critical functionalities. This paper begins with a quick overview of adaptive
optics as a control system and communication architecture. It then provides an introduction to network control systems,
identifying the key design areas that impact system performance. The paper then discusses the performance test results of a
fielded network control system used to implement an adaptive optics system comprising: a 10 kHz, 32x32 spatial
self-referencing interferometer wave front sensor, a 705-channel "Tweeter" deformable mirror, a 177-channel "Woofer"
deformable mirror, ten processing nodes, and six data acquisition nodes. The reconstructor algorithm utilized a modulo-2π
wave front phase measurement and a least-squares phase un-wrapper with branch point correction. The servo control
algorithm is a hybrid of exponential and infinite impulse response controllers, with tweeter-to-woofer saturation offloading.
This system achieved a first-pixel-out to last-mirror-voltage latency of 86 microseconds, with the network accounting for 4
microseconds of the measured latency. Finally, the extensibility of this architecture will be illustrated, by detailing the
integration of a tracking sub-system into the existing network.
Control of a deformable mirror subject to structural disturbance
Future space-based deployable telescopes will be subject to non-atmospheric disturbances. Jitter and optical
misalignment on a spacecraft can be caused by mechanical noise of the spacecraft, and settling after maneuvers. The
introduction of optical misalignment and jitter can reduce the performance of an optical system resulting in pointing
error and contributing to higher order aberrations. Adaptive optics can be used to control jitter and higher order
aberrations in an optical system. In this paper, wavefront control methods for the Naval Postgraduate School adaptive
optics testbed are developed. The focus is on removing structural noise from the flexible optical surface using
discrete-time proportional-integral control with second-order filters. Experiments using the adaptive optics testbed successfully
demonstrate wavefront control methods, including a combined iterative feedback and gradient control technique. This
control technique results in a threefold improvement in RMS wavefront error over the individual controllers correcting
from a biased mirror position. Second-order discrete-time notch filters are also used to remove induced low-frequency
actuator and sensor noise at 2 Hz. Additionally, a 2 Hz structural disturbance is simulated on a Micromachined
Membrane Deformable Mirror and removed using discrete-time notch filters combined with an iterative closed-loop
feedback controller, showing a 36-fold improvement in RMS wavefront error over the iterative closed-loop feedback
alone.
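A second-order discrete-time notch filter of the kind described can be sketched as follows. The RBJ audio-EQ-cookbook biquad form, the Q value, and the 200 Hz sample rate are our illustrative assumptions, not the testbed's actual filter design or rates.

```python
import math

def notch_coeffs(f0, fs, q=10.0):
    """Second-order (biquad) discrete-time notch filter coefficients,
    RBJ cookbook form, for notch frequency f0 at sample rate fs."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1.0, -2 * math.cos(w0), 1.0]          # zeros on the unit circle at w0
    a = [1 + alpha, -2 * math.cos(w0), 1 - alpha]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def biquad_filter(b, a, x):
    """Direct-form-I filtering of sequence x (a is normalized, a[0] == 1)."""
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = b[0] * xn + b[1] * x1 + b[2] * x2 - a[1] * y1 - a[2] * y2
        x1, x2, y1, y2 = xn, x1, yn, y1
        y.append(yn)
    return y

# Remove a 2 Hz disturbance from a signal sampled at 200 Hz.
fs = 200.0
b, a = notch_coeffs(2.0, fs)
t = [n / fs for n in range(2000)]
noisy = [math.sin(2 * math.pi * 2.0 * ti) for ti in t]
clean = biquad_filter(b, a, noisy)
```

After the transient decays, the 2 Hz component is suppressed to near zero while frequencies away from the notch pass essentially unattenuated.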
A unique three-axis gimbal mechanism
Sin/cosine encoder interpolation methods: encoder to digital tracking converters for rate and position loop controllers
Pointing and tracking applications usually require relative gimbal angles to be measured for reporting and controlling the
line-of-sight angular position. Depending on the application, angular resolution and/or accuracy might jointly or
independently determine the angle transducer requirements. In the past decade, encoders have been increasingly taking
the place of inductive devices where the measurement of angles over a wide range is required. This is primarily due to
the fact that encoders are now achieving very high resolution in smaller sizes than was previously possible. These
advances in resolution are primarily due to improved encoder disk and detector technology along with developments in
interpolation techniques. Measurement accuracy, on the other hand, is primarily determined by mounting and bearing
eccentricity as it is with all angular measurement devices. For very demanding accuracy requirements, some type of
calibration of the assembled system may be the only solution, in which case transducer repeatability is paramount. This
paper describes a unique encoder-to-digital tracking converter concept for improving interpolation of optical encoders.
The new method relies on Fraunhofer diffraction models to correct the non-ideal sin/cos outputs of the encoder
detectors. Diffraction model concepts are used in the interpolation filters to predict the phase of non-ideal sin and cosine
encoder outputs. The new method also minimizes many of the open loop pre-processing requirements and assumptions
that limit interpolation accuracy and rate loop noise performance in ratiometric tracking converter designs.
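The baseline ratiometric (arctangent) interpolation that the proposed converter improves on can be sketched as below; the function and its parameters are our illustration, and the paper's Fraunhofer-diffraction phase correction is not reproduced.

```python
import math

def interpolate_angle(sin_out, cos_out, line_index, lines_per_rev):
    """Ratiometric interpolation of a sin/cos encoder: the coarse angle
    comes from the line count, the fine angle from the phase within one
    line pitch, recovered with a four-quadrant arctangent."""
    fraction = math.atan2(sin_out, cos_out) / (2 * math.pi) % 1.0
    return (line_index + fraction) * 360.0 / lines_per_rev

# A quarter of the way into line 100 of a 3600-line encoder.
phase = 2 * math.pi * 0.25
angle = interpolate_angle(math.sin(phase), math.cos(phase), 100, 3600)
```

With ideal quadrature outputs this recovers the angle exactly; the interpolation error the paper targets comes precisely from the detector outputs not being the ideal sin/cos assumed here.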
Analog, non-mechanical beam-steerer with 80 degree field of regard
We present a novel electro-optic architecture for non-mechanical laser beam steering with a demonstrated 80
degrees of steering in a chip-scale package. To our knowledge this is the largest angular coverage ever achieved by
non-mechanical means. Even higher angular deflections are possible with our architecture both in the plane of the
waveguide and out of the waveguide plane. In the present paper we describe the steering in the plane of the waveguide
leaving the out-of-plane scanning mechanism to be detailed in a subsequent publication. In order to realize this
performance we exploit an entirely new electro-optic architecture. Specifically, we utilize liquid crystals (LCs), which
have the largest known electro-optic response, as an active cladding layer in an LC-waveguide geometry. This
architecture exploits the benefits of liquid crystals (large tunable index), while circumventing historic LC limitations.
LC-waveguides provide unprecedented macroscopic (>1 mm) electro-optic phase delays. When combined with
patterned electrodes, this provides a truly analog, "Snell's-law-type" beam-steerer. With only two control electrodes we
have realized an 80 degree field of view for 1550 nm light. Furthermore, the waveguide geometry keeps the light from
ever coming into contact with an ITO electrode, thereby permitting high optical power transmission. Finally, the
beam-steering devices have sub-millisecond response times.
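The "Snell's-law-type" steering can be illustrated with plain Snell refraction across a boundary between two electrically set effective indices. The index values and angle below are illustrative only, not the device's measured parameters.

```python
import math

def refraction_angle(theta_in_deg, n1, n2):
    """Snell's law: propagation angle after crossing an index boundary
    from a region of index n1 into one of index n2. Valid only below
    the critical angle (no total internal reflection handled)."""
    s = n1 / n2 * math.sin(math.radians(theta_in_deg))
    return math.degrees(math.asin(s))

# Illustrative: voltage-tuned cladding shifts the effective index from
# 1.7 to 1.5 across a patterned-electrode boundary, bending a ray
# incident at 45 degrees away from the normal.
steered = refraction_angle(45.0, 1.7, 1.5)
```

Because the liquid-crystal cladding lets the effective index on each side of the electrode boundary be tuned continuously, the output angle sweeps continuously as well, which is what makes the steering analog rather than stepped.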
System Applications
Deriving predictive turbulence data models
We present a novel algorithm taking measurements of time, solar irradiance, wind speed, peak wind speed, temperature gradient, and relative humidity to derive a predictive differential equation for mean Cn2. Our method derives individual control terms and forcing functions by modeling macro-structure, micro-structure, and fine-structure terms independently. The final model is suitable for analysis and can serve as a baseline expectation model for in situ battlefield use, for predictive optical correction or slewing, and possibly for mitigating the effects of wind shear on artillery shells downrange.
Active and attentive LADAR scanning for automatic target recognition
In this work we examine the dynamic implications of active and attentive scanning for LADAR-based automatic
target/object recognition and show that a dynamically constrained, scanner-based ATR system's ability to
identify objects in real-time is improved through attentive scanning. By actively and attentively scanning only
salient regions of an image at the density required for recognition, the amount of time it takes to find a target
object in a random scene is reduced. A LADAR scanner's attention is guided by identifying areas-of-interest using
a visual saliency algorithm on electro-optical images of a scene to be scanned. Identified areas-of-interest are
inspected in order of decreasing saliency by scanning the most salient area and saccading to the next most salient
area until the object-of-interest is recognized. No ATR algorithms are used; instead, an object is considered to
be recognized when a threshold density of pixels-on-target is reached.
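The saliency-ordered inspection loop can be sketched as below; the region representation and the pixels-on-target threshold rule are our simplified assumptions about the scheme the abstract describes.

```python
def attentive_scan(regions, pixel_threshold):
    """Inspect areas-of-interest in order of decreasing saliency, stopping
    as soon as one yields enough pixels-on-target for 'recognition'.
    Each region is (saliency, pixels_on_target_if_scanned); the return
    value is the number of regions scanned, a proxy for scan time."""
    scanned = 0
    for saliency, pixels in sorted(regions, reverse=True):
        scanned += 1
        if pixels >= pixel_threshold:
            return scanned
    return scanned  # target never reached the density threshold

# Illustrative scene: the target sits in the second-most-salient region.
regions = [(0.9, 20), (0.7, 150), (0.4, 5)]
cost = attentive_scan(regions, pixel_threshold=100)
```

The benefit claimed in the abstract corresponds to `cost` being small when saliency correlates with the target's location, versus scanning every region at full density.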
Poster Session
Energy efficient collaborative target tracking by Gaussian Rao-Blackwellised particle filter in wireless sensor networks
Target tracking is one of the main applications of wireless sensor networks. Optimized computation and energy
dissipation are critical requirements for conserving the limited resources of sensor nodes. A new energy-efficient collaborative
target tracking algorithm based on particle filtering (PF) is presented. Assuming a cluster-based network infrastructure,
the collaborative scheme passes sensing and computation operations from one active cluster to
another, and an event-driven cluster-reforming approach is also proposed to even out the energy consumption distribution. At
each time step, measurements from three sensors are chosen at the current active cluster head to estimate and predict the
target motion, and the results are propagated among cluster heads to the sink. To save communication and
computation resources, we present a new particle filter algorithm called the Gaussian Rao-Blackwellised Particle Filter
(GRBPF), which approximates the posterior distributions by Gaussians, so that only the mean and covariance of the
Gaussians need to be communicated among cluster heads when the target enters another cluster. The GRBPF algorithm is
also more computationally efficient than the generic PF because it drops the resampling step. In a simulation comparison, a target
moves through the sensor network field and is tracked by both the generic PF and the GRBPF using our proposed
collaborative scheme. The results show that the latter works very well for target tracking in wireless sensor networks and
substantially reduces the total communication burden, thereby prolonging the lifetime of the network.
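The compression step behind the communication saving can be sketched as below: collapsing a weighted particle set to a Gaussian's mean and covariance, which is all a GRBPF-style handoff needs to transmit to the next cluster head. The 2-D state and the names are our illustrative assumptions.

```python
def gaussian_summary(particles, weights):
    """Collapse a weighted 2-D particle set to its (mean, covariance),
    the compact summary passed between cluster heads in place of the
    full particle set (5 numbers instead of 3 per particle)."""
    mx = sum(w * x for (x, _), w in zip(particles, weights))
    my = sum(w * y for (_, y), w in zip(particles, weights))
    cxx = sum(w * (x - mx) ** 2 for (x, _), w in zip(particles, weights))
    cyy = sum(w * (y - my) ** 2 for (_, y), w in zip(particles, weights))
    cxy = sum(w * (x - mx) * (y - my)
              for (x, y), w in zip(particles, weights))
    return (mx, my), ((cxx, cxy), (cxy, cyy))

# Four equally weighted particles at the corners of a square.
mean, cov = gaussian_summary([(0, 0), (2, 0), (0, 2), (2, 2)], [0.25] * 4)
```

The receiving cluster head can regenerate a fresh particle set by sampling from this Gaussian, so tracking continues seamlessly while the radio payload stays constant regardless of the particle count.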