Proceedings Volume 6712

Unconventional Imaging III


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 September 2007
Contents: 6 Sessions, 19 Papers, 0 Presentations
Conference: Optical Engineering + Applications 2007
Volume Number: 6712

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 6712
  • Active Imaging
  • Image Synthesis and Formation
  • Image Processing
  • Algorithm Optimization
  • Poster Session
Front Matter: Volume 6712
Front Matter: Volume 6712
This PDF file contains the front matter associated with SPIE Proceedings Volume 6712, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
Active Imaging
Phase and frequency stability for synthetic aperture LADAR
Thomas J. Karr, John H. Glezen, Henry E. Lee
We analyze the phase and frequency stability requirements of the optical phase reference in a synthetic aperture LADAR imager. There are three distinct frequency regions: (a) "low" frequencies, below the characteristic frequency of the coherent synthetic aperture; (b) "mid" frequencies, up to four times the characteristic frequency of the coherent synthetic aperture; and (c) "high" frequencies, up to and beyond the pulse repetition frequency. The low-frequency requirement is driven by the allowable quadratic phase error, which is related to the resolution of the impulse response. Mid- and high-frequency requirements are driven by the allowable peak and integrated sidelobe levels. We estimate the upper-bound and average power spectral densities (PSDs) required of the reference oscillator, parametrized by imaging geometry variables such as range, wavelength, and synthetic resolution.
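As a rough illustration of the low-frequency requirement, the sketch below fits a quadratic to a reference-oscillator phase record over one synthetic-aperture time and checks it against the conventional π/4 quadratic-phase-error rule of thumb from SAR practice. The aperture time, PRF, and threshold are assumptions for illustration, not the paper's derived values.

```python
# Minimal sketch: fit and check the quadratic phase error (QPE) of a reference-
# oscillator phase record over one synthetic-aperture time. The pi/4 QPE limit
# is a conventional SAR rule of thumb, assumed here for illustration; the paper
# derives its own requirements from the allowable impulse-response degradation.
import numpy as np

T_sa = 0.5                     # synthetic-aperture (coherent) time, s (assumed)
prf = 1000.0                   # pulse repetition frequency, Hz (assumed)
t = np.arange(0.0, T_sa, 1.0 / prf)

rng = np.random.default_rng(0)
phase = 0.02 * t**2 + 1e-3 * rng.standard_normal(t.size)  # toy phase record, rad

# Least-squares quadratic fit; the curvature term gives the QPE at the edge.
c2, c1, c0 = np.polyfit(t - t.mean(), phase, 2)
qpe = abs(c2) * (T_sa / 2.0) ** 2     # quadratic phase error at aperture edge, rad

print(f"QPE = {qpe:.4f} rad; within pi/4 criterion: {qpe < np.pi / 4}")
```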
Signal-to-noise ratios of coherent imaging LADAR
We analyze the signal and noise in coherent active laser imaging systems (LADARs). The principal LADAR noise sources are shot noise in the detection process and target fluctuations in the reflected signal, including speckle. The statistical relationships between signal and noise are similar for RADAR and LADAR. Four different metrics of "signal-to-noise" ratio are analyzed. C0 is the average number of detected photoelectrons neglecting speckle variance; C1 and C1' are the clutter-to-noise ratios (with and without bias) of the modulus image including speckle; and C2 is the clutter-to-noise ratio of the intensity image including speckle (always < 1). C1, C1', and C2 are determined by C0. Speckle (and other target fluctuations) affects coherent LADAR imagery much as it affects RADAR imagery, which uses C0 as the principal signal-to-noise metric. C0 is therefore also a valid measure of signal-to-noise ratio for coherent imaging LADAR, and analyses of coherent RADAR imagery noise can be applied to coherent LADAR imagery by replacing thermal noise kT with shot noise.
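The statistical behavior behind these metrics can be illustrated with a short Monte Carlo sketch: fully developed speckle is modeled as a negative-exponential intensity and detection as Poisson shot noise. The mean-to-standard-deviation ratios computed below are illustrative stand-ins for the paper's C1 and C2 definitions, not its exact expressions.

```python
# Monte Carlo sketch of the speckle + shot-noise statistics behind the
# C-metrics. Fully developed speckle is modeled as a negative-exponential
# intensity and detection as Poisson shot noise; the ratios below are
# illustrative stand-ins for the paper's C1/C2, not its exact expressions.
import numpy as np

rng = np.random.default_rng(1)
C0 = 10.0                        # mean detected photoelectrons per pixel (assumed)
n = 1_000_000

mean_pe = C0 * rng.exponential(1.0, n)   # negative-exponential speckle on the mean
counts = rng.poisson(mean_pe)            # shot-noise-limited detection

intensity = counts.astype(float)
modulus = np.sqrt(intensity)

cnr_int = intensity.mean() / intensity.std()   # stand-in for C2 (always < 1)
cnr_mod = modulus.mean() / modulus.std()       # stand-in for a C1-type modulus CNR
print(f"C0 = {C0}: intensity CNR = {cnr_int:.3f}, modulus CNR = {cnr_mod:.3f}")
```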
Technical assessment of a 100W CW fiber laser amplifier for Fourier telescopy imaging
Xiaojiang J. Pan, Dane W. Hult
The Fourier telescopy imaging technique requires coherent laser beams to generate high-contrast interference fringes at the remote object of interest. We have built a narrow-spectral-linewidth 100 W CW fiber laser amplifier and measured the coherent phase of its output beam with respect to the phase of its seed laser. We have demonstrated that the fiber laser amplifier we built preserves the coherent phase of its seed source. Clear interference fringes were obtained with an interferometer between the 100 W fiber laser amplifier output and its seed laser, without any active phase-control element. Our assessment shows that fiber laser amplifiers meet Fourier telescopy imaging requirements.
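A minimal calculation behind such a measurement is the fringe visibility, V = (Imax - Imin)/(Imax + Imin); the sketch below applies it to a synthetic interferogram standing in for the measured amplifier-versus-seed fringes.

```python
# Sketch: fringe visibility from an interferogram between the amplifier output
# and its seed, V = (Imax - Imin) / (Imax + Imin). Values near 1 indicate that
# the amplifier preserves the seed's coherent phase. The fringe data here are
# a synthetic stand-in for the measured interferometer output.
import numpy as np

x = np.linspace(0.0, 1.0, 2000)            # detector coordinate (assumed units)
visibility_true = 0.9
fringes = 1.0 + visibility_true * np.cos(2.0 * np.pi * 25.0 * x)  # toy fringes

i_max, i_min = fringes.max(), fringes.min()
V = (i_max - i_min) / (i_max + i_min)
print(f"measured fringe visibility V = {V:.3f}")
```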
Image Synthesis and Formation
Image synthesis from a series of coherent frames of pupil intensity
Since the aperture of an imaging system limits its spatial resolution, it is desirable to form larger apertures. By taking advantage of the coherence properties of laser light, it is possible to form an optical synthetic aperture array from many smaller, monolithic apertures. By doing this, one can expect to obtain higher spatial resolution than from existing monolithic apertures. Since it is difficult to recover the absolute phase of an optical field, it is desirable to form the synthetic aperture without interfering the light from the sub-apertures. This paper demonstrates a method of forming images using pupil-plane intensity measurements of coherently illuminated scenes; a low-resolution image is also used to supply a starting estimate for the algorithm. From this data model, a maximum likelihood estimator is formed.
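A minimal sketch of this data model follows. The paper forms a maximum likelihood estimator; the error-reduction (Gerchberg-Saxton-type) loop below is only a simple stand-in that alternates between the measured pupil modulus and image-plane constraints, with a known support playing the role of the low-resolution starting information.

```python
# Sketch of the data model: iterative recovery of the image from measured
# pupil-plane intensities. The paper forms a maximum-likelihood estimator;
# this error-reduction (Gerchberg-Saxton-type) loop is only a stand-in
# illustrating the two constraints: the measured pupil modulus and a finite,
# non-negative image-plane support.
import numpy as np

rng = np.random.default_rng(2)
N = 64
truth = np.zeros((N, N)); truth[24:40, 24:40] = rng.random((16, 16))  # toy scene
pupil_modulus = np.abs(np.fft.fft2(truth))       # "measured" |field| in the pupil

support = np.zeros((N, N), bool); support[24:40, 24:40] = True
estimate = np.where(support, 0.5, 0.0)           # crude low-res-style start

for _ in range(200):
    field = np.fft.fft2(estimate)
    field = pupil_modulus * np.exp(1j * np.angle(field))  # impose measured modulus
    estimate = np.fft.ifft2(field).real
    estimate = np.where(support, np.clip(estimate, 0.0, None), 0.0)  # image side

err = np.linalg.norm(estimate - truth) / np.linalg.norm(truth)
print(f"relative reconstruction error: {err:.3f}")
```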
Image formation by use of continuously self-imaging gratings and diffractive axicons
Guillaume Druart, Nicolas Guérineau, Riad Haïdar, et al.
When illuminated by a plane wave, continuously self-imaging gratings (CSIGs) produce a field whose intensity profile is a propagation- and wavelength-invariant biperiodic array of bright spots. In the case of an extended and incoherent source, we show that CSIGs produce multiple images of the source. The fundamental properties of these gratings will be derived. In particular, methods to assess the angular image quality of CSIGs will be introduced. It turns out that this new type of pinhole-array camera works on the same principle as diffractive axicons, which are known to produce wavelength-invariant nondiffracting beams. The formalism developed for CSIGs will also be extended to axicons. CSIGs and axicons both produce focal lines and can be robust over a wide field, at the cost of a trade-off with resolution. They also offer interesting properties in terms of compactness, achromaticity, and long depth of focus for imaging systems. However, compared with classical imaging systems, they produce degraded images, and image processing is necessary to restore these images. Experimental images obtained with these components in the visible and infrared spectral ranges will be presented.
Hyper-spectral imaging using an optical fiber transition element
The Bi-static Optical Imaging Sensor (BOIS) is a 2-D imaging sensor that operates in the short-wave infrared (SWIR) spectral regime over wavelengths from approximately 1.0 to 2.4 microns. The conceptual design of the sensor is based on integral field spectroscopy techniques. The BOIS sensor utilizes a fiber transition element consisting of multiple optical fibers to map the 2-D spatial input scene into a 1-D linear array for injection into a hyper-spectral imaging (HSI) sensor. The HSI spectrometer acquires fast-time-resolution snapshots (60 Hz) of the entire input target scene in numerous narrowband spectral channels covering the SWIR band. The BOIS sensor was developed to spatially observe the fast time-evolving radiative signature of targets over a variety of spectral bands, thus simultaneously characterizing the overall scene in four dimensions: two spatial, wavelength, and time. We describe the successful design, operation, and testing of a laboratory prototype version of the BOIS sensor as well as further development of a field version. The goal of the laboratory prototype was to validate the 4-D measurement concept of this unique design. We demonstrate the 2-D spatial remapping of the input scene (using SWIR laser and blackbody cavity sources) in multiple spectral channels from the spatial-versus-spectral pixel output of the HSI snapshot. We also describe data-processing algorithms developed to retrieve temperatures of the observed scene from the hyper-spectral measurements.
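The fiber-transition bookkeeping can be sketched as a fixed permutation that maps the 2-D scene onto the 1-D fiber line and back, one spectral channel at a time; the random permutation below is a stand-in for the real fiber layout.

```python
# Sketch of the fiber-transition bookkeeping: a fixed permutation maps the 2-D
# input scene onto the 1-D fiber line feeding the HSI spectrometer, and the
# inverse permutation restores each narrowband spectral channel to 2-D. The
# mapping below is a random stand-in; the real fiber layout defines it.
import numpy as np

rng = np.random.default_rng(3)
ny, nx, n_bands = 8, 8, 16
fiber_order = rng.permutation(ny * nx)     # assumed scene-pixel -> fiber mapping
inverse = np.argsort(fiber_order)

scene = rng.random((ny, nx))
line = scene.ravel()[fiber_order]          # 2-D scene -> 1-D fiber array

# Spectrometer output: one spectrum per fiber (fiber index x wavelength).
hsi_frame = np.outer(line, np.ones(n_bands))
band0 = hsi_frame[:, 0][inverse].reshape(ny, nx)  # remap one channel back to 2-D
assert np.allclose(band0, scene)
```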
A comparative study of algorithms for radar imaging from gapped data
Xiaojian Xu, Ruixue Luan, Li Jia, et al.
In ultra-wideband (UWB) radar imagery, there are often cases where the radar's operating bandwidth is interrupted for various reasons, either periodically or randomly. Such interruption produces gaps in the phase history data, which in turn result in artifacts in the image if conventional image reconstruction techniques are used. These artifacts can severely degrade the radar images. In this work, several novel techniques for artifact suppression in gapped-data imaging are discussed. These include: (1) a maximum-entropy-based gap-filling technique using a modified Burg algorithm (MEBGFT); (2) an alternative iterative deconvolution based on minimum entropy (AIDME) and its modified version, a hybrid max-min entropy procedure; (3) a windowed coherent CLEAN algorithm; and (4) two-dimensional (2-D) periodically-gapped Capon (PG-Capon) and APES (PG-APES) algorithms. The performance of these techniques is comparatively studied.
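As a simplified stand-in for the gap-filling idea behind MEBGFT, the sketch below fits a linear-prediction (autoregressive) model to the valid phase-history samples ahead of a gap and predicts across it; ordinary least squares replaces the modified Burg recursion for brevity, and the signal, gap location, and AR order are assumed.

```python
# Simplified stand-in for maximum-entropy gap filling: fit a linear-prediction
# (AR) model to the valid samples before a gap and predict across it. The
# paper's MEBGFT uses a modified Burg algorithm; ordinary least squares
# estimates the AR coefficients here for brevity.
import numpy as np

rng = np.random.default_rng(4)
n, p = 256, 8                                  # record length, AR order (assumed)
t = np.arange(n)
data = np.exp(1j * 2 * np.pi * 0.12 * t) + 0.05 * rng.standard_normal(n)
gap = slice(100, 130)                          # interrupted band (assumed)

filled = data.copy()
seg = filled[:gap.start]                       # valid samples before the gap
# Fit AR(p): seg[k] ~ sum_i a[i] * seg[k - 1 - i].
A = np.column_stack([seg[p - i - 1:len(seg) - i - 1] for i in range(p)])
a, *_ = np.linalg.lstsq(A, seg[p:], rcond=None)

for k in range(gap.start, gap.stop):           # forward prediction across the gap
    filled[k] = filled[k - p:k][::-1] @ a

print("gap rms error:", np.abs(filled[gap] - data[gap]).std())
```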
Estimating object shape from return flux measurements using a sinusoid beam dither method
Estimating the shape of a target can be an important task in long-range surveillance applications. In certain situations, obtaining an adequate spatial image of a target can be problematic, especially when the object size and distance require an exceedingly large receiving aperture, or when significant atmospheric turbulence exists between the target and the receiver. This paper discusses a simple sinusoidal-dithering laser illumination scheme that is capable of recovering low-spatial-frequency information about the object from the reflected flux. The approach is analyzed in the presence of the corrupting influence of beam jitter. The performance of the method is tested through simulations and laboratory experiments.
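The dither-and-demodulate idea can be sketched directly: sinusoidally scan the beam, record the total reflected flux, and demodulate at harmonics of the dither frequency. In the toy 1-D model below (beam shape, dither amplitude, and object profile all assumed), the harmonic content carries low-order information about the object's position and extent.

```python
# Sketch of the dither idea: sinusoidally scan the illumination beam, record
# the total reflected flux, and demodulate at harmonics of the dither
# frequency. Beam, dither amplitude, and object profile are assumed.
import numpy as np

x = np.linspace(-5.0, 5.0, 2048)
obj = (np.abs(x - 0.4) < 1.5).astype(float)    # toy object, offset from axis
dx = x[1] - x[0]

f_d, amp, n_t = 10.0, 0.8, 4000                # dither frequency (Hz), amplitude
t = np.linspace(0.0, 1.0, n_t, endpoint=False)
flux = np.empty(n_t)
for k, tk in enumerate(t):
    beam = np.exp(-((x - amp * np.sin(2 * np.pi * f_d * tk)) ** 2))  # dithered
    flux[k] = (obj * beam).sum() * dx          # total return-flux sample

for m in (1, 2):                               # lock-in demodulation at harmonics
    c = 2.0 * np.mean(flux * np.exp(-2j * np.pi * m * f_d * t))
    print(f"harmonic {m}: |c| = {abs(c):.4f}")  # m=1 tracks offset, m=2 extent
```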
Image Processing
Sampling artifacts, system design, and image processing
Imaging system resolution depends upon Fλ/d, where F is the focal ratio, λ is the wavelength, and d is the detector size. Assuming a 100% fill factor, no aliasing occurs when Fλ/d ≥ 2. However, sampling artifacts are quite acceptable in practice, and most systems have Fλ/d < 1. Sampling artifacts are most noticeable with periodic targets (bar patterns, picket fences, plowed fields, etc.). Since real targets are aperiodic, the sampling theorem (frequency-domain analysis) does not directly provide guidance in algorithm development. Sampling creates an edge-location ambiguity of one pixel. Phasing effects and edge ambiguity are often overlooked when designing image processing algorithms.
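The design condition in the first two sentences reduces to a one-line check; the example parameters below are assumed, and typical fielded values give Fλ/d < 1 as the abstract notes.

```python
# Quick check of the F*lambda/d sampling condition from the abstract: with a
# 100% fill factor the system is alias-free when F*lambda/d >= 2, while most
# fielded systems sit below 1. The example parameters are assumed.
f_number = 4.0          # focal ratio F (assumed)
wavelength = 4.0e-6     # mid-wave IR, m (assumed)
detector = 20.0e-6      # detector pitch d, m (assumed)

q = f_number * wavelength / detector
print(f"F*lambda/d = {q:.2f}; alias-free (>= 2): {q >= 2}")
# q < 1 here, so periodic targets (bar patterns, fences) will show sampling
# artifacts, and edge locations carry a one-pixel phasing ambiguity.
```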
New approaches to image super-resolution beyond the diffraction limit
Eamon Barrett, David W. Tyler, Paul M. Payton, et al.
By the techniques described in this paper, high spatial frequencies in a scene that are beyond the diffraction limit of an optical system can modulate user-generated low-spatial-frequency patterns prior to image formation and detection. The resulting low-spatial-frequency modulations, or "moiré patterns," lie within the optical pass-band and are therefore detectable. In favorable and controlled situations, the scene's high spatial frequencies can be reconstructed from multiple images containing these low-frequency modulations, and a single super-resolved image is synthesized. This approach to image super-resolution is feasible and does not violate well-established physical principles. The paper describes two phases of this ongoing research. In phase one, we investigate active remote imaging methods in which the low-frequency modulations are produced by controlling active illumination patterns projected onto the scene. In phase two, we investigate passive remote imaging methods in which diffracting structures are interposed between the scene and the camera to modulate the light fields prior to image formation and detection.
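The phase-one mechanism can be sketched in a few lines: multiplying a scene frequency above the optical cutoff by a projected sinusoid creates a difference (moiré) component inside the pass-band. The frequencies and the ideal low-pass stand-in for the diffraction-limited optics below are assumptions.

```python
# Sketch of the moire mechanism: a scene frequency f_s above the optical
# cutoff, multiplied by a projected sinusoid at f_i, yields a difference
# component at |f_s - f_i| inside the pass-band. An ideal low-pass filter
# stands in for the diffraction-limited optics; all frequencies are assumed.
import numpy as np

n = 1024
x = np.arange(n) / n
f_s, f_i, f_cut = 220.0, 200.0, 64.0       # scene, illumination, cutoff (cycles)

scene = 1.0 + 0.5 * np.cos(2 * np.pi * f_s * x)     # detail beyond the cutoff
illum = 1.0 + np.cos(2 * np.pi * f_i * x)           # projected sinusoidal pattern
spec = np.fft.fft(scene * illum)
spec[np.abs(np.fft.fftfreq(n, 1.0 / n)) > f_cut] = 0.0  # diffraction-limited optics
image = np.fft.ifft(spec).real

mag = np.abs(np.fft.rfft(image))
print("strongest in-band line:", mag[1:].argmax() + 1, "cycles (|f_s - f_i| = 20)")
```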
Digital and optical superresolution of low-resolution image sequences
Digital superresolution (DSR) is the process of improving image resolution by overcoming the sampling limit of an imaging sensor, while optical superresolution (OSR) is the recovery of object spatial frequencies with magnitudes higher than the diffraction limit of the imaging optics. This paper presents an integrated, Fisher-information-based analysis of the two superresolution (SR) processes applied to a sequence of sub-pixel-shifted images of an object whose support is precisely known. As we shall see, prior information about the object support makes it possible to achieve OSR whose fidelity in fact improves with increasing size of the image sequence. The interplay of the two kinds of SR is further explored by varying the ratio of the detector sampling rate to the Nyquist rate.
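For orientation, the setting this analysis applies to can be sketched as follows: several sub-pixel-shifted, undersampled frames of one object are interlaced back onto a fine grid (shift-and-add). The paper's contribution is the Fisher-information analysis of this setting, not this particular estimator; shifts here are known and noise-free.

```python
# Illustration of the DSR setting: undersampled, sub-pixel-shifted frames of
# one object are interlaced back onto a fine grid (shift-and-add). This is a
# stand-in for the setting analyzed in the paper, not its estimator; the
# shifts are known and there is no noise, so the interlacing is exact.
import numpy as np

rng = np.random.default_rng(5)
up = 4                                     # detector undersampling factor (assumed)
fine = rng.random(256)                     # 1-D object on the fine grid

frames = [fine[s::up] for s in range(up)]  # sub-pixel-shifted, undersampled frames
recon = np.empty_like(fine)
for s, fr in enumerate(frames):
    recon[s::up] = fr                      # interlace each frame onto the fine grid

assert np.allclose(recon, fine)            # exact here: known shifts, no noise
```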
Fourier image sharpness sensor for high-speed wavefront correction
Conventional adaptive optics systems use direct wavefront sensing, such as a Shack-Hartmann sensor, requiring a point source such as a natural star or a laser guide star. In situations where a natural guide star is not available or a laser guide star is not practical, it is beneficial to use an indirect wavefront sensing approach based upon information in the image itself. We are developing an image sharpness sensor using information found in the Fourier spectrum of the image. Since high spatial frequencies contain information about the edges and fine detail of the image, our premise is that maximizing the high spatial frequencies will sharpen the image. The Fourier transform of the image is generated optically (and essentially instantaneously), and then various spatial-frequency bands are filtered out with an opaque mask. The remaining Fourier spectrum is integrated optically, resulting in a single sharpness signal from a photodetector. The collected sharpness value is used in a closed loop to control the deformable mirror until the sharpness is maximized. We have created a simulation to study the sensor and its performance in an adaptive optics system; results and limitations will be discussed.
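A digital analogue of the sensor is easy to sketch: the sharpness signal is the integrated Fourier energy outside a low-frequency mask, and a scalar defocus term is adjusted to maximize it. In the real system the transform, masking, and integration happen optically and the loop drives a deformable mirror; the defocus model and mask radius below are assumptions.

```python
# Digital analogue of the sensor: sharpness = integrated Fourier energy
# outside a low-frequency mask; a crude hill climb on a scalar defocus term
# stands in for the closed loop that drives the deformable mirror. The toy
# defocus OTF and mask radius are assumptions.
import numpy as np

rng = np.random.default_rng(6)
N = 128
obj = rng.random((N, N)) < 0.02            # toy point-rich scene
fx = np.fft.fftfreq(N); FX, FY = np.meshgrid(fx, fx); R2 = FX**2 + FY**2

def blurred(defocus):
    otf = np.exp(-(defocus**2) * R2 * 4e3)        # toy defocus OTF (assumed)
    return np.fft.ifft2(np.fft.fft2(obj.astype(float)) * otf).real

def sharpness(img, r_mask=0.1):
    spec = np.abs(np.fft.fft2(img)) ** 2
    return spec[R2 > r_mask**2].sum()             # energy outside the mask

d, step = 2.0, 0.1
for _ in range(50):                               # crude closed-loop hill climb
    if sharpness(blurred(d + step)) > sharpness(blurred(d)):
        d += step
    else:
        d -= step
print(f"converged defocus ~ {d:.2f} (true optimum 0)")
```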
Real time phase diversity advanced image processing and wavefront sensing
Jean J. Dolne, Paul Menicucci, David Miccolis, et al.
This paper describes a state-of-the-art approach to real-time wavefront sensing and image enhancement. It explores Boeing's existing technology, which realizes a 50 Hz frame rate (with a path to 1 kHz and higher). At this higher rate, phase diversity will be readily applicable to compensating for distortions of large dynamic bandwidth, such as those of the atmosphere. We describe various challenges in aligning a two-camera phase diversity system. Such configurations make it almost impossible to process the captured images without additional upgrades to the algorithm to account for alignment errors. An example of such an error is the relative misalignment of the two images, the "best-focus" image and the diversity image, where it is extremely hard to maintain alignment to less than a fraction of one pixel. We show that algorithm performance increases dramatically when we account for these errors in the phase diversity estimation process. Preliminary evaluation has assessed a NIIRS increase of ~3 from the "best-focus" to the enhanced image. Such a performance improvement would greatly increase the operating range (or, equivalently, decrease the weight) of many optical systems.
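The registration problem mentioned above can be illustrated with a standard sub-pixel shift estimate: fit a parabola to the cross-correlation peak between the two images. This is a generic technique, not necessarily the paper's method; the test signal and the 0.3-pixel offset are synthetic.

```python
# Sketch of the alignment issue: estimate the relative shift between the
# "best-focus" and diversity images to sub-pixel precision by fitting a
# parabola to the cross-correlation peak, so the misregistration can be fed
# to the phase-diversity estimator. Test signal and offset are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 256
img = np.convolve(rng.random(n), np.ones(9) / 9, mode="same")  # smooth 1-D image

true_shift = 0.3
k = np.fft.fftfreq(n)
shifted = np.fft.ifft(np.fft.fft(img) * np.exp(-2j * np.pi * k * true_shift)).real

xcorr = np.fft.ifft(np.conj(np.fft.fft(img)) * np.fft.fft(shifted)).real
p = int(xcorr.argmax())
ym, y0, yp = xcorr[(p - 1) % n], xcorr[p], xcorr[(p + 1) % n]
frac = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)     # parabolic peak refinement
est = ((p + frac + n / 2) % n) - n / 2
print(f"estimated shift = {est:.2f} px (true {true_shift})")
```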
Algorithm Optimization
Piston phase error due to bending of delivery fiber
Coherent, in-phase delivery of high-power laser beams to transmitter telescopes through optical fibers is desirable for the Fourier telescopy (FT) imaging technique. One requirement on such delivery fibers is that they maintain their optical path length while being bent over 150 degrees. We have designed an apparatus and assessed the piston phase error versus both the radius of bending curvature and the total bending angle of an optical fiber. The bending apparatus we built can evaluate delivery fibers, and the result of bending a single-mode fiber indicates that bending-induced piston phase error can be neglected over the range of fiber diameters and bending radii of interest.
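The quantity at stake here is a back-of-the-envelope calculation: a bend-induced optical-path change ΔL produces a piston phase error φ = 2π·n_eff·ΔL/λ. The numbers below are assumptions chosen for scale, not the paper's measured values.

```python
# Back-of-the-envelope sketch: the piston phase error from a bend-induced
# optical-path-length change dL is phi = 2*pi*n_eff*dL/lambda, so sub-nanometer
# path stability corresponds to milliradian-level piston error. All numbers
# are assumptions, not the paper's measured values.
import numpy as np

wavelength = 1.064e-6      # m (assumed)
n_eff = 1.45               # effective index of the fiber mode (assumed)
dL = 0.5e-9                # bend-induced optical path change, m (assumed)

phi = 2.0 * np.pi * n_eff * dL / wavelength
print(f"piston phase error = {phi * 1e3:.2f} mrad")
```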
Maximum a-posteriori estimation of detector array non-uniformity and shift in sequences of short exposure images
David C. Dayton, John D. Gonglewski, Chad St. Arnauld
Most non-conventional approaches to image restoration of scenes observed over long atmospheric slant paths require multiple frames of short-exposure images taken with low-noise focal plane arrays. The individual pixels in these arrays often exhibit spatial non-uniformity in their response. In addition, base-motion jitter in the observing platform introduces a frame-to-frame linear shift that must be compensated for the multi-frame restoration to be successful. In this paper we describe a maximum a posteriori parameter estimation approach to the simultaneous estimation of the frame-to-frame shifts and the array non-uniformity. This approach can be incorporated into an iterative algorithm and implemented in real time as the image data are being collected. We present a brief derivation of the algorithm as well as its application to actual image data collected from an airborne platform.
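The alternating structure of such a joint estimate can be sketched in a toy 1-D form: given gain and offset, frame shifts follow from circular cross-correlation against a reference frame; given the shifts, per-pixel gain and offset follow from a least-squares fit across the registered frames. The paper derives a MAP estimator with priors and a real-time iteration; the unregularized loop below, with circular integer shifts, only shows the structure.

```python
# Toy alternating version of the joint shift / non-uniformity estimate.
# The paper's MAP estimator adds priors and real-time iteration; this
# unregularized loop with circular integer shifts only shows the structure.
import numpy as np

rng = np.random.default_rng(8)
n, n_frames = 128, 16
scene = np.convolve(rng.random(n), np.ones(5) / 5, mode="same")
gain_t = 1.0 + 0.1 * rng.standard_normal(n)        # true pixel gains
offs_t = 0.05 * rng.standard_normal(n)             # true pixel offsets
s_t = rng.integers(0, n, n_frames)                 # true per-frame shifts
frames = np.stack([gain_t * np.roll(scene, -s) + offs_t for s in s_t])
frames += 0.005 * rng.standard_normal(frames.shape)

g, o = np.ones(n), np.zeros(n)
for _ in range(5):
    corr = (frames - o) / g                        # non-uniformity-corrected
    ref = np.fft.fft(corr[0] - corr[0].mean())
    r = np.array([int(np.argmax(np.fft.ifft(
        ref * np.conj(np.fft.fft(f - f.mean()))).real)) for f in corr])
    scene_est = np.mean([np.roll(f, rk) for f, rk in zip(corr, r)], axis=0)
    X = np.stack([np.roll(scene_est, -rk) for rk in r])  # predicted clean frames
    xm, ym = X.mean(0), frames.mean(0)
    g = ((X - xm) * (frames - ym)).sum(0) / ((X - xm) ** 2).sum(0)
    o = ym - g * xm                                # per-pixel least squares

print("gain rms error:", float(np.sqrt(np.mean((g - gain_t) ** 2))))
```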
Evolutionary optimization and graphical models for robust recognition of behaviors in video imagery
Behavior analysis deals with understanding and parsing a video sequence to generate a high-level description of object actions and inter-object interactions. We describe a behavior recognition system that can model and detect spatio-temporal interactions between detected entities in a visual scene by using ideas from swarm optimization, fuzzy graphs, and object recognition. Two extensions of the particle swarm optimization algorithm are explored: the first uses classifier-based object recognition to detect entities in video scenes and then employs fuzzy graphs to model their associations, while the second searches directly for graph-based object associations. Our hierarchical generic event detection scheme uses fuzzy graphical models to represent the spatial associations as well as the temporal dynamics of the discovered scene entities. The spatial and temporal attributes of associated objects and groups of objects are handled in separate layers of the hierarchy. We also describe a new behavior specification language that helps the user easily describe the event to be detected using simple linguistic or graphical queries. Preliminary results are promising, and studies are underway to evaluate the use of the system in more complicated scenarios.
Swarm optimization methods for cognitive image analysis
We describe cognitive swarms, a new method for efficient visual recognition of objects in an image or video sequence that combines feature-based object classification with search mechanisms based on swarm intelligence. Our approach utilizes the particle swarm optimization (PSO) algorithm, a population-based evolutionary algorithm that is effective for optimization of a wide range of functions. PSO searches a multi-dimensional solution space for a global optimum using a population, or swarm, of "particles" that cooperate through a low-overhead communication scheme to search the solution space efficiently. We use a system of local and global swarms to detect and track multiple objects in video sequences. In our implementation, each particle in the swarm consists of a cascade of classifiers that utilize wavelet and edge-symmetry features to recognize objects. PSO update equations are used to control the movement of the swarm in solution space as the particles cooperate to find objects efficiently by maximizing classification confidence. By performing this optimization, the classifier swarm finds objects in the scene, determines their size, and optimizes other classifier parameters such as the object rotation angle. Map-based attention feedback is used to further increase the efficiency of cognitive swarms. Performance results are presented for human and vehicle detection.
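A minimal PSO loop over an (x, y, scale) search space is sketched below, with a synthetic unimodal "confidence" surface standing in for the cascade of wavelet and edge-symmetry classifiers; the inertia and acceleration constants are common textbook values, not the paper's settings.

```python
# Minimal particle swarm optimization over an (x, y, scale) search space. A
# synthetic "classifier confidence" surface stands in for the classifier
# cascade described above; w, c1, c2 are common textbook PSO constants.
import numpy as np

rng = np.random.default_rng(9)
target = np.array([60.0, 40.0, 2.0])               # hidden object state (toy)

def confidence(p):                                  # stand-in classifier score
    return np.exp(-0.001 * np.sum((p - target) ** 2, axis=-1))

n_p, w, c1, c2 = 30, 0.7, 1.5, 1.5                 # swarm size, PSO constants
lo, hi = np.array([0.0, 0.0, 0.5]), np.array([100.0, 100.0, 4.0])
pos = rng.uniform(lo, hi, (n_p, 3))
vel = np.zeros((n_p, 3))
pbest, pbest_val = pos.copy(), confidence(pos)

for _ in range(100):
    gbest = pbest[pbest_val.argmax()]
    vel = (w * vel + c1 * rng.random((n_p, 3)) * (pbest - pos)
                   + c2 * rng.random((n_p, 3)) * (gbest - pos))  # PSO update
    pos = np.clip(pos + vel, lo, hi)
    val = confidence(pos)
    improved = val > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]

print("found:", np.round(pbest[pbest_val.argmax()], 2), "target:", target)
```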
Poster Session
Range estimation based on multiple imaging
Qingguo Yang, Liren Liu, De'an Liu, et al.
We present a technique for passively sensing the three-dimensional structure of a scene using a single compact camera. The iris of a conventional camera is replaced by a mask with a prism array, forming multiple images, and the disparities between the sub-images are extracted to compute depth. The arrangement of the prism array can be regular or aperture-coded. In a regular arrangement, each prism forms an independent sub-image viewed from a portion of the aperture, like many mini-cameras encapsulated in one aperture. If the angle of each prism is designed properly, the sub-images can be separated from each other on the image detector plane, so that conventional methods of depth determination from stereo vision can be applied. Alternatively, the macro-prism array can be arranged in an aperture-coded fashion, and coded-aperture imaging methods can then be employed for depth sensing. Unlike the regular arrangement, the macro-prisms are positioned according to a coding array, such as a random or non-redundant array, so that the images viewed from each prism are superimposed. To reconstruct the final depth image, a corresponding decoding step is performed in the image processing. The passive ranging technique introduced above should be considered a multiple-imaging problem. Since only a single compact camera is used, we avoid the need for extrinsic camera calibration and greatly reduce the computational demands of the correspondence problem. The use of a refractive element (a prism array) instead of a pinhole array greatly increases light transmission and image resolution.
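For the regular arrangement, a pair of prism sub-images behaves like a stereo pair sharing one detector, so depth follows from the standard relation Z = fB/disparity; the sketch below uses assumed focal length, baseline, and pixel pitch for illustration.

```python
# Sketch of the regular-arrangement case: each prism acts as a mini-camera
# viewing through one part of the aperture, so depth follows from the
# standard stereo relation Z = f * B / disparity between two sub-images.
# Focal length, baseline, and pixel pitch below are assumed values.
f = 50e-3          # camera focal length, m (assumed)
B = 10e-3          # effective baseline between two prism sub-apertures, m (assumed)
pitch = 7.4e-6     # detector pixel pitch, m (assumed)

disparity_px = 13.5                        # measured sub-image disparity, pixels
Z = f * B / (disparity_px * pitch)
print(f"estimated range: {Z:.2f} m")       # ~5.0 m for these numbers
```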