Proceedings Volume 9987

Electro-Optical and Infrared Systems: Technology and Applications XIII


Purchase the printed version of this volume at proceedings.com or access the digital version at SPIE Digital Library.

Volume Details

Date Published: 16 December 2016
Contents: 9 Sessions, 39 Papers, 23 Presentations
Conference: SPIE Security + Defence 2016
Volume Number: 9987

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9987
  • Active Imaging
  • Electro-Optical System Design, Technology and Applications I
  • Electro-Optical System Design, Technology and Applications II
  • Image Processing
  • Electro-Optical Systems: Performance Evaluation
  • System Modelling
  • Detectors
  • Posters--Wednesday
Front Matter: Volume 9987
This PDF file contains the front matter associated with SPIE Proceedings Volume 9987 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Active Imaging
On the real performance of SWIR range-gated active imaging in scattering media
Frank Christnacher, Stéphane Schertzer, Nicolas Metzger, et al.
Range-gated active imaging is a well-known technique used for night vision or for vision enhancement in scattering environments. The elimination of backscattering effects leads to a significant increase in the vision range in such conditions. Surprisingly, although many authors assume that active imaging brings a gain in range when used in scattering environments, no study has systematically investigated and quantified the real gain provided by range gating in comparison with classical imaging systems under different controlled obscurant densities.

In this paper, we thoroughly examined the performance enhancement of laser range gating in comparison with a color camera representing human vision. On the one hand, we studied the influence of different types of obscurants and showed that the type of obscurant leads to very different results. On the other hand, we examined the influence of different technical parameters on the laser side and on the camera side, including the influence of range gating and of the gate shape. These experiments led us to conclude that a short laser pulse must be combined with a short camera integration time to acquire well-contrasted images in dense scattering media.
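
As a brief illustration of the timing relation that underlies range gating (a standard round-trip relation, not a result from the paper), the camera gate delay and gate width map directly onto the near and far edges of the imaged range slice; the sketch below assumes a co-located laser and camera.

    # Illustrative sketch (standard round-trip timing, not from the paper):
    # map camera gate delay and gate width to the imaged range slice.
    C = 299_792_458.0  # speed of light, m/s

    def gate_to_range(delay_s, gate_width_s):
        """Return the near and far edges (in metres) of the range slice
        selected by a gate opening at `delay_s` for `gate_width_s`."""
        r_near = C * delay_s / 2.0
        r_far = C * (delay_s + gate_width_s) / 2.0
        return r_near, r_far

    # Example: a 1 us delay with a 100 ns gate images a roughly 15 m deep
    # slice starting about 150 m from the sensor.
    print(gate_to_range(1e-6, 100e-9))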
Laser illumination and EO systems for covert surveillance from NIR to SWIR and beyond
Edgaras Dvinelis, Tomas Žukauskas, Mindaugas Kaušylas, et al.
One of the most important factors for success on the battlefield is the ability to remain undetected by the opposing forces while retaining the ability to detect all possible threats. Illumination and pointing systems working in the NIR and SWIR bands are presented. Wavelengths up to 1100 nm can be registered by the newest generation of image intensifier tubes, CCD and EMCCD sensors, whereas image intensifier tubes of generation III or older are limited to wavelengths up to ~900 nm [1]. Longer wavelengths of 1550 nm and 1625 nm are designed to be used with SWIR electro-optical systems and cannot be detected by any standard night vision system. Long-range SWIR illuminators and pointers have beam divergences down to 1 mrad and optical powers up to 1.5 W. Due to lower atmospheric scattering, SWIR illuminators and pointers can be used at extremely long distances of tens of kilometres, even in heavy weather conditions. Longer wavelengths of 2100 nm and 2450 nm are also presented; this spectral band is of great interest for direct infrared countermeasure (DIRCM) applications.

State-of-the-art SWIR and LWIR electro-optical systems are presented. Sensitive InGaAs sensors coupled with "fast" (low F/#) optical lenses can provide complete night vision, detection of all NIR and SWIR laser lines, and penetration through smoke, dust and fog. Finally, beyond-state-of-the-art uncooled microbolometer LWIR systems are presented, featuring ultra-high sensor sensitivities of 20 mK.
RGB-NIR active gated imaging
Nick Spooren, Bert Geelen, Klaas Tack, et al.
This paper presents multispectral active gated imaging in relation to the transportation and security fields. Active gated imaging is based on a fast gated camera and a pulsed illuminator, synchronized in the time domain to provide range-based images. We have developed a multispectral filter pattern deposited on a gated CMOS image sensor (CIS), combined with a pulsed near-infrared VCSEL illumination module. This paper covers the component-level description of the multispectral gated CIS, including the camera and illuminator units. Furthermore, the design considerations and characterization results of the spectral filters are presented, together with a newly developed image processing method.
Non-destructive testing of composite materials used in military applications by eddy current thermography method
Eddy current thermography is a new NDT technique for the detection of cracks in electrically conductive materials. It combines the well-established inspection techniques of eddy current testing and thermography. The technique uses induced eddy currents to heat the sample being tested, and defect detection is based on changes in the induced eddy current flow revealed by thermal visualization captured with an infrared camera. The advantage of this method is that it exploits the high performance of eddy current testing while eliminating the known problem of the edge effect. Especially for components of complex geometry, this is an important factor which may outweigh the increased expense of the inspection set-up. The paper presents the possibility of applying the eddy current thermography method for detecting defects in ballistic covers made of carbon fiber reinforced composites used in the construction of military vehicles.
Non-destructive testing of mid-IR optical fiber using infrared imaging
Marc-André Gagnon, Vincent Fortin, Réal Vallée, et al.
Optical fiber lasers offer the advantage of being relatively compact and efficient. However, the materials used for their fabrication, such as fluoride and chalcogenide glasses, must be free of defects in order to make efficient laser systems, and most existing quality control techniques are not compatible with chalcogenide fibers because of their limited transparency in the visible spectral range. For this reason, the Université Laval's Centre d'optique, photonique et laser (COPL), in Quebec City, Canada, has developed a novel non-destructive testing (NDT) methodology based on infrared imaging to address this problem. The results show how this simple screening technique eases the selection of high-quality fibers for the design of high-power mid-IR lasers.
Active vision systems based on powerful laser diode matrixes: design peculiarities and vision range
Denis V. Shabrov, Vladimir V. Kabanov, Yahor V. Lebiadok
The present paper is aimed at the development of a powerful illumination module based on a high-power AlGaAs/GaAs laser diode matrix with short laser pulses, a high repetition rate, specified radiation divergence characteristics and stabilized parameters. The developed modification of the illumination module produces powerful pulsed laser radiation at a wavelength of 846 nm with an effective pulse shape, a wide range of pulse durations from tens to hundreds of nanoseconds, a pulse repetition rate of up to 10 kHz, and a specified radiation angle of 27° × 8°.
Electro-Optical System Design, Technology and Applications I
SSUSI-lite: next generation far-ultraviolet sensor for characterizing geospace
Larry J. Paxton, John E. Hicks, Matthew P. Grey, et al.
SSUSI-Lite is an update of an existing sensor, SSUSI. The current generation of Defense Meteorological Satellite Program (DMSP) satellites (Block 5D3) includes a hyperspectral, cross-track imaging spectrograph known as the Special Sensor Ultraviolet Spectrographic Imager (SSUSI). SSUSI has been part of the DMSP program since 1990. SSUSI is designed to provide space weather information such as auroral imagery, ionospheric electron density profiles, and neutral density composition changes. The sensors that are flying today (see http://ssusi.jhuapl.edu) were designed in 1990-1992, and there have been significant improvements in flight hardware since then. The SSUSI-Lite instrument is more capable than SSUSI yet consumes half the power and has half the mass. The total package count (and, as a consequence, integration cost and difficulty) was reduced from 7 to 2. The scan mechanism was redesigned and tested and is a factor of 10 better. SSUSI-Lite can be flown as a hosted payload or a rideshare: it only needs about 10 watts and weighs under 10 kg. We will show results from tests of an intensified position-sensitive anode pulse-counting detector system. We use this approach because the SSUSI sensor operates in the far ultraviolet, from about 110 to 180 nm (0.11 to 0.18 microns).
Concept for an airborne real-time ISR system with multi-sensor 3D data acquisition
Laura Haraké, Hendrik Schilling, Christian Blohm, et al.
In modern aerial Intelligence, Surveillance and Reconnaissance operations, precise 3D information becomes indispensable for increased situation awareness. In particular, object geometries represented by texturized digital surface models constitute an alternative to a pure evaluation of radiometric measurements. Besides the level of detail of the 3D data, its timely availability is essential for quick decision-making.

Expanding the concept of our preceding remote sensing platform, developed together with OHB System AG and Geosystems GmbH, in this paper we present an airborne multi-sensor system based on a motor glider equipped with two wing pods; one carries the sensors, whereas the second pod downlinks sensor data to a connected ground control station using the Aerial Reconnaissance Data System of OHB. An uplink is created to receive remote commands from the manned mobile ground control station, which in turn processes and evaluates incoming sensor data. The system allows the integration of efficient image processing and machine learning algorithms.

In this work, we introduce a near real-time approach for the acquisition of a texturized 3D data model with the help of an airborne laser scanner and four high-resolution multi-spectral (RGB, near-infrared) cameras. Image sequences from nadir and off-nadir cameras make it possible to generate dense point clouds and also to texturize the facades of buildings. The ground control station distributes processed 3D data over a linked geoinformation system with web capabilities to off-site decision-makers. Since the accurate acquisition of sensor data requires boresight-calibrated sensors, we additionally examine the first steps of a camera calibration workflow.
Electro-Optical System Design, Technology and Applications II
Electro-optical muzzle flash detection
Jürgen Krieg, Christian Eisele, Dirk Seiffer
Localizing a shooter in a complex scenario is a difficult task. Acoustic sensors can be used to detect blast waves. Radar technology permits detection of the projectile. A third method is to detect the muzzle flash using electro-optical devices. Detection of muzzle flash events is possible with focal plane arrays, line and single element detectors. In this paper, we will show that the detection of a muzzle flash works well in the shortwave infrared spectral range. Important for the acceptance of an operational warning system in daily use is a very low false alarm rate. Using data from a detector with a high sampling rate the temporal signature of a potential muzzle flash event can be analyzed and the false alarm rate can be reduced. Another important issue is the realization of an omnidirectional view required on an operational level. It will be shown that a combination of single element detectors and simple optics in an appropriate configuration is a capable solution.
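
As an illustration of the temporal-signature screening described above (the threshold and the admissible duration window below are hypothetical, not values from the paper), a high-sample-rate detector stream can be filtered so that only transients with flash-like duration raise an alarm:

    import numpy as np

    # Hypothetical sketch: screen a high-sample-rate single-element detector
    # stream and keep only transients whose duration is plausible for a
    # muzzle flash. Threshold and duration limits are illustrative only.
    def flash_candidates(signal, fs, k_sigma=6.0, min_dur=0.5e-3, max_dur=5e-3):
        """Return (start, end) sample index pairs of candidate flash events."""
        above = signal > signal.mean() + k_sigma * signal.std()
        edges = np.flatnonzero(np.diff(above.astype(int)))   # rising/falling edges
        events = []
        for start, end in zip(edges[::2], edges[1::2]):
            duration = (end - start) / fs
            if min_dur <= duration <= max_dur:               # reject spikes and slow sources
                events.append((int(start), int(end)))
        return events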
Middle infrared hyperspectral imaging of adhesives, varnishes and inks on Al plate and papers by using a bolometer camera and an imaging type interferometer
Shigeru Sugawara, Mitsuhiro Yoshida, Tsubasa Saito, et al.
We built a hyperspectral imaging apparatus using middle-infrared light of 8–14 μm, which has a strong ability to identify organic materials, and attempted to visualize the distribution of organic materials that cannot be identified by the naked eye. For this purpose, we utilized a low-cost bolometer camera (Nippon Avionics Co., Ltd. C100V, Japan) for its easy availability, rather than an expensive mercury cadmium telluride (MCT) array sensor. To compensate for the low sensitivity of this bolometer, we adopted a Fourier-type spectroscopic system (Aoi Electronics Co., Ltd., Japan) using an imaging interferometer devised by Kagawa University, Japan; this interferometer has higher light-utilization efficiency than the Michelson interferometers used in popular interferometry techniques.

In this study, 4 types of adhesives, 9 types of varnishes and more than 50 types of inks were put on Al plates of size 10 cm × 10 cm and used as samples. Glossy paper for printing photos with an inkjet printer was also used as a sample. A 300 °C blackbody of size 15 cm × 15 cm was used as a light source. Spectra of 320 × 240 points were measured at a spectral resolution of approximately 9 cm⁻¹. The mirror was scanned only once, and the measurement time was approximately 30 s.

Hyperspectral images of adhesives, varnishes and inks on Al plate and paper were successfully measured. Spectra over 5 × 5-pixel neighborhoods were averaged, and the averaged spectra were compared with those measured by a commercially available Fourier transform infrared (FTIR) spectrometer. The averaged and measured spectra had absorption peaks at the same wavelengths. Furthermore, by analyzing the measured spectra, the distribution of substances invisible to the naked eye was visualized. Our results show that if low-absorbance organic materials are put on a high-reflectance surface such as an Al plate, middle-infrared hyperspectral images can be measured using a bolometer. Additionally, hyperspectral images of high-reflectance paper, such as glossy paper, can also be measured. Because a bolometer camera is much cheaper than an MCT array, hyperspectral imaging with such a camera has many potential applications. Moreover, an imaging interferometer, with its high efficiency of light utilization, is very well suited for this purpose.
Challenges and solutions for high performance SWIR lens design
M. C. Gardner, P. J. Rogers, M. F. Wilde, et al.
Shortwave infrared (SWIR) cameras are becoming increasingly attractive due to the improving size and resolution and decreasing prices of InGaAs focal plane arrays (FPAs). The rapid development of competitively priced HD-performance SWIR cameras has not been matched in SWIR imaging lenses, with the result that the lens is now more likely than the FPA to be the limiting factor in imaging quality. Adapting existing lens designs from the visible region by re-coating them for SWIR will improve total transmission, but diminished image-quality metrics such as MTF, and in particular degraded large-field-angle performance (vignetting, field curvature and distortion), are serious consequences.

To meet this challenge, original SWIR solutions are presented, including a wide-field-of-view fixed-focal-length lens for commercial machine vision (CMV) and a wide-angle, small, lightweight defence lens, and their relevant design considerations are discussed. Issues restricting suitable glass types are examined. The index and dispersion properties at SWIR wavelengths can differ significantly from their visible values, resulting in unusual glass combinations when matching doublet elements. The chosen materials simultaneously allow athermalization of the design as well as matched CTEs within the elements of doublets.

Recently, thinned backside-illuminated InGaAs devices have made Vis-SWIR cameras viable. The SWIR band is sufficiently close to the visible that the same constituent materials can be used for AR coatings covering both bands. Keeping the lens short and the mass low can easily result in high incidence angles, which in turn complicate coating design, especially when extended beyond SWIR into the visible band. This paper also explores the potential performance of wideband Vis-SWIR AR coatings.
Laser-induced damage threshold of camera sensors and micro-opto-electro-mechanical systems
The continuous development of laser systems towards more compact and efficient devices constitutes an increasing threat to electro-optical imaging sensors such as complementary metal-oxide-semiconductor (CMOS) and charge-coupled device (CCD) sensors. These types of electronic sensors are used in day-to-day life but also in military and civil security applications. In camera systems dedicated to specific tasks, micro-opto-electro-mechanical systems (MOEMS) such as a digital micromirror device (DMD) are also part of the optical setup. In such systems, the DMD can be located at an intermediate focal plane of the optics, where it is likewise susceptible to laser damage. The goal of our work is to enhance the knowledge of damaging effects on such devices exposed to laser light.

The experimental setup for the investigation of laser-induced damage is described in detail. Both pulsed and continuous-wave (CW) lasers are used as laser sources. The laser-induced damage threshold (LIDT) is determined by the single-shot method, increasing the pulse energy from pulse to pulse or, in the case of CW lasers, increasing the laser power.

Furthermore, we investigate the morphology of laser-induced damage patterns and the dependence of the number of destroyed device elements on the laser pulse energy or laser power. In addition to the destruction of single pixels, we observe after-effects such as persisting dead columns or rows of pixels in the sensor image.
Image Processing
Turbulence mitigation methods for sea scenarios
Judith Dijk, Klamer Schutte, Robert P. J. Nieuwenhuizen
Visual and infrared imagery is degraded by turbulence caused by atmospheric conditions. Because the degradation worsens with distance, turbulence especially hampers long-range observation. At sea this turbulence affects classification and identification of ships and other objects. State-of-the-art software-based processing algorithms that assume a static background will fail in such conditions because of the non-static sea background. Therefore, we propose an adapted processing chain aiming to provide optimal turbulence correction for ships seen in the camera view. First, we propose to use standard object detection and tracking methods to indicate the location of the ship. Subsequently, image registration is performed within the ship's region of interest, covering only the ship of interest. After this region-of-interest registration, standard turbulence mitigation software can be applied to the region of interest. For ships whose motion is not purely translational, we propose a two-step motion estimation using local optical flow.

In this paper we show results of this processing chain for sea scenarios using our TNO turbulence mitigation method. Ship data is processed using the algorithm proposed above and the results are analyzed both by human observation and by image analysis. The improvement of the imagery is shown qualitatively by examining details which cannot be seen without processing but can be seen with it. Quantitatively, the improvement is related to the energy per spatial frequency in the original and processed images and to the signal-to-noise improvement. This provides a model for the improvement of the results and is related to the improvement of the classification and identification range. The results show that with this novel approach the classification and identification range of ships is improved.
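
A minimal sketch of the chain described above, assuming a region of interest supplied by an external detector/tracker; phase-correlation registration and simple frame averaging stand in for the actual TNO registration and turbulence-mitigation steps:

    import cv2
    import numpy as np

    # Illustrative sketch (not the TNO implementation): register the tracked
    # ship's region of interest over a short frame window with phase
    # correlation, then average the aligned crops as a stand-in for the
    # actual turbulence-mitigation step.
    def mitigate_roi(frames, roi, n=10):
        x, y, w, h = roi                                   # ROI from an external detector/tracker
        stack = [f[y:y+h, x:x+w].astype(np.float32) for f in frames[-n:]]
        ref = stack[-1]
        aligned = []
        for img in stack:
            (dx, dy), _ = cv2.phaseCorrelate(ref, img)     # sub-pixel translation within the ROI
            m = np.float32([[1, 0, -dx], [0, 1, -dy]])
            aligned.append(cv2.warpAffine(img, m, (w, h)))
        return np.mean(aligned, axis=0)                    # placeholder for the real mitigation step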
Improvements in ship tracking in electro-optical and infrared data using appearance
Sebastiaan P. van den Broek, Robert P. J. Nieuwenhuizen, Noëlle M. Fischer, et al.
Naval ships have camera systems available to assist in performing their operational tasks. Some include automatic detection and tracking, assisting an operator by keeping a ship in view or by keeping collected information about ships. Tracking errors limit the use of camera information. When keeping a ship in view, an operator has to re-target a tracked ship if it is no longer automatically followed due to a track break, or if it is out of view. When following several ships, track errors require the operator to re-label objects.

Trackers make errors, for example due to inaccuracies in detection or motion that is not modeled correctly. Instead of improving the tracking using the limited information available from a single measurement, we propose a method in which tracks are merged at a later stage, using information over a small interval. This merging is based on spatiotemporal matching. To limit incorrect connections, unlikely connections are identified and excluded. For this we propose two different approaches: spatiotemporal cost functions are used to exclude connections with unlikely motion, and appearance cost functions are used to exclude connections between tracks of dissimilar objects. In addition, spatiotemporal cost functions are also used to select tracks for merging. For the appearance filtering we investigated different descriptive features and developed a method for indicating similarity between tracks. This method handles variations in features due to noisy detections and changes in appearance.

We tested this method on real data with nine different targets. It is shown that track merging results in a significant reduction in number of tracks per ship. With our method we significantly reduce incorrect track merges that would occur using naïve merging functions.
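
A hypothetical sketch of the merging decision (the Track fields, the velocity gate and the appearance threshold are illustrative choices, not those reported in the paper): two track fragments are connected only if the implied motion is plausible and their appearance descriptors are similar.

    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Track:                       # minimal stand-in for a track fragment
        start_time: float
        end_time: float
        start_pos: np.ndarray          # position of the first detection
        end_pos: np.ndarray            # position of the last detection
        features: np.ndarray           # per-detection appearance descriptors, shape (N, D)

    def can_merge(a, b, v_max=15.0, appearance_thresh=0.25):
        """Decide whether fragment `b` may continue fragment `a`."""
        dt = b.start_time - a.end_time
        if dt <= 0:
            return False
        # spatiotemporal gate: exclude connections that require unlikely motion
        if np.linalg.norm(b.start_pos - a.end_pos) / dt > v_max:
            return False
        # appearance gate: exclude connections between dissimilar objects;
        # the median over detections is robust against noisy frames
        fa, fb = np.median(a.features, axis=0), np.median(b.features, axis=0)
        cos_dist = 1.0 - fa @ fb / (np.linalg.norm(fa) * np.linalg.norm(fb))
        return cos_dist < appearance_thresh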
Multi-temporal anomaly detection technique
I. Dayan, S. Maman, D. G. Blumberg, et al.
In this paper, we present a variation on the LRX (Local RX) algorithm for detecting anomalies in multi-temporal images. Our algorithm assigns a relative weight to the Mahalanobis distance according to the number of times it appears in an image. Standard transitions between pixels are therefore not viewed as anomalous, while unusual transitions are assigned proportionally higher weights. Experimental results using our proposed algorithm versus previous algorithms on multi-temporal datasets show a significant improvement.
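
The sketch below is one possible interpretation of this weighting idea (function name, binning scheme and weighting rule are illustrative, not the authors' code): transitions between two acquisitions are scored with a Mahalanobis distance and re-weighted by how rarely similarly sized transitions occur in the image.

    import numpy as np

    # Interpretive sketch: weight each pixel's temporal-transition Mahalanobis
    # distance by the inverse frequency of similar distances in the image, so
    # that common transitions are suppressed and unusual ones emphasised.
    def weighted_transition_scores(img_t0, img_t1, n_bins=64, eps=1e-6):
        h, w, bands = img_t0.shape
        diff = (img_t1.astype(np.float64) - img_t0.astype(np.float64)).reshape(-1, bands)
        mu = diff.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(diff, rowvar=False) + eps * np.eye(bands))
        d = np.einsum('ij,jk,ik->i', diff - mu, cov_inv, diff - mu)   # squared Mahalanobis distance
        hist, edges = np.histogram(d, bins=n_bins)
        freq = hist[np.clip(np.digitize(d, edges[1:-1]), 0, n_bins - 1)]
        return (d / (freq + 1)).reshape(h, w)                         # rare transitions score higher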
Classifying objects in LWIR imagery via CNNs
The aim of the presented work is to demonstrate enhanced target recognition and improved false alarm rates for a mid- to long-range detection system utilising a Long Wave Infrared (LWIR) sensor. By exploiting high-quality thermal image data and recent techniques in machine learning, the system can provide automatic target recognition capabilities. A Convolutional Neural Network (CNN) is trained, and the classifier achieves an overall accuracy of > 95% for 6 object classes related to land defence. While the highly accurate CNN struggles to recognise long-range target classes due to low signal quality, robust target discrimination is achieved for challenging candidates. The overall performance of the methodology presented is assessed using human ground truth information, generating classifier evaluation metrics for thermal image sequences.
Real-time person detection in low-resolution thermal infrared imagery with MSER and CNNs
Christian Herrmann, Thomas Müller, Dieter Willersinn, et al.
In many camera-based systems, person detection and localization is an important step for safety and security applications such as search and rescue, reconnaissance, surveillance, or driver assistance. Long-wave infrared (LWIR) imagery promises to simplify this task because it is less affected by background clutter or illumination changes. In contrast to much related work, we make no assumptions about any movement of persons or the camera, i.e. persons may stand still and the camera may move, or any combination thereof. Furthermore, persons may appear at arbitrary near or far distances to the camera, leading to low-resolution persons at far distances.

To address this task, we propose a two-stage system consisting of a proposal generation method and a classifier that verifies whether the detected proposals really are persons. Instead of using all possible proposals, as in sliding-window approaches, we apply Maximally Stable Extremal Regions (MSER) and classify the detected proposals afterwards with a Convolutional Neural Network (CNN). The MSER algorithm acts as a hot-spot detector when applied to LWIR imagery: because the body temperature of persons is usually higher than the background, they appear as hot spots in the image. However, the MSER algorithm is unable to distinguish between different kinds of hot spots, so other LWIR sources such as windows, animals or vehicles will be detected as well. Still, by applying MSER the number of proposals is reduced significantly in comparison to a sliding-window approach, which allows employing the highly discriminative capabilities of deep neural network classifiers recently demonstrated in applications such as face recognition and image content classification. We use a CNN as classifier for the detected hot spots and train it to discriminate between person hot spots and all other hot spots. We specifically design a CNN that is suitable for the low-resolution person hot spots common in LWIR imagery applications and that is capable of fast classification.

Evaluation on several different LWIR person detection datasets shows an error rate reduction of up to 80 percent compared to previous approaches consisting of MSER, local image descriptors and a standard classifier such as an SVM or boosted decision trees. Further time measurements show that the proposed processing chain is capable of real-time person detection in LWIR camera streams.
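
A minimal sketch of the two-stage pipeline, assuming a trained classifier is available as a callable (person_classifier, the patch size and the probability threshold are placeholders, not the paper's settings):

    import cv2
    import numpy as np

    # Sketch of the two-stage pipeline: MSER hot-spot proposals followed by
    # CNN verification. `person_classifier` stands in for the trained network
    # (any callable returning a person probability for a fixed-size patch).
    def detect_persons(lwir_frame, person_classifier, patch_size=32, prob_thresh=0.5):
        gray = cv2.normalize(lwir_frame, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        mser = cv2.MSER_create()                   # stage 1: hot-spot proposals
        _, boxes = mser.detectRegions(gray)
        detections = []
        for (x, y, w, h) in boxes:
            patch = cv2.resize(gray[y:y+h, x:x+w], (patch_size, patch_size))
            p = person_classifier(patch)           # stage 2: CNN verification
            if p > prob_thresh:
                detections.append((x, y, w, h, p))
        return detections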
Improved colour matching technique for fused nighttime imagery with daytime colours
Previously, we presented a method for applying daytime colours to fused nighttime (e.g., intensified and LWIR) imagery (Toet and Hogervorst, Opt. Eng. 51(1), 2012). Our colour mapping not only imparts a natural daylight appearance to multiband nighttime images but also enhances the contrast and visibility of otherwise obscured details. As a result, this colourizing method leads to increased ease of interpretation, better discrimination and identification of materials, faster reaction times and ultimately improved situational awareness (Toet et al., Opt. Eng. 53(4), 2014). A crucial step in this colouring process is the choice of a suitable colour mapping scheme. When daytime colour images and multiband sensor images of the same scene are available, the colour mapping can be derived from matching image samples (i.e., by relating colour values to sensor signal intensities). When no exactly matching reference images are available, the colour transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image (Toet, Info. Fus. 4(3), 2003). In the current study we investigated new colour fusion schemes that combine the advantages of both methods, using the correspondence between multiband sensor values and daytime colours (first method) in a smooth transformation (second method). We designed and evaluated three new fusion schemes that focus on i) a closer match with the daytime luminances, ii) improved saliency of hot targets and iii) improved discriminability of materials.
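
As a reference point for the statistics-based end of this spectrum, a minimal sketch of a first-order statistics transfer is given below (an illustration, not one of the schemes evaluated in the paper; in practice the transfer is usually applied in a perceptually decorrelated colour space rather than directly in RGB):

    import numpy as np

    # Minimal sketch of a first-order statistics colour mapping: shift and
    # scale each channel of the false-colour fused night image so that its
    # mean and standard deviation match those of a daytime reference image.
    def match_first_order_statistics(fused_rgb, daytime_rgb):
        out = np.empty(fused_rgb.shape, dtype=np.float64)
        for c in range(fused_rgb.shape[-1]):
            src = fused_rgb[..., c].astype(np.float64)
            ref = daytime_rgb[..., c].astype(np.float64)
            out[..., c] = (src - src.mean()) / (src.std() + 1e-6) * ref.std() + ref.mean()
        return np.clip(out, 0, 255).astype(np.uint8)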
Electro-Optical Systems: Performance Evaluation
How to pass a sensor acceptance test: using the gap between acceptance criteria and operational performance
When acquiring a new imaging system for which operational task performance is a critical factor for success, it is necessary to specify minimum acceptance requirements that need to be met, using a sensor performance model and/or performance tests. Currently, there exists a variety of models and tests of different origin (defense, security, road safety, optometry), and they all make different predictions. This study reviews a number of frequently used methods and shows the effects that small changes in procedure or threshold criteria can have on the outcome of a test. For example, a system may meet the acceptance requirements but not satisfy the needs of the operational task, or the choice of test may determine the rank order of candidate sensors.

The goal of the paper is to make people aware of the pitfalls associated with the acquisition process, by i) illustrating potential tricks to have a system accepted that is actually not suited for the operational task, and ii) providing tips to avoid this unwanted situation.
SWIR, VIS and LWIR observer performance against handheld objects: a comparison
The short-wave infrared spectral range has attracted interest for use in daytime and nighttime military and security applications in recent years. This necessitates performance assessment of SWIR imaging equipment in comparison with equipment operating in the visual (VIS) and thermal infrared (LWIR) spectral ranges. In the military context, (nominal) range is the main performance criterion. Discriminating friend from foe is one of the main tasks in today's asymmetric scenarios, and so personnel, human activities and handheld objects are used as targets to estimate ranges. The latter were also used for an experiment at Fraunhofer IOSB to get a first impression of how SWIR performs compared to VIS and LWIR.

A human consecutively carrying one of nine different civil or military objects was recorded from five different ranges in the three spectral ranges. For the visual spectral range a 3-chip color camera was used, the SWIR range was covered by an InGaAs camera and the LWIR by an uncooled bolometer. It was ascertained that the nominal spatial resolutions of the three cameras were of the same magnitude in order to enable an unbiased assessment. Daytime conditions were selected for data acquisition to separate the observer performance from illumination conditions and, to some extent, camera performance. From the recorded data, a perception experiment was prepared. It was conducted as a nine-alternative forced-choice test with unlimited observation time and 15 participating observers. Before the experiment, the observers were trained on close-range target data. The outcome of the experiment was the average probability of identification versus range between camera and target.

The comparison of the range performance achieved in the three spectral bands gave a mixed result. On the one hand, a ranking VIS / SWIR / LWIR in decreasing order can be seen in the data, but on the other hand only the difference between VIS and the other bands is statistically significant. Additionally, it was not possible to explain the outcome with typical contrast metrics. Probably form is more important than contrast here, as long as the contrast is generally high enough. These results were unexpected and need further exploration.
Feature long axis size and local luminance contrast determine ship target acquisition performance: strong evidence for the TOD case
Piet Bijl, Alexander Toet, Frank L. Kooi
Visual images of a civilian target ship on a sea background were produced using a CAD model. The total set consisted of 264 images and included 3 different color schemes, 2 ship viewing aspects, 5 sun illumination conditions, 2 sea reflection values, 2 ship positions with respect to the horizon and 3 values of atmospheric contrast reduction. In a perception experiment, the images were presented on a display in a long darkened corridor. Observers were asked to indicate the range at which they were able to detect the ship and classify the following 5 ship elements: accommodation, funnel, hull, mast, and hat above the bridge. This resulted in a total of 1584 Target Acquisition (TA) range estimates for two observers. Next, the ship contour, ship elements and corresponding TA ranges were analyzed by applying several feature size and contrast measures. Most data coincide on a contrast versus angular size plot using (1) the long axis as characteristic ship/ship feature size and (2) local Weber contrast as characteristic ship/ship feature contrast. Finally, the data were compared with a variety of visual performance functions assumed to be representative for Target Acquisition: the TOD (Triangle Orientation Discrimination), MRC (Minimum Resolvable Contrast), CTF (Contrast Threshold Function), TTP (Targeting Task Performance) metric and circular disc detection data for the unaided eye (Blackwell). The results provide strong evidence for the TOD case: both position and slope of the TOD curve match the ship detection and classification data without any free parameter. In contrast, the MRC and CTF are too steep, the TTP and disc detection curves are too shallow, and all these curves need an overall scaling factor in order to coincide with the ship and ship feature recognition data.
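
For reference, the two measures that collapse the data are standard definitions (stated here for convenience, not quoted from the paper): the local Weber contrast of a ship element against its immediate background, and the angular subtense of its long axis at range R.

    % Standard definitions assumed here for convenience (not quoted from the paper)
    C_W \;=\; \frac{L_{\mathrm{element}} - L_{\mathrm{background}}}{L_{\mathrm{background}}},
    \qquad
    \theta \;\approx\; \frac{s_{\mathrm{long\ axis}}}{R}\quad\text{[rad]}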
Visual acuity performance of several observers using the triangle orientation discrimination methodology
Julia Mündel, Bärbel Geisel, Katrin Braesicke, et al.
The Triangle Orientation Discrimination (TOD) method is one of several methods to characterize electro-optical system performance. It is conducted by presenting an equilateral triangle pointing either up, down, right or left to an observer, who is forced to judge the direction. Based on the probability of a correct answer as a function of the size of the triangle, the quality of the system can be assessed. In order to gain experience with this method, it was applied here to test Fraunhofer IOSB's new equipment for perception experiments.

An experiment with four observers, ten contrast levels and six triangle sizes was conducted. Its results were analysed for observer performance versus time, illumination conditions and variations in the TOD curve. Furthermore, different approaches to analysing the data were compared.

The outcome showed the observers' performance variation on different days to be statistically insignificant. In addition, the illumination conditions had no statistically significant influence on the result. Interestingly, a larger difference was found between the observers. Although they had normal or corrected-to-normal eyesight, different visual acuity is the only explanation for the differences. This leads to the necessity of checking the observers in perception experiments more closely. The different approaches to curve fitting also gave variations, which would result in different ranges when applied in camera assessment. Here a standardization seems necessary when the method is applied in analytical models for imaging systems.
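
The sketch below shows one common way to fit such four-alternative forced-choice data: a Weibull psychometric function with a 25% guess rate, taking the 75%-correct point as the threshold size. It is an illustrative choice of fitting approach, not necessarily one of those compared in the paper.

    import numpy as np
    from scipy.optimize import curve_fit

    # Illustrative 4-AFC psychometric fit: Weibull with 25% guess rate,
    # threshold taken at 75% correct.
    def weibull_4afc(size, alpha, beta):
        return 0.25 + 0.75 * (1.0 - np.exp(-(size / alpha) ** beta))

    def fit_tod_threshold(triangle_sizes, fraction_correct):
        (alpha, beta), _ = curve_fit(weibull_4afc, triangle_sizes, fraction_correct,
                                     p0=[np.median(triangle_sizes), 2.0])
        # triangle size at which the fitted curve crosses 75% correct
        size_75 = alpha * (np.log(3.0)) ** (1.0 / beta)
        return size_75, alpha, beta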
System Modelling
The prediction of the optical contrast of air-borne targets against the night-sky background for Photopic and NVG sensors
Stephan Havemann, Gerald Wong
The Havemann-Taylor Fast Radiative Transfer Code (HT-FRTC) represents transmittances, radiances and fluxes by principal components that cover the spectra at very high resolution, allowing fast, highly resolved pseudo line-by-line, hyperspectral and broadband simulations across the electromagnetic spectrum from the microwave to the ultraviolet for satellite-based, airborne and ground-based sensors. HT-FRTC models clear atmospheres and those containing clouds and aerosols, as well as any surface (land/sea/man-made). The HT-FRTC has been used operationally in the NEON Tactical Decision Aid (TDA) since 2008. The TDA combines the HT-FRTC with a thermal contrast model and an NWP model forecast data feed to predict the apparent thermal contrast between different surfaces and ground-based targets in the thermal and short-wave IR. The new objective here is to predict the optical contrast of air-borne targets under realistic night-time scenarios in the Photopic and NVG parts of the spectrum. This requires the inclusion of all the relevant radiation sources, which include twilight, moonlight, starlight, airglow and cultural light. A completely new exact scattering code has been developed which allows the straightforward addition of any number of direct and diffuse sources anywhere in the atmosphere. The new code solves the radiative transfer equation iteratively and is faster than the previous solution. Simulations of scenarios with different light levels, from situations during a full moon to a moonless night with very low light levels and a situation with cultural light from a town, are presented. The impact of surface reflectance and target reflectance is investigated.
A first order analytical TOD sensor performance model
In this paper we present a new, analytical TOD model. The model provides an estimate of the TOD curve for an Optical, Electro-Optical or Thermal Infrared imaging system based on a limited number of essential system parameters. This is useful to get a quick Target Acquisition range prediction but also serves as a first order input to an image-based TOD simulation model. The model is based on a human observer performance dataset on TOD test patterns, systematically degraded by simulated sensor effects. The model is validated against a number of historical TOD tests on visual and thermal camera systems and provides excellent performance predictions.
Analysis on the detection capability of the space-based camera for the space debris
Chao Wang, Fugang Wang, Zhao Ye, et al.
Based on the maximum detection range, the detection capability of a space-based camera for space debris is analyzed in this paper. We apply a grid generation method to the debris target and analyze the shadowing effects among the grids, subsequently building a geometric model of a cone-shaped target. A calculation model of the optical infrared characteristics is established, taking into consideration the target's self-radiation and the radiation reflection characteristics of the surface material. In the simulation proposed in this paper, the radiation energy of the target depends only on reflected Earth radiation and its self-radiation. Based on the maximum detection range formula, the numerical simulation presented shows that when the space-based target radiation intensity is 21.54 W/sr and the optical system aperture is 0.5 m, the maximum detection range is 17279 km. The simulation results theoretically contribute to the estimation of camera parameters and to the analysis of the detection capability.
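
For orientation, a generic point-source form of the maximum detection range is given below (an assumed textbook formulation for illustration; the paper's exact formula and parameter values may differ). Here I is the target radiant intensity, τ_a and τ_o the atmospheric and optical transmittances, D the aperture diameter, NEP the detector noise-equivalent power and SNR the required signal-to-noise ratio.

    % Generic point-source detection range (illustrative assumption, not
    % necessarily the exact formulation used in the paper):
    R_{\max} \;=\; \sqrt{\frac{I\,\tau_a\,\tau_o\,(\pi D^2/4)}{\mathrm{SNR}\cdot\mathrm{NEP}}}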
Ray tracing simulation of aero-optical effect using multiple gradient index layer
We present a new ray-tracing simulation of aero-optical effects through the anisotropic, inhomogeneous medium formed by the supersonic flow field surrounding a projectile. The new method uses multiple gradient-index (GRIN) layers to construct the anisotropic inhomogeneous medium for the ray-tracing simulation. The cone-shaped projectile studied has a 19° semi-vertex angle, a sapphire window parallel to the cone surface, and an optical system assumed via paraxial optics and an infrared image detector. The conditions for the steady-state computational fluid dynamics (CFD) solver were Mach numbers 4 and 6, an altitude of 25 km, and 0° angle of attack (AoA). The gridded refractive index of the flow field, obtained via CFD analysis and the Gladstone-Dale relation, was discretized into equally spaced layers parallel to the projectile's window. Each layer was modeled as a 2D polynomial by fitting the refractive index distribution. The ray-set light source generated 3,228 rays for lines of sight (LOS) varying from 10° to 40°. The ray-tracing simulation adopted Snell's law in 3D to compute the paths of skew rays in the GRIN layers. The results show that the optical path difference (OPD) and boresight error (BSE) decrease exponentially as the LOS increases. The variation of the refractive index decreases as the speed of the flow field increases; the OPD and its rate of decay at Mach 6 have somewhat larger values than at Mach 4. Compared with the ray equation method, at Mach 4 and 10° LOS the new method shows good agreement, yielding a relative root-mean-square (RMS) OPD difference of 0.33% and a relative BSE difference of 0.22%. Moreover, the new method was more than 20,000 times faster than the conventional ray equation method. The technical details of the new method and simulation are presented together with results and implications.
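
The refraction step at each layer interface can be written in the standard vector form of Snell's law; the sketch below shows that step only (the GRIN-layer construction and the polynomial index fitting are not reproduced here).

    import numpy as np

    # Standard vector (3D) form of Snell's law, used here to refract a skew
    # ray at the planar interface between adjacent layers with indices n1, n2.
    def refract(direction, normal, n1, n2):
        d = direction / np.linalg.norm(direction)
        n = normal / np.linalg.norm(normal)
        if np.dot(d, n) > 0:              # make the normal face the incoming ray
            n = -n
        eta = n1 / n2
        cos_i = -np.dot(d, n)
        sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
        if sin2_t > 1.0:                  # total internal reflection
            return d - 2.0 * np.dot(d, n) * n
        return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n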
Detectors
True differential pyroelectric infrared detector with improved D* test results with analysis
Pyroelectric infrared detectors are used in many commercial and industrial applications. Typically these devices have been "single-ended", and thus any electronic perturbation from a non-detector-related noise source, such as line-frequency interference, a microprocessor clock or other sources of electronic noise, can be coupled onto the detector's output signal. We have solved this problem by employing a rather unique connection which also increases the signal-to-noise ratio of any pyroelectric detector by a factor of the square root of 2, i.e. about 1.41 times greater than devices not utilizing this connection.

Many devices using this connection have been built and fully tested, and the data have been analyzed; they provide a true differential (double-ended) output and the predicted increase in D*. This scheme will work with any pyroelectric material (LTO, DLATGS, PLZT, PVDF, etc.), with current- or voltage-mode impedance conversion, with configurations such as parallel or series, with or without temperature-fluctuation compensation, and of course with standard single elements. This talk will present these data and conclusions regarding the approach.
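
The square-root-of-two figure follows from a standard argument, assuming two matched elements with equal signal and equal, uncorrelated noise (stated here for clarity; the paper's full noise analysis is more detailed):

    % Assuming two matched elements, each with signal S and uncorrelated noise sigma:
    \mathrm{SNR}_{\mathrm{diff}}
      = \frac{2S}{\sqrt{\sigma^2+\sigma^2}}
      = \frac{2S}{\sqrt{2}\,\sigma}
      = \sqrt{2}\,\frac{S}{\sigma}
      = \sqrt{2}\;\mathrm{SNR}_{\mathrm{single}}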
A new monolithic approach for mid-IR focal plane arrays
Chengzhi Xie, Vincenzo Pusino, Ata Khalid, et al.
Antimonide-based photodetectors have recently been grown on GaAs substrates by molecular beam epitaxy (MBE) and reported to have performance comparable to devices grown on more expensive InSb and GaSb substrates. We demonstrated that GaAs, in addition to providing a cost-saving substrate for antimonide-based semiconductor growth, can be used as a functional material to fabricate transistors and realize addressing circuits for the heterogeneously grown photodetectors. Based on the co-integration of a GaAs MESFET with an InSb photodiode, we recently reported the first demonstration of a switchable, mid-IR-sensitive photo-pixel on a GaAs substrate that is suitable for large-scale integration into a focal plane array. In this work we report on the fabrication steps that we developed to deliver the integrated photo-pixel. Various highly controllable etch processes, both wet and dry, were established for distinct material layers. Moreover, in order to avoid thermally induced damage to the InSb detectors, a low-temperature annealed Ohmic contact was used, and the processing temperature never exceeded 180 °C. Furthermore, since there is a considerable etch step (> 6 μm) that metal must straddle in order to interconnect the fabricated devices, we developed an intermediate step using polyimide to provide a smoothing section between the lower MESFET and upper photodiode regions of the device. This heterogeneous technology creates great potential to realize a new type of monolithic focal plane array of addressable pixels for imaging in the medium-wavelength infrared range without the need for flip-chip bonding to a CMOS readout chip.
Advances in the characterization of InAs/GaSb superlattice infrared photodetectors
A. Wörl, V. Daumer, T. Hugger, et al.
This paper reports on advances in the electro-optical characterization of InAs/GaSb short-period superlattice infrared photodetectors with cut-off wavelengths in the mid-wavelength and long-wavelength infrared ranges. To facilitate in-line monitoring of the electro-optical device performance at different processing stages we have integrated a semi-automated cryogenic wafer prober in our process line. The prober is configured for measuring current-voltage characteristics of individual photodiodes at 77 K. We employ it to compile a spatial map of the dark current density of a superlattice sample with a cut-off wavelength around 5 μm patterned into a regular array of 1760 quadratic mesa diodes with a pitch of 370 μm and side lengths varying from 60 to 350 μm. The different perimeter-to-area ratios make it possible to separate bulk current from sidewall current contributions. We find a sidewall contribution to the dark current of 1.2×10⁻¹¹ A/cm and a corrected bulk dark current density of 1.1×10⁻⁷ A/cm², both at 200 mV reverse bias voltage. An automated data analysis framework can extract bulk and sidewall current contributions for various subsets of the test device grid. With a suitable periodic arrangement of test diode sizes, the spatial distribution of the individual contributions can thus be investigated. We found a relatively homogeneous distribution of both bulk dark current density and sidewall current contribution across the sample. With the help of an improved capacitance-voltage measurement setup developed to complement this technique a residual carrier concentration of 1.3×10¹⁵ cm⁻³ is obtained. The work is motivated by research into high performance superlattice array sensors with demanding processing requirements. A novel long-wavelength infrared imager based on a heterojunction concept is presented as an example for this work. It achieves a noise equivalent temperature difference below 30 mK for realistic operating conditions.
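
The perimeter-to-area separation relies on the standard model below (notation assumed here for illustration; it follows the usual convention rather than necessarily the paper's): for a mesa diode of area A and perimeter P, a linear fit of the measured dark current density versus P/A over diodes of different size yields the bulk term as the intercept and the sidewall term as the slope.

    % Standard bulk/sidewall dark-current separation assumed here:
    J_{\mathrm{meas}} \;=\; J_{\mathrm{bulk}} \;+\; \frac{P}{A}\,J_{\mathrm{sidewall}}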
A practical implementation of high resolution relative spectral response measurement of CMOS IRFPAs using Fourier Transform Infrared Spectrometer (FTIR)
Catherine Barrat, Thierry Lepot, Michael Ramamonjisoa, et al.
Accurate knowledge of IR detector specifications is becoming increasingly important, whatever the application. Among these specifications is the relative spectral response. The usual method of relative spectral response measurement uses a source spectrally defined by wavelength selection through a grating-based monochromator. This simple and proven method has a limited spectral resolution, since the signal received by the tested detector is proportional to the width of the wavelength selection slit, i.e. to the spectral resolution. Another method consists in using a Fourier Transform IR Spectrometer (FTIR), which easily allows a 1 cm⁻¹ spectral resolution even in the long-wave IR range. However, the implementation of this method requires a meticulous analysis of all the elements of the bench and all the parameters to avoid any misinterpretation of the results. Among the potential traps are the frequency dependence of the signals and the effect of parasitic fringes on the curves. Practical methods to correct the frequency dependence of the reference detector and to remove parasitic interference fringes are presented in this paper.
Ultracompact plasmonic sensor with graphene-based silicon reflector
Jicheng Wang, Xiaosai Wang, Ci Song, et al.
In this paper, we numerically investigate a monolayer of graphene placed on a doped-silicon grating and study the dependence of its transmission spectra on the geometrical parameters of the grating. A stop-band with great tunability in the mid-infrared region of the transmission spectra is obtained in a much more compact structure compared to a traditional fiber Bragg grating (FBG). In addition, by inserting a defect into the center of the structure, we introduce a π phase shift into the field, leading to an open window in the stop-band of the transmission spectra. With its good tunability and compact size, our proposed structure can be utilized as a graphene-based ultra-compact and highly sensitive plasmonic sensor for potential applications.
Posters--Wednesday
Space based optical staring sensor LOS determination and calibration using GCPs observation
Jun Chen, Wei An, Xinpu Deng, et al.
Line-of-sight (LOS) attitude determination and calibration is a key prerequisite for tracking and locating targets in space-based infrared (IR) surveillance systems (SBIRS), and the LOS determination and calibration of a staring sensor is one of the main difficulties. This paper provides a novel methodology for removing staring sensor bias through the use of Ground Control Points (GCPs) detected in the background field of the sensor. Based on the imaging model and characteristics of the staring sensor in the geostationary Earth orbit (GEO) part of the SBIRS, a real-time LOS attitude determination and calibration algorithm using landmark control points is proposed. The factors influencing the staring sensor LOS attitude error (including thermal distortion error, assembly error, and so on) are treated as equivalent to a bias angle of the LOS attitude. By establishing the observation equation of the GCPs and the state transition equation of the bias angle, and using an extended Kalman filter (EKF), real-time estimation of the bias angle and high-precision sensor LOS attitude determination and calibration are achieved. The simulation results show that the precision and timeliness of the proposed algorithm meet the requirements of the target tracking and location process in a space-based infrared surveillance system.
Active manipulating propagation in the graphene hybrid plasmonic waveguides in mid-infrared region
A novel hybrid plasmonic waveguide consisting of a graphene-coated V-groove and waveguide structure is proposed. Subwavelength confinement and propagation of the graphene surface plasmon polariton modes of the hybrid graphene-coated waveguide are achieved. The mode field energy can be well confined in the V-groove or the waveguide and can be adjusted by varying the chemical potential of the graphene. The mode confinement becomes weaker and the propagation length gets longer as the chemical potential of the graphene increases. In addition, adjusting the radius of the waveguide and the operating frequency changes the mode propagation, and higher-order modes are obtained. The finite element method (FEM) has been employed to study the mode distributions and electromagnetic responses of our designs at mid-infrared frequencies.
Analysis of the variation of range parameters of thermal cameras
Jarosław Bareła, Mariusz Kastek, Krzysztof Firmanty, et al.
Measured range characteristics may vary considerably (by up to several dozen percent) between different samples of the same camera type. The question is whether the manufacturing process somehow lacks repeatability or whether the commonly used measurement procedures themselves need improvement. The presented paper attempts to deal with this question. The measurement method has been thoroughly analyzed, as well as the measurement test bed. Camera components (such as the detector and optics) have also been analyzed and their key parameters measured, including the noise figures of the entire system. Laboratory measurements are the most precise method used to determine the range parameters of a thermal camera. However, in order to obtain reliable results, several important conditions have to be fulfilled. One must have test equipment capable of measurement accuracy (uncertainty) significantly better than the magnitudes of the measured quantities. The measurements must be performed in a controlled environment, thus excluding the influence of varying environmental conditions. The personnel must be well trained, experienced in testing thermal imaging devices and familiar with the applied measurement procedures. The measurement data recorded for several dozen cooled thermal cameras (from one of the leading camera manufacturers) have been the basis of the presented analysis. The measurements were conducted in the accredited research laboratory of the Institute of Optoelectronics (Military University of Technology).
Test stand for determining parameters of microbolometer camera
Michał Krupiński, Jarosław Bareła, Mariusz Kastek, et al.
In order to objectively compare two infrared cameras, one must measure and compare their parameters in a laboratory. One of the basic parameters for the evaluation of the designed camera is NEDT (noise equivalent delta temperature). In order to determine the NEDT, parameters such as sensitivity and pixel noise must be measured. To do so, one should register the output signal from the camera in response to the radiation of blackbodies at two different temperatures. The article presents an application and measuring stand for determining the parameters of microbolometer cameras. In addition to determining camera parameters, the measuring stand allows determination of the defective pixel map and the non-uniformity correction (NUC) coefficients (1-point and 2-point). Additionally, the developed test stand serves as a test system to read the raw data from the microbolometer detector. The captured image can be corrected with the calculated non-uniformity correction coefficients. In a next step the image is processed and visualized on a monitor. The developed test stand allows for an initial assessment of the quality of the designed readout circuit. It also allows for efficient testing and comparison of a number of sensors or readout circuits.
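
A minimal sketch of the two-blackbody NEDT estimate described above (illustrative only; the actual test-stand software additionally handles bad pixels, NUC coefficients and drift):

    import numpy as np

    # Minimal sketch: per-pixel NEDT from frame stacks recorded against two
    # blackbodies at known temperatures.
    def nedt(frames_t1, frames_t2, t1_kelvin, t2_kelvin):
        """frames_tX: raw frame stacks of shape (N, H, W); returns NEDT in K."""
        responsivity = (frames_t2.mean(axis=0) - frames_t1.mean(axis=0)) / (t2_kelvin - t1_kelvin)
        temporal_noise = frames_t1.std(axis=0)      # per-pixel temporal noise (counts)
        return temporal_noise / responsivity        # counts / (counts per K) = K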
A new systematic calibration method of ring laser gyroscope inertial navigation system
Guo Wei, Chunfeng Gao, Qi Wang, et al.
The inertial navigation system (INS) has become a core component of both military and civil navigation systems. Before an INS is put into application, it has to be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. The discrete calibration method cannot fulfill the requirements of high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangement orders are then described in order to establish linear relationships between the change of the velocity errors and the calibrated parameter errors. Experiments have been set up to compare the systematic errors calculated from the filtering calibration results with those obtained from the discrete calibration results. The largest position error and velocity error of the filtering calibration are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.
A three axis turntable's online initial state measurement method based on the high-accuracy laser gyro SINS
Chunfeng Gao, Guo Wei, Qi Wang, et al.
As an indispensable piece of equipment in inertial technology tests, the three-axis turntable is widely used in the calibration of various types of inertial navigation systems (INS). In order to ensure the calibration accuracy of the INS, we need to accurately measure the initial state of the turntable. However, the traditional measuring method requires a lot of external equipment (such as a level instrument, north seeker, autocollimator, etc.), and the test procedure is complex and inefficient. Therefore, it is relatively difficult for inertial measurement equipment manufacturers to realize self-inspection of the turntable. Owing to the high-precision attitude information provided by the laser gyro strapdown inertial navigation system (SINS) after fine alignment, we can use it as the attitude reference for the initial state measurement of the three-axis turntable. Based on the principle that a fixed rotation vector increment is not affected by the measuring point, we use the laser gyro INS and the encoder of the turntable to provide the attitudes of the turntable mounting plate. In this way, high-accuracy measurement of the perpendicularity error and the initial attitude of the three-axis turntable is achieved.
Coherent synthetic imaging using multi-aperture scanning Fourier ptychography
Zongliang Xie, Haotong Ma, Bo Qi, et al.
High resolution is what the synthetic aperture technique strives for. In this paper, we propose an approach to coherent synthetic imaging with sparse aperture systems using a multi-aperture scanning Fourier ptychography algorithm, which can further improve the resolution of sparse aperture systems. The reported technique first acquires a series of raw images by scanning a sparse aperture system; the captured images are then used to synthesize a larger spectrum in the frequency domain using the aperture-scanning Fourier ptychography algorithm. The scanning motion of the system circumvents its diffraction limit, so that a super-resolution image can be obtained. Numerical simulations demonstrate the validity of the approach. The technique proposed in this paper may find wide applications in synthetic aperture imaging and astronomy.
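
A schematic sketch of the aperture-scanning spectrum-stitching loop is given below (a simplified scalar model with precomputed pupil supports and no relaxation factors; it illustrates the general Fourier-ptychography update rather than the authors' implementation):

    import numpy as np

    # Schematic aperture-scanning Fourier ptychography loop: for each scanned
    # aperture position, enforce the measured intensity in the image domain
    # and write the updated sub-spectrum back into the synthetic spectrum.
    def synthesize_spectrum(measured_intensities, pupil_masks, n_iter=20):
        """measured_intensities[k]: image recorded at the k-th aperture position
        (all on the same pixel grid); pupil_masks[k]: boolean support of that
        aperture in the common Fourier plane (same shape as the images)."""
        spectrum = np.fft.fftshift(np.fft.fft2(np.sqrt(measured_intensities[0])))
        for _ in range(n_iter):
            for I_k, mask in zip(measured_intensities, pupil_masks):
                sub = np.where(mask, spectrum, 0)                     # spectrum seen by aperture k
                field = np.fft.ifft2(np.fft.ifftshift(sub))
                field = np.sqrt(I_k) * np.exp(1j * np.angle(field))   # keep phase, enforce measured amplitude
                update = np.fft.fftshift(np.fft.fft2(field))
                spectrum[mask] = update[mask]                         # stitch back into the synthetic spectrum
        return spectrum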