Proceedings Volume 10432

Target and Background Signatures III


Volume Details

Date Published: 17 November 2017
Contents: 6 Sessions, 25 Papers, 0 Presentations
Conference: SPIE Security + Defence 2017
Volume Number: 10432

Table of Contents

  • Front Matter: Volume 10432
  • Optimizing Camouflage
  • Target Signature Analysis
  • Hiding for Human Observers
  • Automated Target Recognition
  • Poster Session
Front Matter: Volume 10432
Front Matter: Volume 10432
This PDF file contains the front matter associated with SPIE Proceedings Volume 10432 including the Title Page, Copyright information, Table of Contents, Introduction (if any), and Conference Committee listing.
Optimizing Camouflage
Spectral characterization of natural backgrounds
As the distribution and use of hyperspectral sensors constantly increases, the exploitation of spectral features poses a growing threat to camouflaged objects. To improve camouflage materials, the spectral behavior of backgrounds must first be known so that the spectral reflectance of the camouflage materials can be adjusted and optimized.

In an international effort, the NATO CSO working group SCI-295 "Development of Methods for Measurements and Evaluation of Natural Background EO Signatures" is developing a method for how this characterization of backgrounds should be carried out. The spectral characterization of a background is clearly a considerable effort, and measurements must be performed in a consistent way if data are to be compared and exchanged internationally.

To test and further improve this method, an international field trial was performed in Storkow, Germany. In the following, we present first impressions and lessons learned from this field campaign and describe the data that were measured.
Angular dependence of spectral reflection for different materials
Pascal M. Kiefer
Parameters such as the sun angle and the measurement angle are usually not taken into account in simulations because their influence on the reflectivity is assumed to be weak. We therefore investigate the impact of changing measurement and illumination angles on the reflectivity. Furthermore, the impact of humidity and chlorophyll in the scene is studied by analyzing reflectance spectra of different vegetative background areas. It is shown that both the measurement angle and the illumination angle have an important influence on the absolute reflection values, which underlines the importance of measuring the bidirectional reflectance distribution function (BRDF).
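The angular dependence discussed here can be illustrated with a toy in-plane BRDF, a Lambertian term plus a Phong-style specular lobe. The model, its parameter values and the simplified mirror-lobe geometry are illustrative assumptions, not the measured materials of the paper:

```python
import math

def toy_brdf(theta_i, theta_r, rho_d=0.3, rho_s=0.4, n=20):
    """Toy in-plane BRDF: Lambertian term plus a Phong-style specular lobe.
    Angles in radians from the surface normal. Illustrative only - not a
    fitted material model."""
    # Lambertian contribution is independent of the viewing angle
    diffuse = rho_d / math.pi
    # Specular lobe peaks when theta_r equals theta_i (toy mirror convention)
    specular = rho_s * (n + 2) / (2 * math.pi) * max(0.0, math.cos(theta_r - theta_i)) ** n
    return diffuse + specular

# Reflectance measured at the mirror angle vs. well away from it
mirror = toy_brdf(math.radians(30), math.radians(30))
off = toy_brdf(math.radians(30), math.radians(-30))
```

Even this crude sketch shows why single-angle reflectance measurements can be misleading: the value near the specular direction is more than an order of magnitude above the diffuse floor.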
Optical polarization: background and camouflage
Christina Åkerlind, Tomas Hallberg, Johan Eriksson, et al.
Polarimetric imaging sensors in the electro-optical region, by now available both militarily and commercially in the visual and infrared bands, show enhanced capabilities for advanced target detection and recognition. These capabilities arise from the ability to discriminate between man-made and natural background surfaces using the polarization information of light. In the development of materials for signature management in the visible and infrared wavelength regions, different criteria need to be met to provide good camouflage against modern sensors. In conventional camouflage design, the aim is to spectrally match or adapt the surface properties of an object to a background, thereby minimizing the contrast seen by a specific threat sensor. Examples will be shown from measurements of some relevant materials and of how they affect the polarimetric signature in different ways. Properties that dimension an optical camouflage from a polarimetric perspective, such as the degree of polarization, the viewing or incidence angle, and the amount of diffuse reflection, mainly in the infrared region, will be discussed.
Selected issues connected with determination of requirements of spectral properties of camouflage patterns
František Racek, Adam Jobánek, Teodor Baláž, et al.
Traditionally, the spectral reflectance of a material is measured and compared with permitted spectral reflectance boundaries, defined by an upper and a lower curve of spectral reflectance. The boundaries for a given color have to fulfil the operational requirements: versatility of use across all seasons, daytimes and weather conditions on the one hand, and chromatic and spectral matching with the background as well as manufacturability on the other. The interval between the boundaries is inherently ambivalent: a camouflage pattern producer would prefer it to be much wider, while the blending of the pattern into its particular background may be better with narrower tolerance limits. From the point of view of the long-term user of a camouflage-pattern battledress there is another ambivalence: a tolerance zone wide enough to reflect the natural dispersion of spectral reflectance values also permits significant distortions of the shape of the spectral curve within the given boundaries.
Evaluation of camouflage pattern performance of textiles by human observers and CAMAELEON
Military textiles with camouflage patterns are an important part of the protection measures for soldiers. Military operational environments differ greatly depending on climate and vegetation, which requires very different camouflage patterns to achieve good protection. To find the best-performing pattern for a given environment, our earlier evaluations mainly applied observer trials, in which human observers were asked to search for targets (in natural settings) presented on a high-resolution PC screen while the corresponding detection times were recorded. Another possibility is to base the evaluation on simulations. CAMAELEON is a licensed tool that ranks camouflaged targets by their similarity with local backgrounds. The similarity is estimated from local contrast, the orientation of structures in the pattern, and spatial frequency, mimicking the response and signal processing in the human visual cortex. Simulations have a number of advantages over observer trials: they are more flexible, cheaper, and faster. Applying both methods to the same images of camouflaged targets, we found that CAMAELEON simulation results did not match observer trial results for targets with disruptive patterns. This finding calls for follow-up studies to learn more about the advantages and pitfalls of CAMAELEON. During recent observer trials we studied new camouflage patterns and the effect of additional equipment, such as combat vests. In this paper we present the results of a study comparing the evaluation results of human-based observer trials and CAMAELEON.
Hyperspectral discrimination of camouflaged target
The article deals with the detection of camouflaged objects during the winter season. Winter camouflage is a marginal concern in most countries due to the short duration of snow cover; in the geographical conditions of Central Europe, the period with snow covers less than one twelfth of the year. The LWIR and SWIR spectral regions are typically used for the detection of camouflaged objects, since differences in chemical composition and temperature express themselves as spectral features in these regions. However, LWIR and SWIR devices are demanding to deploy due to their large dimensions and cost. The article therefore assesses the utility of the VIS region for detecting camouflaged objects against a snow background. The multispectral image output for various spectral filters is simulated, and hyperspectral indices are determined to detect the camouflaged objects in winter. The multispectral image simulation is based on a hyperspectral datacube acquired under real conditions.
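Hyperspectral indices of the kind mentioned are typically normalized band differences. A minimal sketch follows; the band reflectances are hypothetical stand-ins, not the paper's actual filters or index definitions:

```python
def normalized_difference(r_a, r_b):
    """Generic normalized-difference index between reflectances in two bands.
    Returns a value in [-1, 1]; the band choices below are illustrative."""
    total = r_a + r_b
    return (r_a - r_b) / total if total else 0.0

# Hypothetical values: snow is bright in the visible but absorbs strongly in
# the SWIR, while a camouflage tarpaulin tends to be more spectrally flat.
snow_idx = normalized_difference(0.90, 0.10)  # strongly positive
tarp_idx = normalized_difference(0.30, 0.25)  # near zero
```

A detector can then threshold such an index image: pixels whose index falls well below the snow value are candidate camouflaged objects.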
Target Signature Analysis
Hyperspectral target detection analysis of a cluttered scene from a virtual airborne sensor platform using MuSES
Corey D. Packard, Timothy S. Viola, Mark D. Klein
The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. 
This virtual hyperspectral/multispectral radiance prediction methodology has been extensively validated and provides a flexible process for signature evaluation and algorithm development.
Hyperheat: a thermal signature model for super- and hypersonic missiles
S. A. van Binsbergen, B. van Zelderen, R. G. Veraar, et al.
In performance prediction of IR sensor systems for missile detection, target signatures are, apart from the sensor specifications, essential variables. For velocities up to Mach 2-2.5, a simple model based on the aerodynamic heating of a perfect gas has often been used to calculate the temperatures of missile targets. This typically results in an overestimate of the target temperature, with correspondingly large infrared signatures and detection ranges. Especially at even higher velocities, this approach is no longer accurate. Alternatives such as CFD calculations typically require more complex sets of inputs and significantly more computing power.

The MATLAB code Hyperheat was developed to calculate the time-resolved skin temperature of axisymmetric high-speed missiles during flight, taking into account non-perfect gas behaviour and proper heat transfer to the missile surface. Allowing for variations in parameters such as missile shape, altitude, atmospheric profile, angle of attack, flight duration and super- and hypersonic velocities up to Mach 30 enables more accurate calculations of the actual target temperature. The model calculates a map of the skin temperature of the missile, which is updated over the flight time. The sets of skin temperature maps are calculated within minutes, even for >100 km trajectories, and can easily be converted into thermal infrared signatures for further processing.

This paper discusses the approach taken in Hyperheat. Then, the thermal signature of a set of typical missile threats is calculated using both the simple aerodynamic heating model and the Hyperheat code. The respective infrared signatures are compared, as well as the difference in the corresponding calculated detection ranges.
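The "simple aerodynamic heating model" of a perfect gas contrasted with Hyperheat is commonly the classical recovery-temperature estimate. A minimal sketch, assuming a turbulent recovery factor of 0.9 (the abstract does not specify the exact baseline formula used):

```python
def recovery_temperature(t_ambient_k, mach, gamma=1.4, recovery_factor=0.9):
    """Classical perfect-gas recovery (skin) temperature estimate:
    T_r = T_inf * (1 + r * (gamma - 1) / 2 * M^2).
    recovery_factor ~0.85-0.9 for turbulent boundary layers. This is the kind
    of simple model said to overestimate real skin temperatures."""
    return t_ambient_k * (1.0 + recovery_factor * (gamma - 1.0) / 2.0 * mach ** 2)

# At Mach 2.5 in a 220 K stratospheric atmosphere:
t_skin = recovery_temperature(220.0, 2.5)  # = 467.5 K
```

Because the formula grows with M^2 and ignores real-gas effects and surface heat transfer, its overestimate worsens rapidly toward hypersonic speeds, which is precisely the regime Hyperheat targets.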
Infrared measurements of launch vehicle exhaust plumes
Caroline Schweitzer, Phillip Ohmer, Norbert Wendelstein, et al.
In the field of early warning, one depends on reliable analytical models for the prediction of the infrared threat signature: with these as a basis, the warning sensors can be specified as suitably as possible to give timely threat approach alerts.

In this paper, we present preliminary results of measurement trials carried out in 2015, in which the exhaust plumes of launch vehicles were measured under various atmospheric conditions. The gathered data will be used to validate analytical models for the prediction of the plume signature.
Simulation of an oil film at the sea surface and its radiometric properties in the SWIR
The knowledge of the optical contrast of an oil layer on the sea under various surface roughness conditions is of great interest for oil slick monitoring techniques. This paper presents a 3D simulation of a dynamic sea surface contaminated by a floating oil film. The simulation considers the damping influence of oil on the ocean waves and its physical properties. It calculates the radiance contrast of the sea surface polluted by the oil film in relation to a clean sea surface for the SWIR spectral band. Our computer simulation combines the 3D simulation of a maritime scene (open clear sea/clear sky) with an oil film at the sea surface. The basic geometry of a clean sea surface is modeled by a composition of smooth wind driven gravity waves. Oil on the sea surface attenuates the capillary and short gravity waves modulating the wave power density spectrum of these waves. The radiance of the maritime scene is calculated in the SWIR spectral band with the emitted sea surface radiance and the specularly reflected sky radiance as components. Wave hiding and shadowing, especially occurring at low viewing angles, are considered. The specular reflection of the sky radiance at the clean sea surface is modeled by an analytical statistical bidirectional reflectance distribution function (BRDF) of the sea surface. For oil at the sea surface, a specific BRDF is used influenced by the reduced surface roughness, i.e., the modulated wave density spectrum. The radiance contrast of an oil film in relation to the clean sea surface is calculated for different viewing angles, wind speeds, and oil types characterized by their specific physical properties.
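The damping of capillary and short gravity waves by oil can be sketched by attenuating the high-frequency tail of a wave power density spectrum. Below, a Pierson-Moskowitz spectrum with a purely illustrative damping term; the paper's actual modulation model and coefficients are not reproduced here:

```python
import math

def pierson_moskowitz(omega, wind_speed, g=9.81):
    """Pierson-Moskowitz wave power density spectrum for a fully developed
    deep-water sea (omega in rad/s, wind_speed in m/s)."""
    alpha, beta = 8.1e-3, 0.74
    return (alpha * g ** 2 / omega ** 5) * math.exp(-beta * (g / (wind_speed * omega)) ** 4)

def oil_damped(omega, wind_speed, omega_c=4.0):
    """Illustrative oil-film damping: suppress spectral density above a cutoff
    frequency while leaving long gravity waves nearly untouched. The cutoff
    and the quartic roll-off are hypothetical, chosen only to mimic the
    qualitative smoothing of the sea surface by oil."""
    return pierson_moskowitz(omega, wind_speed) / (1.0 + (omega / omega_c) ** 4)
```

The reduced high-frequency roughness is what changes the surface BRDF and hence the SWIR radiance contrast between the slick and the surrounding clean sea.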
Hiding for Human Observers
Examination of soldier target recognition with direct view optics
Frederick H. Long, Gabriella Larkin, Danielle Bisordi, et al.
Target recognition and identification is a problem of great military and scientific importance. To examine the correlation between target recognition and optical magnification, ten U.S. Army soldiers were tasked with identifying letters on targets 800 and 1300 meters away. Letters were used since they are a standard method for measuring visual acuity. The letters were approximately 90 cm high, which is the size of a well-known rifle. Four direct view optics with angular magnifications of 1.5x, 4x, 6x, and 9x were used. The goal of this approach was to measure actual probabilities of correct target identification. Previous scientific literature suggests that target recognition can be modeled as a linear response problem in angular frequency space, using the established values of the contrast sensitivity function for a healthy human eye and the experimentally measured modulation transfer function of the optic. At 9x magnification, the soldiers could identify the letters with almost no errors (i.e., 97% probability of correct identification). At lower magnifications, errors in letter identification were more frequent. The identification errors were not random but occurred most frequently with a few pairs of letters (e.g., O and Q), which is consistent with the literature on letter recognition. In addition, in the small sample of ten soldiers, there was considerable variation in observer recognition capability at 1.5x and a range of 800 meters. This can be directly attributed to the variation in the observers' visual acuity.
Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task
Background: In target detection, success rates depend strongly on human observer performance. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented with 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. Across video sequences, the number of avatars in the area was varied between 100, 150 and 200 subjects, with the proportion of targets kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed with an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: Target detection time increased and target detection rate decreased with increasing numbers of avatars. The same holds for the secondary task reaction time, while there was no effect on the secondary task hit rate. Furthermore, we found a trend toward a u-shaped relation between the number of markings and the secondary task reaction time, indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.
Mirage: a visible signature evaluation tool
Joanne B. Culpepper, Alaster J. Meehan, Q. T. Shao, et al.
This paper presents the Mirage visible signature evaluation tool, designed to provide a visible signature evaluation capability that appropriately reflects the effect of scene content on the detectability of targets, providing a capability to assess visible signatures in the context of the environment. Mirage is based on a parametric evaluation of input images, assessing the value of a range of image metrics and combining them using the boosted decision tree machine learning method to produce target detectability estimates. It has been developed using experimental data from photosimulation experiments, where human observers search for vehicle targets in a variety of digital images. The images used for tool development are synthetic (computer generated) images, showing vehicles in many different scenes and exhibiting a wide variation in scene content. A preliminary validation has been performed using k-fold cross validation, where 90% of the image data set was used for training and 10% for testing. The results of the k-fold validation from 200 independent tests show a correlation between Mirage predictions of detection probability and the observed probability of detection of r(262) = 0.63, p < 0.0001 (Pearson correlation) and a MAE = 0.21 (mean absolute error).
Comparing synthetic imagery with real imagery for visible signature analysis: human observer results
Joanne B. Culpepper, Noel Richards, Christopher S. Madden, et al.
Synthetic imagery could potentially enhance visible signature analysis by providing a wider range of target images in differing environmental conditions than would be feasible to collect in field trials. Achieving this requires a method for generating synthetic imagery that is verified to be realistic and that produces the same visible signature analysis results as real images. Is target detectability as measured by image metrics the same for real images and synthetic images of the same scene? Is target detectability as measured by human observer trials the same for real images and synthetic images of the same scene, and how realistic do the synthetic images need to be?

In this paper we present the results of a small scale exploratory study on the second question: a photosimulation experiment conducted using digital photographs and synthetic images generated of the same scene. Two sets of synthetic images were created: a high fidelity set created using an image generation tool, E-on Vue, and a low fidelity set created using a gaming engine, Unity 3D. The target detection results obtained using digital photographs were compared with those obtained using the two sets of synthetic images. There was a moderate correlation between the high fidelity synthetic image set and the real images in both the probability of correct detection (Pd: PCC = 0.58, SCC = 0.57) and mean search time (MST: PCC = 0.63, SCC = 0.61). There was no correlation between the low fidelity synthetic image set and the real images for the Pd, but a moderate correlation for MST (PCC = 0.67, SCC = 0.55).
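The Pearson (PCC) and Spearman (SCC) coefficients quoted above can be computed as follows. This is a minimal, tie-free sketch for illustration, not the analysis code used in the study:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def spearman(xs, ys):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Assumes no ties (tied values would need averaged ranks)."""
    def ranks(values):
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0.0] * len(values)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(xs), ranks(ys))
```

Spearman is insensitive to monotone rescaling, which is why the two coefficients can differ when synthetic-image detectability tracks the ordering of the real-image results but not their absolute values.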
Target acquisition modeling over the exact optical path: extending the EOSTAR TDA with the TOD sensor performance model
J. Dijk, P. Bijl, M. Oppeneer, et al.
The Electro-Optical Signal Transmission and Ranging (EOSTAR) model is an image-based Tactical Decision Aid (TDA) for thermal imaging systems (MWIR/LWIR) developed for a sea environment with an extensive atmosphere model. The Triangle Orientation Discrimination (TOD) target acquisition model calculates the sensor and signal-processing effects on a set of input triangle test-pattern images, judges their orientation using humans or a Human Visual System (HVS) model, and derives the system image quality and operational field performance from the correctness of the responses. Combining the TOD model with EOSTAR thus makes it possible to model Target Acquisition (TA) performance over the exact path from scene to observer. In this method, ship-representative TOD test patterns are placed at the position of the real target, the combined effects of the environment (atmosphere, background, etc.), sensor and signal processing on the image are calculated using EOSTAR, and the results are judged by human observers. The thresholds are converted into Detection-Recognition-Identification (DRI) ranges of the real target. Experiments show that combining the TOD model and the EOSTAR model is indeed possible. The resulting images look natural and provide insight into the possibilities of combining the two models. The TOD observation task can be performed well by humans, and the measured TOD is consistent with analytical TOD predictions for the same camera as modeled in the ECOMOS project.
Automated Target Recognition
Automatic target recognition and detection in infrared imagery under cluttered background
Erhan Gundogdu, Aykut Koç, A. Aydın Alatan
Visual object classification has long been studied in the visible spectrum using conventional cameras. Since labeled images have recently increased in number, it is possible to train deep Convolutional Neural Networks (CNNs) with a significant number of parameters. As infrared (IR) sensor technology has improved over the last two decades, labeled images extracted from IR sensors have begun to be used for object detection and recognition tasks. We address the problem of infrared object recognition and detection by exploiting 15K real-field images from long-wave and mid-wave IR sensors. For feature learning, a stacked denoising autoencoder is trained on this IR dataset. To recognize objects, the trained stacked denoising autoencoder is fine-tuned according to the binary classification loss of the target object. Once training is completed, test samples are propagated through the network, and the probability of a test sample belonging to a class is computed. Moreover, the trained classifier is utilized in a detect-by-classification method, where classification is performed on a set of candidate object boxes and the maximum confidence score in a particular location is accepted as the score of the detected object. To decrease the computational complexity, the detection step is not run at every frame; instead, an efficient correlation filter based tracker is employed, and detection is performed only when the tracker confidence falls below a pre-defined threshold. Experiments conducted on real-field images demonstrate that the proposed detection and tracking framework gives satisfactory results for detecting tanks against cluttered backgrounds.
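The detect-by-classification loop with tracker fallback described above can be sketched as follows. Here `classify` and `track` are caller-supplied stand-ins for the trained network and the correlation filter tracker, and the candidate-box grid is purely illustrative:

```python
def detect_or_track(frames, classify, track, conf_threshold=0.5):
    """Run a (stand-in) tracker on each frame; fall back to scoring candidate
    boxes with the (stand-in) classifier only when tracker confidence drops
    below conf_threshold, mirroring the scheme described in the abstract."""
    state, outputs = None, []
    for frame in frames:
        if state is not None:
            state, conf = track(frame, state)
            if conf >= conf_threshold:
                outputs.append(("track", state))
                continue
        # Detection: score a grid of candidate boxes, keep the best-scoring one
        boxes = [(x, y, 32, 32) for x in (0, 32) for y in (0, 32)]
        scores = [classify(frame, b) for b in boxes]
        best = max(range(len(boxes)), key=lambda i: scores[i])
        state = boxes[best]
        outputs.append(("detect", state))
    return outputs
```

The computational saving comes from the `continue` path: the expensive per-box classification runs only on frames where tracking has become unreliable.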
Video change detection for fixed wing UAVs
Jan Bartelsen, Thomas Müller, Jochen Ring, et al.
In this paper we continue the work of Bartelsen et al.1 We present the draft of a process chain for image-based change detection designed for videos acquired by fixed wing unmanned aerial vehicles (UAVs). From our point of view, automatic video change detection for aerial images can be useful to recognize functional activities which are typically caused by the deployment of improvised explosive devices (IEDs), e.g. excavations, skid marks, footprints, left-behind tooling equipment, and marker stones. Furthermore, in case of natural disasters such as flooding, imminent danger can be recognized quickly. Due to the necessary flight range, we concentrate on fixed wing UAVs. Automatic change detection reduces to a comparatively simple photogrammetric problem when the perspective change between the "before" and "after" image sets is kept as small as possible. The aerial image acquisition therefore demands mission planning with a clear purpose, including flight path and sensor configuration. While the latter can be handled simply by a fixed and meaningful adjustment of the camera, ensuring a small perspective change between "before" and "after" videos acquired by fixed wing UAVs is a challenging problem. Concerning this matter, we have performed tests with an advanced commercial off-the-shelf (COTS) system comprising a differential GPS and autopilot system, estimating the repetition accuracy of its trajectory. Although several similar approaches have been presented,2,3 as far as we are able to judge, the limits for this important issue have not been estimated so far. Furthermore, we design a process chain to enable the practical utilization of video change detection. It consists of a database front-end to handle large amounts of video data, an image processing and change detection implementation, and the visualization of the results. We apply our process chain to real video data acquired by the advanced COTS fixed wing UAV and to synthetic data.
For the image processing and change detection, we use the approach of Muller.4 Although it was developed for unmanned ground vehicles (UGVs), it enables near real-time video change detection for aerial videos. In conclusion, we discuss the demands on sensor systems with regard to change detection.
Automatic visibility retrieval from thermal camera images
Céline Dizerens, Beat Ott, Peter Wellig, et al.
This study presents an automatic visibility retrieval based on a FLIR A320 stationary thermal imager installed on a measurement tower on the mountain Lagern in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected in the thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal contrast to their surroundings, are used to derive the maximum visibility distance detectable in the image. To allow stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24000 thermal images of the years 2015 and 2016 by comparing them to (1) visibility derived from panoramic camera images (VISrange), (2) measurements of a forward-scatter visibility meter (Vaisala FD12, working in the NIR spectral range), and (3) modeled visibility values using the Thermal Range Model TRM4. Atmospheric conditions, mainly water vapor from the European Centre for Medium-Range Weather Forecasts (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrievals from the systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and the defined target size.
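Converting an extinction coefficient into a visibility distance is classically done with the Koschmieder relation. A minimal sketch, assuming the standard 2% contrast threshold; the study's exact retrieval chain may differ:

```python
import math

def koschmieder_visibility(extinction_per_km, contrast_threshold=0.02):
    """Meteorological optical range from an extinction coefficient beta via
    the classical Koschmieder relation V = -ln(eps) / beta, with eps the
    contrast threshold (2% gives the familiar V = 3.912 / beta)."""
    return -math.log(contrast_threshold) / extinction_per_km

# An extinction coefficient of 0.3912 1/km corresponds to ~10 km visibility
v = koschmieder_visibility(0.3912)
```

This is how a MODTRAN-derived extinction coefficient maps onto the visibility distances that the edge-based retrieval and the FD12 visibility meter report.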
A comparative study on methods of improving SCR for ship detection in SAR image
Haitao Lang, Hongji Shi, Yunhong Tao, et al.
Knowledge of ship positions plays a critical role in a wide range of maritime applications. An effective strategy to improve the performance of ship detectors in SAR images is to improve the signal-to-clutter ratio (SCR) before conducting detection. In this paper, we present a comparative study on methods of improving the SCR, including power-law scaling (PLS), the max-mean and max-median filters (MMF1 and MMF2), a wavelet-transform method (TWT), the traditional SPAN detector, the reflection symmetric metric (RSM), and the scattering mechanism metric (SMM). The SCR improvement of the SAR image and the ship detection performance with a cell-averaging CFAR (CA-CFAR) detector are evaluated for each method on two real SAR data sets.
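The cell-averaging CFAR detector used in the evaluation can be sketched in one dimension. Window sizes and the scale factor below are illustrative defaults, not the paper's settings:

```python
def ca_cfar(signal, num_train=8, num_guard=2, scale=3.0):
    """1-D cell-averaging CFAR: flag a cell when it exceeds `scale` times the
    mean of `num_train` training cells taken symmetrically around it, with
    `num_guard` guard cells on each side excluded from the average."""
    n = len(signal)
    half = num_train // 2 + num_guard
    detections = []
    for i in range(half, n - half):
        train = (signal[i - half:i - num_guard]
                 + signal[i + num_guard + 1:i + half + 1])
        threshold = scale * sum(train) / len(train)
        if signal[i] > threshold:
            detections.append(i)
    return detections
```

Raising the SCR before this step widens the gap between a ship cell and its local clutter average, which directly lowers the CFAR miss rate at a fixed false-alarm setting.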
SAR image dataset of military ground targets with multiple poses for ATR
Carole Belloni, Alessio Balleri, Nabil Aouf, et al.
Automatic Target Recognition (ATR) is the task of automatically detecting and classifying targets. Recognition using Synthetic Aperture Radar (SAR) images is interesting because SAR images can be acquired at night and under any weather conditions, whereas optical sensors operating in the visible band do not have this capability. Existing SAR ATR algorithms have mostly been evaluated using the MSTAR dataset.1 The problem with MSTAR is that some of the proposed ATR methods have shown good classification performance even when targets were hidden,2 suggesting the presence of a bias in the dataset. Evaluations of SAR ATR techniques are currently challenging due to the lack of publicly available data in the SAR domain. In this paper, we present a high resolution SAR dataset consisting of images of a set of ground military target models taken at various aspect angles. The dataset can be used for a fair evaluation and comparison of SAR ATR algorithms. We applied the Inverse Synthetic Aperture Radar (ISAR) technique to echoes from targets rotating on a turntable and illuminated with a stepped-frequency waveform. The targets in the database consist of four variants of two 1.7 m-long models of T-64 and T-72 tanks. The gun, the turret position and the depression angle are varied to form 26 different sequences of images. The emitted signal spanned the frequency range from 13 GHz to 18 GHz to achieve a bandwidth of 5 GHz sampled with 4001 frequency points. The resolution obtained with respect to the size of the model targets is comparable to typical values obtained with airborne SAR systems. Single-polarized (Horizontal-Horizontal) images are generated using the backprojection algorithm.3 A total of 1480 images are produced using a 20° integration angle. The images in the dataset are organized into suggested training and testing sets to facilitate a standard evaluation of SAR ATR algorithms.
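The quoted 5 GHz bandwidth fixes the achievable slant-range resolution through the standard relation delta_R = c / (2B); a quick check:

```python
# Slant-range resolution from waveform bandwidth: delta_R = c / (2 * B)
c = 3.0e8          # speed of light, m/s
bandwidth = 5.0e9  # Hz (13-18 GHz stepped-frequency sweep)
range_resolution = c / (2 * bandwidth)  # = 0.03 m, i.e. ~3 cm
```

Three centimetres on a 1.7 m model scales to roughly the decimetre-class resolution a full-size vehicle would present to an airborne SAR, which is the comparability the abstract claims.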
Automatic x-ray image segmentation and clustering for threat detection
Odysseas Kechagias-Stamatis, Nabil Aouf, David Nam, et al.
Firearms currently pose a known risk at the borders. The enormous number of X-ray images of parcels, luggage and freight entering each country by rail, air and sea presents a continual challenge to screening officers. To further improve UK capability and aid officers in their search for firearms, we propose an automated object segmentation and clustering architecture that focuses officers’ attention on high-risk threat objects. Our proposal uses dual-view single- and dual-energy 2D X-ray imagery and blends concepts from radiology, image processing and computer vision. It consists of a triple-layered processing scheme that segments the luggage contents based on the effective atomic number of each object, followed by a dual-layered clustering procedure. The latter comprises a mild and a hard clustering phase. The mild phase is based on a number of morphological operations from the image-processing domain and aims to disjoin mildly connected objects and filter noise. The hard clustering phase exploits local feature-matching techniques from the computer-vision domain, aiming to sub-cluster the clusters obtained from the mild clustering stage. Evaluation on highly challenging single- and dual-energy X-ray imagery reveals the architecture’s promising performance.
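The abstract does not specify which morphological operations the mild clustering phase uses; a minimal sketch of the idea, assuming a binary opening followed by connected-component labelling, shows how weakly connected objects can be disjoined and speckle filtered:

```python
import numpy as np
from scipy import ndimage

def mild_cluster(mask, open_size=3):
    """Illustrative 'mild clustering': a binary opening breaks thin
    bridges between weakly connected objects and suppresses speckle,
    then connected-component labelling separates the pieces. The
    structuring-element size and the choice of operations are
    assumptions, not taken from the paper."""
    structure = np.ones((open_size, open_size), dtype=bool)
    opened = ndimage.binary_opening(mask, structure=structure)
    labels, n_objects = ndimage.label(opened)
    return labels, n_objects

# Two 6x6 blobs joined by a one-pixel bridge: the opening separates them.
mask = np.zeros((12, 20), dtype=bool)
mask[3:9, 2:8] = True      # object A
mask[3:9, 12:18] = True    # object B
mask[5, 8:12] = True       # thin bridge between them
labels, n = mild_cluster(mask)
```

Before the opening the mask is a single connected component; afterwards the two objects carry distinct labels and can be passed separately to the hard clustering phase.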
Poster Session
Small target detection using objectness and saliency
We are motivated by the need for a generic object detection algorithm that achieves high recall for small targets in complex scenes with acceptable computational efficiency. We propose a novel object detection algorithm with high localization quality at acceptable computational cost. First, we obtain the objectness map as in BING[1] and apply non-maximum suppression (NMS) to keep the top N points. Then the k-means algorithm clusters them into K classes according to their locations, and the centers of the K classes are taken as seed points. For each seed point, an object potential region is extracted. Finally, a fast salient object detection algorithm[2] is applied to the object potential regions to highlight object-like pixels, and a series of efficient post-processing operations locates the targets. Our method runs at 5 FPS on 1000×1000 images and significantly outperforms previous methods on small targets in cluttered backgrounds.
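The seed-point stage can be sketched as follows. This is a minimal stand-in, assuming a precomputed objectness score map in place of BING + NMS and using a plain Lloyd k-means with deterministic farthest-point initialisation; none of the parameter values come from the paper:

```python
import numpy as np

def topn_seed_points(score_map, n_top=200, k=8, iters=20):
    """Take the N highest-scoring locations of an objectness map and
    cluster them spatially with k-means; the cluster centers become
    seed points for object potential regions. Illustrative sketch only;
    parameter values are assumptions, not from the paper."""
    h, w = score_map.shape
    top = np.argsort(score_map.ravel())[::-1][:n_top]
    pts = np.stack(np.unravel_index(top, (h, w)), axis=1).astype(float)

    # Farthest-point initialisation: deterministic and well spread.
    centers = [pts[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(pts - c, axis=1) for c in centers], axis=0)
        centers.append(pts[d.argmax()])
    centers = np.array(centers)

    for _ in range(iters):  # plain Lloyd / k-means iterations
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                centers[j] = pts[assign == j].mean(axis=0)
    return centers  # (k, 2) array of (row, col) seed points

# Toy objectness map with two small hot spots.
score = np.zeros((100, 100))
score[18:23, 18:23] = 1.0
score[78:83, 78:83] = 1.0
centers = topn_seed_points(score, n_top=50, k=2)
```

Each returned center then anchors an object potential region on which the fast saliency model is run.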
An object detection and tracking system for unmanned surface vehicles
Object detection and tracking are critical capabilities for unmanned surface vehicles (USVs) to achieve automatic obstacle avoidance. Off-the-shelf object detection methods have achieved impressive accuracy on public datasets, yet they still hit bottlenecks in practice, such as high time consumption and low detection quality. In this paper, we propose a novel system for USVs that locates objects more accurately while remaining fast and stable. First, we employ Faster R-CNN to acquire several initial raw bounding boxes. Second, the image is segmented into superpixels; for each initial box, the superpixels inside it are grouped into a whole according to a combination strategy, and a new box is generated as the circumscribed bounding box of the resulting superpixel. Third, we use KCF to track these objects; after several frames, Faster R-CNN is again used to re-detect objects inside the tracked boxes, preventing tracking failure and removing empty boxes. Finally, we use Faster R-CNN to detect objects in the next image and refine the object boxes by repeating the second module of our system. The experimental results demonstrate that our system is fast, robust and accurate, and can be applied to USVs in practice.
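The abstract does not spell out the superpixel combination strategy; a minimal sketch of the box-refinement idea, assuming a simple area-overlap rule (the `min_overlap` threshold is entirely hypothetical), could look like:

```python
import numpy as np

def refine_box(seg, box, min_overlap=0.5):
    """Illustrative box refinement: given a superpixel label image `seg`
    and an initial detection box (x0, y0, x1, y1), keep every superpixel
    whose area lies mostly inside the box and return the circumscribed
    bounding box of the kept group. The overlap rule is an assumption;
    the paper's actual combination strategy is not given in the abstract."""
    x0, y0, x1, y1 = box
    inside = np.zeros_like(seg, dtype=bool)
    inside[y0:y1, x0:x1] = True
    keep = np.zeros_like(inside)
    for label in np.unique(seg[inside]):
        sp = seg == label
        if (sp & inside).sum() / sp.sum() >= min_overlap:
            keep |= sp
    if not keep.any():        # nothing qualified: keep the raw box
        return box
    ys, xs = np.nonzero(keep)
    return xs.min(), ys.min(), xs.max() + 1, ys.max() + 1

# Toy example: superpixel 1 sits inside the raw box, 2 mostly outside.
seg = np.zeros((10, 10), dtype=int)
seg[2:7, 2:7] = 1
seg[2:7, 7:10] = 2
refined = refine_box(seg, (1, 1, 8, 8))
```

The refined box shrinks onto the superpixels that genuinely belong to the detection, tightening Faster R-CNN's raw output.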
Airport object extraction based on visual attention mechanism and parallel line detection
Target extraction is an important aspect of remote sensing image analysis and processing, with wide applications in image compression, target tracking, target recognition and change detection. Among the various targets, airports have attracted increasing attention owing to their military and civilian significance. In this paper, we propose a novel and reliable airport extraction model combining a visual attention mechanism with a parallel line detection algorithm. First, a novel saliency analysis model for remote sensing images containing airport regions is proposed to perform statistical saliency feature analysis. The proposed model precisely extracts the most salient region and effectively suppresses background interference. Then, prior geometric knowledge is exploited: airport runways, which appear as two parallel lines of similar length, are detected efficiently. Finally, an improved Otsu threshold segmentation method is used to segment and extract the airport regions from the saliency map of the remote sensing image. The experimental results demonstrate that the proposed model outperforms existing saliency analysis models and shows good performance in airport detection.
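The abstract does not describe its "improved" Otsu variant, but the classic method it builds on selects the gray level that maximises the between-class variance of the two resulting classes. A minimal NumPy sketch applied to a toy saliency map:

```python
import numpy as np

def otsu_threshold(img, n_bins=256):
    """Classic Otsu threshold: pick the level that maximises the
    between-class variance sigma_b^2 = (mu_T*omega - mu)^2 / (omega*(1-omega)).
    The paper uses an *improved* variant whose details the abstract
    does not give; this is the standard method it builds on."""
    hist, edges = np.histogram(img, bins=n_bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                      # class-0 probability
    mu = np.cumsum(p * np.arange(n_bins))     # cumulative (bin-scaled) mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return edges[np.argmax(sigma_b) + 1]      # upper edge of the optimal bin

# Bimodal toy "saliency map": dark background with one bright region.
sal = np.full((50, 50), 0.2)
sal[10:30, 15:40] = 0.8
t = otsu_threshold(sal)
mask = sal > t        # extracted airport-candidate region
```

On a well-separated bimodal saliency map the threshold lands between the two modes, so the binary mask isolates the salient airport region.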