Proceedings Volume 5439

Independent Component Analyses, Wavelets, Unsupervised Smart Sensors, and Neural Networks II

Harold H. Szu, Mladen V. Wickerhauser, Barak A. Pearlmutter, et al.
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 April 2004
Contents: 9 Sessions, 27 Papers, 0 Presentations
Conference: Defense and Security 2004
Volume Number: 5439

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Unsupervised Learning ICA Pioneer Award
  • Wavelet Pioneer Award
  • Wavelets Application
  • ICA: Invited Session
  • Biomedical Applications: Invited Session
  • Remote Sensing: Invited Session
  • Pattern Recognition
  • Hybrid Signal Applications
  • Poster Session
  • Pattern Recognition
Unsupervised Learning ICA Pioneer Award
Blind source separation: neural net principles and applications
Blind source separation (BSS) is a computational technique for revealing hidden factors that underlie sets of measurements or signals. The most basic statistical approach to BSS is Independent Component Analysis (ICA). It assumes a statistical model in which the observed multivariate data are linear or nonlinear mixtures of unknown latent variables with nongaussian probability densities; the mixing coefficients are also unknown. ICA recovers these latent variables. This article presents the basics of linear ICA and relates the problem and its solution algorithms to neural learning rules, which can be seen as extensions of some classical Principal Component Analysis learning rules. The more efficient FastICA algorithm is also briefly reviewed. Finally, the paper lists recent applications of BSS and ICA across a variety of problem domains.
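The FastICA iteration reviewed in this abstract can be sketched in a few lines of NumPy. This is an illustrative implementation of the symmetric fixed-point update with a tanh nonlinearity, not the authors' code; the toy sources, mixing matrix, and all parameter defaults are my own choices.

```python
import numpy as np

def fast_ica(X, n_iter=200, seed=1):
    """X: (n_mixtures, n_samples), zero-mean. Returns estimated sources."""
    # Whitening: rotate and scale so the mixtures are uncorrelated, unit variance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = E @ np.diag(d ** -0.5) @ E.T @ X
    n, T = Z.shape
    W = np.linalg.qr(np.random.default_rng(seed).normal(size=(n, n)))[0]
    for _ in range(n_iter):
        G = np.tanh(W @ Z)                            # fixed-point step, g = tanh
        W = (G @ Z.T) / T - np.diag((1 - G ** 2).mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)                   # symmetric decorrelation
        W = U @ Vt
    return W @ Z                                      # sources, up to scale/permutation

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)), rng.laplace(size=t.size)])  # nongaussian sources
X = np.array([[1.0, 0.6], [0.5, 1.0]]) @ S            # unknown linear mixing
X = X - X.mean(axis=1, keepdims=True)
Y = fast_ica(X)
```

Up to sign and permutation, the rows of `Y` track the square wave and the Laplacian noise source.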
Wavelet Pioneer Award
Multiresolution transforms in modern image and video coding systems
This paper presents a brief overview of the multiresolution transform designs used in a few image and video compression systems, namely H.264, PTC (progressive transform coder), and JPEG2000. The first two use hierarchical transforms, and the third uses wavelet transforms. We review the basis constructions for the hierarchical transforms and compare some of their characteristics with those of wavelet transforms. In terms of compression performance as measured by peak signal-to-noise ratio, H.264 provides the best performance, but at much higher computational complexity. In terms of visual quality, the multiresolution transforms provide an improvement over block (single-resolution) transforms.
Wavelets Application
A simple nonlinear filter for edge detection in images
Mladen Victor Wickerhauser, Wojciech Czaja
We specialize to two simple cases the algorithm for singularity detection in images from eigenvalues of the dual local autocovariance matrix. The eigenvalue difference, or "edginess" at a point, then reduces to a simple nonlinear function. We discuss the derivation of these functions, which provide low-complexity nonlinear edge filters with parameters for customization, and obtain formulas in the two simplest special cases. We also provide an implementation and exhibit its output on six sample images.
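The eigenvalue-difference idea can be illustrated with the classical 2x2 structure tensor, used here as a stand-in for the paper's dual local autocovariance matrix: for a symmetric 2x2 matrix with entries a, b, c, the eigenvalue difference has the closed form sqrt((a-c)^2 + 4b^2), so the "edginess" filter reduces to a simple nonlinear function of local averages. The window size and all names below are my assumptions, not the paper's.

```python
import numpy as np

def box_sum(a, w=3):
    # Sum over a w x w neighborhood via padded shifts (adequate for a sketch).
    p = w // 2
    ap = np.pad(a, p, mode="edge")
    out = np.zeros_like(a, dtype=float)
    for dy in range(w):
        for dx in range(w):
            out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def edginess(img, w=3):
    gy, gx = np.gradient(img.astype(float))   # image gradients
    a = box_sum(gx * gx, w)                   # local structure-tensor entries
    b = box_sum(gx * gy, w)
    c = box_sum(gy * gy, w)
    return np.sqrt((a - c) ** 2 + 4 * b ** 2)  # lambda_max - lambda_min

img = np.zeros((16, 16))
img[:, 8:] = 1.0                              # vertical step edge
E = edginess(img)                             # peaks along the edge columns
```

A large eigenvalue difference means the gradient energy is concentrated in one direction, i.e. an edge rather than flat texture or an isotropic corner.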
Wavelet-based associative memory
Faces provide important characteristics for identifying a person. In security checks, face recognition remains in continuous use despite other approaches (e.g., fingerprints, voice recognition, pupil contraction, DNA scanners). With an associative memory, the output data is recalled directly from the input data. This can be achieved with a Nonlinear Holographic Associative Memory (NHAM). This approach can also distinguish between strongly correlated images and images that are partially or totally enclosed by others. Adaptive wavelet lifting has been used for content-based image retrieval; in this paper, it is applied to face recognition to achieve an associative memory.
Nonlinear noise suppression using a parametric class of wavelet shrinkage functions
Donoho developed nonlinear techniques known as wavelet shrinkage, which have since been successfully applied to noise suppression. This paper introduces a new parametric shrinkage technique and compares its performance to the soft threshold introduced by Donoho and the differentiable shrinkage function introduced by Zhang. Termed the polynomial hard threshold, this new shrinkage technique is better able to represent polynomial behavior than the previous techniques. It also represents a wider class of shrinkage functions, making it well suited to adaptive noise suppression. This class of shrinkage functions includes both Donoho's soft threshold and the classical hard threshold. By using a priori knowledge to adjust its parameters, this threshold can be tailored to perform well for a particular signal type.
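The two classical shrinkage rules that the polynomial hard threshold generalizes are easy to state; the paper's parametric form itself is not reproduced here. A minimal sketch:

```python
import numpy as np

def soft_threshold(w, t):
    """Donoho's soft threshold: shrink each coefficient toward zero by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def hard_threshold(w, t):
    """Classical hard threshold: zero out coefficients with magnitude <= t."""
    return np.where(np.abs(w) > t, w, 0.0)

w = np.array([-3.0, 0.5, 2.0])
soft = soft_threshold(w, 1.0)   # small coefficients vanish, large ones shrink
hard = hard_threshold(w, 1.0)   # small coefficients vanish, large ones survive intact
```

Denoising applies one of these rules to the wavelet coefficients of a noisy signal before inverse transforming; the soft rule is continuous, while the hard rule is not, which is what motivates differentiable and polynomial alternatives.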
Wavelets and curvelets in denoising and pattern detection tasks crucial for homeland security
Bedros Afeyan, Kirk Won, Scott E. Bisson, et al.
The design and successful fielding of sensors and detectors vital for homeland security can benefit greatly from advanced signal and image processing techniques. The intent is to extract as much reliable information as possible despite the noisy and hostile environments in which the signals and images are gathered. In addition, the need for fast analysis and response necessitates significant compression of the raw data so that they may be efficiently transmitted, remotely accumulated from different sources, and processed. Proper decompositions into compact representations allow fast pattern detection and pattern matching in real time, in situ or otherwise. Wavelets for signals, and curvelets for images or hyperspectral data, promise to be of paramount utility in achieving these goals. Together with statistical modeling and iterative thresholding techniques, wavelets, curvelets and multiresolution analysis can alleviate the severity of the requirements that today's hardware designs cannot meet in order to measure trace levels of toxins and hazardous substances. Photonic or electrooptic sensor and detector designs of the future, for example, must take into account the end-game strategies made available by advanced signal and image processing techniques. The promise is successful operation at lower signal-to-noise ratios, with less data mass, and with deeper statistical inferences than are possible with the boxcar or running-average techniques (low-pass filtering) all too commonly used to deal with noisy data at present. SPREE diagrams (spectroscopic peak reconstruction error estimation) are introduced in this paper to facilitate the decision of which wavelet filter and which denoising scheme to use with a given noisy data set.
ICA: Invited Session
Image sharpening using image sequence and independent component analysis
A novel approach to the image sharpening problem is proposed in this paper. It is based on applying an independent component analysis (ICA) algorithm to an image sequence with appropriate time displacement between the image frames. The novelty lies in the data representation required by ICA algorithms: each selected image frame is used as a sensor, implying that the underlying sources are temporally independent. The proposed concept enables blurring effects contributed by atmospheric turbulence to be extracted as separate physical sources. An image registration technique ensures that motion of the video recorder is compensated. Encouraging preliminary results were obtained when the ICA algorithm was applied to experimental data (a video sequence) with known ground truth. It was verified that the extracted spatial turbulence patterns are highly impulsive, with a Gaussian exponent between 0.5 and 0.6, whereas a Laplacian distribution is characterized by a Gaussian exponent of 1.
Parallel ICA and its hardware implementation in hyperspectral image analysis
Hongtao Du, Hairong Qi, Gregory D. Peterson
Advances in hyperspectral imaging have dramatically boosted remote sensing applications by providing abundant information across hundreds of contiguous spectral bands. However, the high volume of information also imposes an excessive computational burden. Since most materials have distinctive characteristics only at certain bands, much of this information is redundant. This property of hyperspectral images has motivated many researchers to study dimensionality reduction algorithms, including Projection Pursuit (PP), Principal Component Analysis (PCA), the wavelet transform, and Independent Component Analysis (ICA), of which ICA is one of the most popular. ICA searches for a linear or nonlinear transformation that minimizes the statistical dependence between spectral bands. Through this process, it can eliminate superfluous information while retaining practical information, given only the observations of the hyperspectral images. One hurdle in applying ICA to hyperspectral image (HSI) analysis, however, is its long computation time, especially for high-volume hyperspectral data sets. Even the most efficient method, FastICA, is very time-consuming. In this paper, we present a parallel ICA (pICA) algorithm derived from FastICA. During the unmixing process, pICA divides the estimation of the weight matrix into sub-processes that can be conducted in parallel on multiple processors. The decorrelation process is decomposed into internal decorrelation, which performs weight vector decorrelations within individual processors, and external decorrelation, which performs them between cooperating processors. To further improve the performance of pICA, we seek hardware solutions for its implementation. To date, there have been very few hardware designs for ICA-related processes, owing to their complicated and iterative computation. This paper discusses the capacity limitations of FPGA implementations of pICA in HSI analysis.
An Application-Specific Integrated Circuit (ASIC) synthesis is designed for pICA-based dimensionality reduction in HSI analysis. The pICA design is implemented using standard-height cells and targets the TSMC 0.18 micron process. During the synthesis procedure, three ICA-related reconfigurable components are developed for reuse and retargeting. Preliminary results show that standard-height-cell-based ASIC synthesis provides an effective solution for pICA and ICA-related processes in HSI analysis.
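The internal/external decorrelation split at the heart of pICA can be sketched structurally: each "processor" decorrelates its own block of weight vectors, and a final pass decorrelates across blocks. The Gram-Schmidt-style rule, the partition sizes, and all names below are illustrative choices of mine, not the authors' implementation.

```python
import numpy as np

def decorrelate(block):
    """Gram-Schmidt-like decorrelation of the rows of one weight block."""
    out = block.astype(float).copy()
    for i in range(out.shape[0]):
        for j in range(i):
            out[i] -= (out[i] @ out[j]) * out[j]   # remove projection on earlier rows
        out[i] /= np.linalg.norm(out[i])           # renormalize
    return out

def pica_decorrelation(W, n_proc=2):
    blocks = np.array_split(W, n_proc)             # one block per processor
    blocks = [decorrelate(b) for b in blocks]      # internal: parallel, independent
    return decorrelate(np.vstack(blocks))          # external: between processors

W = np.random.default_rng(3).normal(size=(4, 6))   # 4 weight vectors, 2 "processors"
R = pica_decorrelation(W)                          # rows are now orthonormal
```

The point of the split is that the internal passes carry no inter-processor communication, so only the (cheaper) external pass needs cooperation between processors.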
Independent component analysis for remotely sensed image classification with limited data dimensionality
The application of independent component analysis (ICA) to remotely sensed image classification has been studied recently. It is particularly useful for classifying objects with unknown spectral signatures in an unknown image scene, i.e., unsupervised classification. Since the weight matrix in ICA is a square matrix for the purpose of mathematical tractability, the number of objects that can be classified equals the data dimensionality, i.e., the number of spectral bands. When the number of spectral bands is very small (e.g., a 3-band CIR photograph or a 6-band Landsat image), it is impossible to classify all the different objects present in an image scene using the original data. To solve this problem, we present a data dimensionality expansion technique that generates artificial bands. The basic idea is to use nonlinear functions to capture the second- and higher-order correlations between the original bands, which can provide additional information for detecting and classifying more objects. The results of this nonlinear band generation approach are compared with a linear band generation method using cubic spline interpolation of pixel spectral signatures. The experiments demonstrate that the nonlinear approach can significantly improve unsupervised classification accuracy, while the linear method cannot, since it provides no new information.
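A minimal sketch of the band expansion idea: augment the original bands with their squares and pairwise products, which carry the second-order correlations the abstract refers to. The specific set of nonlinear functions here is an illustrative assumption; the paper may use a different set.

```python
import numpy as np

def expand_bands(cube):
    """cube: (bands, pixels). Returns original plus nonlinear artificial bands."""
    b = [cube]
    n = cube.shape[0]
    b.append(cube ** 2)                              # squares of each band
    for i in range(n):
        for j in range(i + 1, n):
            b.append((cube[i] * cube[j])[None, :])   # cross products of band pairs
    return np.vstack(b)

rng = np.random.default_rng(0)
cube = rng.normal(size=(3, 10))      # a 3-band image flattened to 10 pixels
expanded = expand_bands(cube)        # 3 original + 3 squares + 3 products = 9 bands
```

With 9 bands instead of 3, a square ICA weight matrix can now separate up to 9 classes instead of 3.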
Biomedical Applications: Invited Session
Tree-dependent and topographic independent component analysis for fMRI analysis
Exploratory data-driven methods such as unsupervised clustering and independent component analysis (ICA) are considered hypothesis-generating procedures, complementary to the hypothesis-led statistical inferential methods in functional magnetic resonance imaging (fMRI). Recently, a new paradigm emerged in ICA: finding "clusters" of dependent components. This striking philosophy found its implementation in two new ICA algorithms: tree-dependent and topographic ICA. For fMRI, this represents a unifying paradigm combining two powerful exploratory data analysis methods, ICA and unsupervised clustering. A comparative quantitative evaluation of the two methods, tree-dependent and topographic ICA, was performed on fMRI data. The results were evaluated by (1) task-related activation maps, (2) associated time-courses, and (3) an ROC study. Topographic ICA outperforms all other ICA methods, including tree-dependent ICA, for 8 and 9 independent components (ICs). However, for 16 ICs, topographic ICA is outperformed by both FastICA and tree-dependent ICA using the kernel generalized variance (KGV) as an approximation of the mutual information.
Data partitioning and independent component analysis techniques applied to fMRI
Exploratory data-driven methods such as data partitioning techniques and independent component analysis (ICA) are considered hypothesis-generating procedures, complementary to the hypothesis-led statistical inferential methods in functional magnetic resonance imaging (fMRI). In this paper, we present a comparison between data partitioning techniques and ICA in a systematic fMRI study. The comparative results were evaluated by (1) task-related activation maps and (2) associated time-courses. A comparative quantitative evaluation was performed between three clustering techniques (SOM, the "neural gas" network, and fuzzy clustering based on deterministic annealing) and three ICA methods (FastICA, Infomax, and topographic ICA). The ICA methods proved to extract features better than the clustering methods but are limited by the linear mixture assumption. The data partitioning techniques outperform ICA in terms of classification results but require a longer processing time than the ICA methods.
Early breast tumor and late SARS detections using space-variant multispectral infrared imaging at a single pixel
Harold H. Szu, James R. Buss, Ivica Kopriva
We propose a physics approach to solving a physical inverse problem, namely choosing the unique equilibrium solution at the minimum free energy H = E - T_0 S, which includes the Wiener least-mean-squares (minimum E) and ICA (maximum S) formulations as special cases. "Unsupervised classification" presumes that the required information must be learned and derived directly and solely from the data, consistent with the classical Duda-Hart ATR definition of "unlabelled data". Such a truly unsupervised methodology is presented for space-variant image processing at a single pixel in the real-world cases of remote sensing, early tumor detection, and SARS detection. The indeterminacy among the multiple solutions of the inverse problem is resolved by selecting the absolute minimum of the isothermal free energy as the ground truth of the local equilibrium condition at the single-pixel footprint.
Remote Sensing: Invited Session
Scene analysis and detection in thermal infrared remote sensing using independent component analysis
Independent Component Analysis can be used to analyze cluttered scenes from remote sensing imagery and to detect objects. We show examples in the thermal infrared spectral region (8-12 μm) using both passive hyperspectral data and active multispectral data. The examples are from actual field data and computer simulations. ICA isolates spectrally distinct objects with nearly one-to-one correspondence with the independent component basis functions, making it useful for modeling the clutter in typical scenes. We show examples of chemical plume detection in real and simulated data.
Characterization of scenarios for multiband and hyperspectral imagers
The number of imaging devices using multiband or hyperspectral scenes has increased in recent years. For surveillance and remote sensing applications, the amount of collected information must be reduced so that it is useful for automatic or human classification tasks with affordable performance. In this sense, it is very important to filter out redundant information while preserving the relevant information. In this paper, we present an approach to compacting this information based on a multivariate statistical analysis of spectra that uses an automated principal component analysis. Possible applications, including imagers with color outputs, are also given.
Pattern Recognition
Fault diagnosis in turbine engines using unsupervised neural networks technique
Kyusung Kim, Charles Ball, Emmanuel Nwadiogbu
A fault diagnosis system based on a neural-network clustering technique is developed for a mid-sized jet propulsion engine. The currently recorded data set for this engine has several quality limitations, which result in a lack of the information required for incipient fault detection and wide coverage of failure modes. Using the residuals of core speed, exhaust gas temperature and fuel flow, the system is designed to diagnose failures related to the combustor liner, bleed band, and exhaust gas temperature (EGT) sensor rake. The fault diagnosis system reports not only the machine condition but also a belief factor supporting its diagnostic decisions. Actual flight data collected in the field are used to develop and validate the system, and results are shown for tests on five engines that had experienced three different failures. The system is implemented as a web-based service and has demonstrated its robustness by successfully isolating the failures in the field.
Wavelet-cellular neural network architecture and learning algorithm
Abdullah Bal, Osman Nuri Ucan, Halit Pastaci, et al.
Cellular Neural Networks (CNN) provide fast parallel computational capability for image processing applications. The behavior of a CNN is defined by two template matrices. In this paper, the adjustment of these template-matrix coefficients is realized using a supervised learning algorithm based on the back-propagation technique and a wavelet function. The back-propagation algorithm has been modified for the dynamic behavior of the CNN, and a wavelet function is utilized to provide the activation-function derivative in this learning algorithm. The supervised learning algorithm is then executed to obtain a compact CNN architecture, called Wave-CNN. The performance of the proposed learning algorithm and Wave-CNN architecture has been tested on 2D image processing applications.
Invariant object recognition based on the generalized discrete radon transform
Glenn R. Easley, Flavia Colonna
We introduce a method for classifying objects based on special cases of the generalized discrete Radon transform. We adjust the transform and the corresponding ridgelet transform by means of circular shifting and a singular value decomposition (SVD) to obtain a translation, rotation and scaling invariant set of feature vectors. We then use a back-propagation neural network to classify the input feature vectors. We conclude with experimental results and compare these with other invariant recognition methods.
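The circular-shift invariance the authors exploit can be seen in one dimension: circularly shifting a sequence (as a rotation shifts the angle axis of a Radon-domain representation) leaves the FFT magnitude unchanged, so magnitude spectra yield shift-invariant features. This is a toy illustration of that single property only; the paper's generalized discrete Radon transform, SVD step, and neural classifier are not reproduced.

```python
import numpy as np

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 0.0])  # stand-in for a Radon-domain sequence
shifted = np.roll(x, 2)                       # models the effect of rotating the object

f1 = np.abs(np.fft.fft(x))                    # invariant feature vector of x
f2 = np.abs(np.fft.fft(shifted))              # same features for the shifted version

assert np.allclose(f1, f2)                    # circular shift leaves |FFT| unchanged
```

Pairing this with an SVD-based normalization, as the paper does, additionally removes scaling and translation effects before classification.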
Synthesis of electromagnetic devices with a novel neural network
Heriberto Jose Delgado, Michael H. Thursby, Fredric M. Ham
A novel Artificial Neural Network (ANN) is presented, which has been designed for computationally intensive problems, and applied to the optimization of electromagnetic devices such as antennas and microwave devices. The ANN exploits a unique number representation in conjunction with a more standard neural network architecture. An ANN consisting of hetero-associative memory provided a very efficient method of computing the necessary geometrical values for the devices, when used in conjunction with a new randomization process. The number representation used provides significant insight into this new method of fault-tolerant computing. Further work is needed to evaluate the potential of this new paradigm.
Smart time-pulse coding photoconverters as basic components 2D-array logic devices for advanced neural networks and optical computers
The article presents a concept for building arithmetic-logic devices (ALDs) with a 2D structure and optical 2D-array inputs and outputs, as advanced high-throughput parallel basic operational modules for realizing the basic operations of continuous, neuro-fuzzy, multilevel, threshold and other logics, as well as vector-matrix and vector-tensor procedures in neural networks. The approach uses a time-pulse coding (TPC) architecture and 2D-array smart optoelectronic pulse-width (or pulse-phase) modulators (PWM or PPM) to transform input pictures. The input grayscale image is transformed into a group of corresponding short optical pulses, or into the time positions of an optical two-level signal swing. We consider optoelectronic implementations of universal (quasi-universal) picture elements of two-valued ALDs, multi-valued ALDs, analog-to-digital converters, and multilevel threshold discriminators, and we show that 2D-array time-pulse photoconverters are the base elements of these devices. Simulation results for the time-pulse photoconverters are shown. The devices have the following technical parameters: input optical signal power of 200 nW to 200 μW (for a photodiode responsivity of 0.5 A/W), conversion time from tens of microseconds to a millisecond, supply voltage of 1.5 to 15 V, power consumption from tens of microwatts to a milliwatt, and conversion nonlinearity below 1%. One cell consists of 2-3 photodiodes and about ten CMOS transistors. This simplicity allows the cells to be integrated into arrays of 32x32 or 64x64 elements and more.
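The core time-pulse coding idea can be modeled digitally: each pixel's gray level maps to a pulse whose width (number of "on" time slots) is proportional to the level. This is a software caricature of the 2D-array PWM photoconverter, with the slot count and function names being my own assumptions.

```python
import numpy as np

def pwm_encode(img, n_slots=8):
    """img: gray levels in [0, 1]. Returns (n_slots, H, W) binary frames;
    a pixel is 'on' for a number of slots proportional to its level."""
    levels = np.round(img * n_slots).astype(int)
    return np.stack([(levels > k).astype(np.uint8) for k in range(n_slots)])

img = np.array([[0.5, 1.0], [0.0, 0.25]])
frames = pwm_encode(img)      # pulse widths 4, 8, 0 and 2 slots respectively
```

Once intensities are encoded as pulse widths, logic and threshold operations on the image reduce to operations on binary pulse trains, which is what makes the simple per-cell hardware sufficient.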
Multiple description coding models/multiple description sampling-based multiple classifier systems and its application to automatic target recognition
Widhyakorn Asdornwised, Somchai Jitapunkul
In this paper, we propose a new multiple classifier system (MCS) based on two concatenated stages: multiple description coding models (MDC) and multiple description sampling (MDS). The paper draws on concepts from a variety of disciplines, including classical concatenated error-correcting codes, multiple description coding in wavelet-based image compression, Adaboost and importance sampling in multiple classifier systems, and antithetic-common variates in Monte Carlo methods. In previous work, we proposed and extended several MDC methods to MCS, inspired by two frameworks. First, we found that one of our methods is equivalent to a variance reduction technique from Monte Carlo methods (MCM) called antithetic-common variates. Having established that Adaboost can be interpreted as importance sampling in MCM, and that it can be interpreted directly as MDC, we coin the term "multiple description sampling (MDS)" for Adaboost. Second, we establish an equivalence between one of our methods and transmitting data over heterogeneous networks, especially wireless networks. One benefit of our approach is that it allows us to formulate a generalized class of signal-processing-based weak classification algorithms, which is well suited to MDC-MDS in high-dimensional classification problems such as image and target recognition. Performance results for automatic target recognition are presented for synthetic aperture radar (SAR) images from the MSTAR public release data set. In the experiments, our proposed method outperforms state-of-the-art multiple classifier systems such as Adaboost and SVM-ECOC.
Hybrid Signal Applications
Efficient wavelet architectures using field-programmable logic and residue number system arithmetic
Javier Ramirez, Uwe Meyer-Base, Antonio Garcia
Wavelet transforms are becoming increasingly important as an image processing technology, and their efficient implementation using commercially available VLSI technology is a subject of continuous study and development. This paper presents the implementation, on modern Altera APEX20K field-programmable logic (FPL) devices, of reduced-complexity, high-performance wavelet architectures based on the residue number system (RNS). The improvement is achieved by reducing arithmetic operations to modulo operations executed in parallel over small word-length channels. The systems are based on index arithmetic over Galois fields, and the key to attaining low complexity and high throughput is an adequate selection of a small word-width modulus set. These systems are programmable in the sense that their coefficients can be reprogrammed to make them suitable for most applications. FPL-efficient converters are also developed, and the overhead of the input and output conversion is assessed. A reduced-complexity ε-CRT converter design makes the conversion overhead of such systems insignificant in practical implementations. The proposed structures are compared with traditional systems using 2's complement arithmetic. With this and other innovations, the proposed architectures are about 65% faster than the 2's complement designs and require fewer logic elements in most cases.
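The RNS principle the paper builds on can be sketched in software: arithmetic decomposes into independent small-modulus channels, and the result is reconstructed via the Chinese Remainder Theorem. The moduli below are an illustrative choice of mine; the paper uses index arithmetic over Galois fields on FPL hardware, which this sketch does not reproduce.

```python
from functools import reduce

MODULI = (7, 11, 13)                 # pairwise coprime; dynamic range 7*11*13 = 1001

def to_rns(x):
    """Encode an integer as its residues in each channel."""
    return tuple(x % m for m in MODULI)

def rns_mul(a, b):
    """Multiply channel-wise: each small modulo channel works independently."""
    return tuple((x * y) % m for x, y, m in zip(a, b, MODULI))

def from_rns(r):
    """Chinese Remainder Theorem reconstruction back to an integer."""
    M = reduce(lambda p, m: p * m, MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
    return x % M

product = from_rns(rns_mul(to_rns(25), to_rns(17)))   # 25 * 17 = 425, no carries
```

In hardware, each channel becomes a narrow parallel datapath, which is where the speed and area savings over wide 2's complement multipliers come from; the output conversion is the CRT step whose overhead the paper's ε-CRT converter reduces.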
Nonstationary signal analysis in episodic memory retrieval
Y. G. Ku, Masashi Kawasumi, Masao Saito
The problem of blind source separation from a nonstationary mixture arises in signal processing, speech processing, spectral analysis, and so on. This study analyzed EEG signals during episodic memory retrieval using ICA and a time-varying autoregressive (TVAR) model, and proposes a method that combines the two. The signal from the brain not only exhibits nonstationary behavior but also contains artifacts. EEG data at the frontal lobe (F3) were collected from the scalp during an episodic memory retrieval task, and the method was applied to these data. The artifact (eye movement) is removed by ICA, and a single burst (around 6 Hz) is obtained by TVAR, suggesting that this burst is related to brain activity during episodic memory retrieval.
Intelligent sensor and information acquisition
Tao Mei, Xiaohua Wang, Yunjian Ge
The information chain consists of information acquisition, processing, transmission, and application. Research on information acquisition has been scattered across many disciplines because of its multidisciplinary nature. As a result, progress in information acquisition has been restricted, and it has become the bottleneck in the information flow. This paper studies the process and theory of information acquisition and proposes a discipline system for information acquisition science and technology.
Accelerating multiphysics modeling using FPGA
A multiphysics system involves the interaction of different processes, including electrical, mechanical, and chemical processes, and modeling such a system is a complicated task. A physically-based modeling technique starts from a set of governing differential equations. Analytic solutions are hard to achieve, and numerical simulation generally requires intensive computational power and excessive execution time; a general-purpose processor cannot satisfy both the performance and speed requirements. This paper presents an FPGA-based architecture that can speed up multiphysics system modeling by one to two orders of magnitude. Hardware architectures for the equations used to model both linear and nonlinear systems are presented, providing an FPGA-based platform on which multiple equations can be integrated simultaneously in a collaborative mode. This methodology exploits both the parallel and pipeline mechanisms of the FPGA to accelerate complex system simulation. The performance of the FPGA-based architectures is tested using initial value problem case studies. The implementation results show that the FPGA-based computing engine provides satisfactory computational accuracy, fast implementation speed, and affordable low cost.
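The kind of computation an FPGA pipeline parallelizes here is fixed-step numerical integration, where every state equation updates simultaneously each step. A minimal software sketch, using forward Euler and a coupled linear toy system of my choosing (the paper's actual equations and hardware mapping are not reproduced):

```python
import numpy as np

def euler(f, y0, t0, t1, n):
    """Forward Euler for y' = f(t, y): n fixed steps from t0 to t1."""
    y = np.asarray(y0, dtype=float)
    h = (t1 - t0) / n
    t = t0
    for _ in range(n):
        y = y + h * f(t, y)   # all state equations advance in one parallel step
        t += h
    return y

# Coupled toy system (e.g., a damped electro-mechanical oscillator): y' = A y.
A = np.array([[0.0, 1.0],
              [-1.0, -0.1]])
y_final = euler(lambda t, y: A @ y, [1.0, 0.0], 0.0, 1.0, 1000)
```

On the FPGA, the per-step update for each equation becomes a pipelined datapath, and the coupled equations exchange state once per step; in software the same structure is just the vector update inside the loop.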
Poster Session
An analysis of near-field rocket noise involving shock waves
Near-field rocket noise, together with the initial shock wave, is harmful to humans and launch equipment. Rocket noise and the shock wave are inherent flow phenomena in the exhaust flow field of a rocket engine during launch. Experiments were conducted to investigate the characteristics of this special flow. Pressure was measured by piezoresistive pressure transducers with high natural frequency, and shock wave data combined with rocket noise were obtained. This paper describes several analytical methods applied to the pressure-time histories from these experiments. In the time domain, parameters such as peak overpressure, positive duration and waveform coefficient are determined by traditional methods, and the frequency band of the initial shock wave can be estimated from these parameters. This result, however, differs considerably from that obtained by frequency-domain analysis, which is attributed to the abnormal waveform of the initial shock wave. A further analysis is performed by wavelet transform, which confirms the frequency-domain characteristics of the initial shock wave and the near-field rocket noise. Several aspects of near-field and far-field rocket noise are compared on the basis of the above analysis.
Study the characteristics of fire under air flow: wavelet analysis
The environment strongly influences the development and propagation of flame during a fire. When a fire is subjected to air flow, the flame structure is changed by the flow, and the effect depends on flow parameters such as velocity and flux. The influence of air flow on fire is an important topic in fire dynamics research. An experimental study was conducted on the structural change of a pool fire under air flows of fixed scale but different velocities. Flame temperature was measured by fine-wire thermocouples at different positions in the fire, a heat flux gauge measured the change in heat flux, and a pair of photoelectric probes measured the flame fluctuation. The air flow velocity was measured, and all test data were processed by wavelet transforms. Wavelet analysis separates the low-frequency components, corresponding to the stable part of the fire, from the high-frequency components, corresponding to the unstable part. The data processing reveals a stability threshold that disappears above a certain air flow velocity. These results are also compared with those obtained from short-time Fourier transforms.
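The separation of stable (low-frequency) and fluctuating (high-frequency) parts described above is exactly what one level of a wavelet decomposition produces. A minimal sketch using the Haar wavelet on a synthetic trace of my own devising (the paper does not specify which wavelet it used):

```python
import numpy as np

def haar_split(x):
    """One-level orthonormal Haar DWT: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation: slow, stable trend
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail: rapid fluctuation
    return a, d

t = np.arange(256)
# Synthetic "thermocouple" trace: slow drift plus sample-to-sample flicker.
signal = 0.01 * t + np.where(t % 2 == 0, 0.5, -0.5)
a, d = haar_split(signal)   # a carries the drift, d isolates the flicker
```

Repeating the split on the approximation coefficients gives the multi-level decomposition used to compare stable and unstable parts of the flame signals; the orthonormal transform conserves signal energy across the two channels.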
Pattern Recognition
Subpixel jitter video restoration on board of micro-UAV
Harold H. Szu, James R. Buss, Joseph P. Garcia, et al.
We review various image processing algorithms for micro-UAV EO/IR sub-pixel jitter restoration. Since the micro-UAV Silver Fox cannot afford isolation mounting to decouple it from the turbulent aerodynamics of the airframe, we explore smart real-time software to mitigate the sub-pixel jitter effect. We define jitter as sub-pixel or small-amplitude vibration of up to one pixel, as opposed to motion blur over several pixels, for which real-time correction algorithms already exist on other platforms. We divide the jitter correction algorithms into several categories: real-time, pseudo-real-time, and non-real-time. All are standalone, i.e., they do not rely on library storage or a flight database on board the UAV. The top algorithm on the list, a truly unsupervised real-time method, is demonstrated and reported here using real-world data.