Proceedings Volume 6247

Independent Component Analyses, Wavelets, Unsupervised Smart Sensors, and Neural Networks IV


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 17 April 2006
Contents: 9 Sessions, 37 Papers, 0 Presentations
Conference: Defense and Security Symposium 2006
Volume Number: 6247

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • 2006 Wavelet Pioneer Award
  • Wavelets and Signal Processing
  • ICA Unsupervised Learning Award
  • Real World Applications
  • Real World Applications Poster Session
  • Biomimetics
  • Neural Network Classifier
  • Invited Session Smart Firmwares
  • Smart Firmwares II
2006 Wavelet Pioneer Award
From the DFT to wavelet transforms
Mark J. T. Smith
The DFT has a long history as a powerful tool for addressing signal processing challenges. Similarly, time-frequency representations like the short-time Fourier transform have been explored extensively for processing signals whose properties change with time. More recently there has been tremendous interest in applying wavelets and filter banks to signal processing problems. This paper is intended as a high-level overview of filter banks and wavelets and their relationships to the traditional discrete Fourier and short-time Fourier transforms. An extensive set of references is provided to assist the interested reader in learning more about this exciting field.
Wavelets and Signal Processing
Discrete wavelet transform FPGA design using MatLab/Simulink
Uwe Meyer-Baese, A. Vera, A. Meyer-Baese, et al.
Design of current DSP applications using state-of-the-art multi-million-gate devices requires a broad foundation of engineering skills, ranging from knowledge of hardware-efficient DSP algorithms to CAD design tools. The requirement of short time-to-market, however, calls for replacing traditional HDL-based designs with a MatLab/Simulink-based design flow. This not only allows the over one million MatLab users to design FPGAs but also bypasses the hardware design engineer, leading to a significant reduction in development time. Critical to this design flow, however, are: (1) quality of results, (2) sophistication of the Simulink block library, (3) compile time, (4) cost and availability of development boards, and (5) cost, functionality, and ease of use of the FPGA vendor-provided design tools.
Nonrectangular wavelets for multiresolution mesh analysis and compression
Kıvanc Köse, A. Enis Çetin, Uğur Güdükbay, et al.
We propose a new Set Partitioning In Hierarchical Trees (SPIHT) based mesh compression framework. The 3D mesh is first transformed to 2D images on a regular grid structure. Then this image-like representation is wavelet transformed and SPIHT is applied to the wavelet-domain data. The method is progressive because the resolution of the reconstructed mesh can be changed by varying the length of the 1D data stream created by the SPIHT algorithm. Nearly perfect reconstruction is possible if the full-length 1D data stream is received.
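The progressive behavior described above can be sketched with a toy example: a one-level 2D Haar transform of one geometry-image channel, truncated to the k largest-magnitude coefficients. This magnitude-ordered truncation is only a simple stand-in for the bit-plane ordering SPIHT actually performs, and all array sizes are illustrative.

```python
import numpy as np

def haar2d(img):
    """One-level 2D orthonormal Haar transform."""
    a = (img[0::2] + img[1::2]) / np.sqrt(2)        # vertical averages
    d = (img[0::2] - img[1::2]) / np.sqrt(2)        # vertical details
    rows = np.vstack([a, d])
    return np.hstack([(rows[:, 0::2] + rows[:, 1::2]) / np.sqrt(2),
                      (rows[:, 0::2] - rows[:, 1::2]) / np.sqrt(2)])

def ihaar2d(coef):
    """Inverse of haar2d."""
    n = coef.shape[1] // 2
    a2, d2 = coef[:, :n], coef[:, n:]
    rows = np.empty_like(coef)
    rows[:, 0::2] = (a2 + d2) / np.sqrt(2)
    rows[:, 1::2] = (a2 - d2) / np.sqrt(2)
    m = coef.shape[0] // 2
    a, d = rows[:m], rows[m:]
    img = np.empty_like(coef)
    img[0::2] = (a + d) / np.sqrt(2)
    img[1::2] = (a - d) / np.sqrt(2)
    return img

rng = np.random.default_rng(0)
geom = rng.normal(size=(8, 8))        # one coordinate channel of a geometry image
c = haar2d(geom)
errs = {}
for k in (16, 64):                    # longer coefficient stream -> finer reconstruction
    thresh = np.sort(np.abs(c).ravel())[-k]
    kept = np.where(np.abs(c) >= thresh, c, 0.0)
    errs[k] = float(np.linalg.norm(ihaar2d(kept) - geom))
```

Keeping all 64 coefficients reconstructs the channel essentially exactly; truncating to 16 gives a coarser but usable approximation, which is the progressive property the paper relies on.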
Sensor and system health management simulation
Abolfazl M. Amini
The health of a sensor and system is monitored using information gathered from the sensor. First, a normal mode of operation is established; any deviation from the normal behavior indicates a change. Second, the sensor information is simulated by a main process, which is defined by a step-up, a drift, and a step-down. Sensor disturbances and a spike are added while the system is in drift. The system runs for a period of at least three time constants of the main process every time a process feature occurs (e.g. a step change). Wavelet transform analysis is performed on three sets of data: the simulated data described above with Poisson-distributed noise, real manifold pressure data, and real valve data. Simulated data with Poisson-distributed noise at SNRs ranging from 10 to 500 were generated; due to page limitations, only the results for an SNR of 50 are reported. The data are analyzed using continuous as well as discrete wavelet transforms. The results indicate distinct shapes corresponding to each process.
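A minimal sketch of the simulated main process (step-up, drift with a spike, step-down, each run for three time constants) and a level-1 Haar detail analysis that localizes the spike. The time constant, noise scale, and spike amplitude are illustrative, and the paper's Poisson noise is approximated here by Gaussian noise at a comparable SNR.

```python
import numpy as np

rng = np.random.default_rng(1)
tau = 50                                  # main-process time constant (illustrative)
t = np.arange(3 * tau)                    # each feature runs for three time constants
step_up = 1.0 - np.exp(-t / tau)
drift = step_up[-1] + 0.002 * t
step_down = drift[-1] * np.exp(-t / tau)
clean = np.concatenate([step_up, drift, step_down])
noisy = clean + rng.normal(scale=clean.mean() / 50, size=clean.size)  # SNR ~ 50

spike_at = len(step_up) + 80              # sensor spike while the system is in drift
noisy[spike_at] += 0.5

# Level-1 Haar detail coefficients localize the abrupt spike in time.
detail = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)
loc = 2 * int(np.argmax(np.abs(detail)))  # sample index of the detected disturbance
```

The largest detail coefficient lands on the pair of samples containing the spike, while the smooth step and drift features spread their energy across coarser scales.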
Cross-sensor fusion of imagery for improved information extraction
We combined cross-sensor data in a way that leads to improved extraction of information from disparate sensors. We present a new method for signal fusion that uses different transforms for the forward transforms of two images and a common transform for the inverse. When using a fusion rule that selects the maximum value between images, we were able to transfer more energy to the result with our method. Our method could form the basis of a new image fusion approach because it offers a way to transfer more energy to the result that is not possible with a conventional approach.
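As a baseline for comparison, the conventional single-transform approach can be sketched as follows: both images go through one common transform, the larger-magnitude coefficient is kept at each position, and a single inverse is applied. The paper's variant, with two different forward transforms, is not reproduced here; the point of the sketch is only that the max rule guarantees the fused spectrum carries at least as much energy as either input.

```python
import numpy as np

def fuse_max(img_a, img_b):
    """Max-rule fusion in a common transform domain (conventional baseline)."""
    A, B = np.fft.fft2(img_a), np.fft.fft2(img_b)
    fused = np.where(np.abs(A) >= np.abs(B), A, B)   # keep the stronger coefficient
    return np.fft.ifft2(fused).real

def spec_energy(img):
    """Total energy in the 2D Fourier domain."""
    return float(np.sum(np.abs(np.fft.fft2(img)) ** 2))

rng = np.random.default_rng(2)
a, b = rng.random((16, 16)), rng.random((16, 16))
f = fuse_max(a, b)
```

Because each fused coefficient has the maximum magnitude of the two inputs, `spec_energy(f)` is at least `spec_energy(a)` and `spec_energy(b)`.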
Implementation of an adaptive wavelet system for distortion correction in smart interferometric sensors
Interferometric sensors are effective for monitoring shape, displacement, or deformation in smart structures. Wavelet-based ridge extraction will be applied to determine phase (and superimposed noise) in 1-D signals taken from 2-D interferograms. A second, adaptive wavelet system will be designed and implemented to achieve distortion correction, analogous to adaptive wavelet echo cancellation.
ICA Unsupervised Learning Award
Blind source separation of convolutive mixtures
This paper introduces the blind source separation (BSS) of convolutive mixtures of acoustic signals, especially speech. A statistical and computational technique, called independent component analysis (ICA), is examined. By achieving nonlinear decorrelation, nonstationary decorrelation, or time-delayed decorrelation, we can find source signals only from observed mixed signals. Particular attention is paid to the physical interpretation of BSS from the acoustical signal processing point of view. Frequency-domain BSS is shown to be equivalent to two sets of frequency domain adaptive microphone arrays, i.e., adaptive beamformers (ABFs). Although BSS can reduce reverberant sounds to some extent in the same way as ABF, it mainly removes the sounds from the jammer direction. This is why BSS has difficulties with long reverberation in the real world. If sources are not "independent," the dependence results in bias noise when obtaining the correct separation filter coefficients. Therefore, the performance of BSS is limited by that of ABF. Although BSS is upper bounded by ABF, BSS has a strong advantage over ABF. BSS can be regarded as an intelligent version of ABF in the sense that it can adapt without any information on the array manifold or the target direction, and sources can be simultaneously active in BSS.
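The time-delayed decorrelation route mentioned above can be sketched with an AMUSE-style toy example: whiten the mixtures, then diagonalize a symmetrized lagged covariance. The sources, mixing matrix, and lag below are illustrative; real convolutive speech mixtures would require the frequency-domain machinery the paper describes.

```python
import numpy as np

n = 4000
s1 = np.sin(0.03 * np.arange(n))              # temporally structured sources
s2 = np.sign(np.sin(0.011 * np.arange(n)))
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # unknown instantaneous mixing
X = A @ S                                     # observed mixtures only

X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)            # zero-lag covariance
W = E @ np.diag(d ** -0.5) @ E.T              # whitening matrix
Z = W @ X
lag = 25
C = Z[:, :-lag] @ Z[:, lag:].T / (n - lag)    # time-delayed covariance
_, U = np.linalg.eigh((C + C.T) / 2)          # symmetrize, then find the rotation
Y = U.T @ Z                                   # recovered sources (up to order/sign/scale)
```

The rotation that diagonalizes the lagged covariance separates the sources because their lagged autocorrelations differ, which is exactly the "time-delayed decorrelation" criterion named in the abstract.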
Real World Applications
Interference and noise adjusted principal components analysis for hyperspectral remote sensing image compression
Hyperspectral remote sensing images have high spectral resolution that enables accurate object detection, classification, and identification. But their vast data volume brings about problems in data transmission, storage, and analysis. How to reduce the data volume while keeping the important information for subsequent data exploitation is a challenging task. Principal Components Analysis (PCA) is a typical method for data compression, which re-arranges image information into the first several principal component images in terms of variance maximization. But variance is not a good criterion for ranking images. Instead, the signal-to-noise ratio (SNR) is a more reasonable criterion, and the resulting PCA is called Noise Adjusted Principal Components Analysis (NAPCA). It is also known that interference is a very serious problem in hyperspectral remote sensing images, induced by many unknown and unwanted signal sources extracted by hyperspectral sensors. The signal-to-interference-plus-noise ratio (SINR) was proposed as a more appropriate ranking criterion, and the resulting PCA is referred to as Interference and Noise Adjusted PCA (INAPCA). In this paper, we investigate the application of INAPCA to hyperspectral image compression and compare it with PCA- and NAPCA-based compression. The focus is the analysis of their impacts on subsequent data exploitation (such as detection and classification). It is expected that, using NAPCA and INAPCA, higher detection and classification rates can be achieved with a comparable or even higher compression ratio. The results will be compared with popular wavelet-based compression methods, such as JPEG 2000, SPIHT, and SPECK.
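Variance-ranked PCA compression can be sketched on a synthetic cube as follows; NAPCA and INAPCA would differ only in first whitening by an estimated noise (or interference-plus-noise) covariance, so that components are ranked by SNR or SINR instead of raw variance. All dimensions and signatures here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
bands, pixels = 30, 500
e1 = np.sin(np.linspace(0, 3, bands))          # two smooth endmember spectra
e2 = np.cos(np.linspace(0, 2, bands))
abund = rng.random((2, pixels))                # per-pixel abundances
X = np.outer(e1, abund[0]) + np.outer(e2, abund[1])
X += rng.normal(scale=0.05, size=X.shape)      # per-band sensor noise

mean = X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 2                                          # keep the top variance-ranked PCs
X_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k] + mean
rel_err = float(np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```

Two principal components capture the rank-2 signal, so the compressed cube retains almost everything but the noise floor; the paper's point is that when noise or interference is band-dependent, variance ranking can misorder components unless the data are noise-whitened first.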
Unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization
This paper presents an approach for the simultaneous determination of endmembers and their abundances in hyperspectral imagery unmixing using a constrained positive matrix factorization (PMF). The algorithm presented here solves the constrained PMF using the Gauss-Seidel method. It alternates between an endmember-matrix updating step and an abundance estimation step until convergence is achieved. Preliminary results using a subset of a HYPERION image taken in southwest Puerto Rico are presented. These results show the potential of the proposed method to solve the unsupervised unmixing problem.
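The alternating two-block structure can be sketched with Lee-Seung multiplicative updates, used here only as a simple nonnegativity-preserving stand-in for the paper's Gauss-Seidel constrained-PMF solver; sizes, signatures, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
bands, pixels, p = 20, 200, 3
E_true = rng.random((bands, p))                    # nonnegative endmember signatures
A_true = rng.dirichlet(np.ones(p), size=pixels).T  # abundances (sum to one)
X = E_true @ A_true                                # noiseless mixed pixels

# Alternate between updating abundances A and endmembers E; the multiplicative
# form keeps both factors nonnegative at every step.
E = rng.random((bands, p))
A = rng.random((p, pixels))
for _ in range(300):
    A *= (E.T @ X) / (E.T @ E @ A + 1e-9)
    E *= (X @ A.T) / (E @ A @ A.T + 1e-9)
rel_err = float(np.linalg.norm(E @ A - X) / np.linalg.norm(X))
```

The alternation converges to a nonnegative factorization that reproduces the data; the paper's constrained solver additionally enforces the sum-to-one abundance constraint, which this sketch omits.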
Real World Applications Poster Session
The joint time-frequency spectrogram structure of heptanes boilover noise
An experiment was conducted to study the noise characteristics of the boilover phenomenon. Boilover occurs in the combustion of a liquid fuel floating on water. It causes a sharp increase in burning rate and external radiation, and explosive burning of the fuel poses a potential safety hazard. Combustion noise accompanies the development of the fire and displays different characteristics in typical periods; these characteristics can be used to predict the start time of boilover. The acoustic signal of the boilover process during the combustion of a heptane-water mixture was obtained in a set of experiments. The joint time-frequency analysis (JTFA) method is applied in the treatment of the noise data. Several JTFA algorithms were used in the evaluation, including the Gabor spectrogram, adaptive spectrogram, cone-shape distribution, Choi-Williams distribution, Wigner-Ville distribution, and short-time Fourier transform with different windows such as rectangular, Blackman, Hamming, and Hanning. Time-frequency distribution patterns of the combustion noise are obtained and compared with those from jet flow and small plastic bubble blow-up.
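Of the JTFA algorithms listed, the short-time Fourier transform is simple enough to sketch directly; the windowed-frame spectrogram below localizes a level change in a toy tone, standing in for the combustion-noise records (sampling rate, window, and signal are illustrative).

```python
import numpy as np

def stft_spectrogram(x, win_len=64, hop=16):
    """Magnitude spectrogram via the short-time Fourier transform, Hann window."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T   # shape: (freq bins, time frames)

fs = 1000.0
t = np.arange(2000) / fs
# Toy stand-in for a combustion-noise record: a 50 Hz tone whose level jumps mid-record.
x = np.sin(2 * np.pi * 50 * t) * np.where(t < 1.0, 0.2, 1.0)
S = stft_spectrogram(x)
```

The spectrogram keeps the 50 Hz ridge fixed in frequency while its magnitude jumps at t = 1 s, the kind of joint time-frequency signature the paper uses to mark the onset of a combustion regime.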
Passive IR flexi-scope with two spectral colors for household screening of gastrointestinal disorders
Kenneth Byrd, Harold Szu
According to our generalized Shannon Sampling Theorem of developmental system biology, we should increase the sampling frequency of the passive Infrared (IR) spectrum ratio test, (I8~12μm / I3~5μm). This procedure proved to be effective in DCIS using satellite-grade IR spectrum cameras for an early developmental symptom of the "angiogenesis" effect. Thus, we propose to augment the annual hospital checkup, or biannual colonoscopy, with an inexpensive non-imaging IR flexi-scope intensity measurement device that could be used regularly at a household residence without the need for a doctor's expertise or a database system. The only required component would be a smart PC, which would be used to compute the degree of thermal activity through the IR spectral ratio. It would also be used to keep records and send them to the medical center for tele-diagnosis. For the purpose of household screening, we propose to have two integrated passive IR probes of dual-IR-color spectrum inserted into the body via an IR fiber-optic device. In order to extract the percentage of malignancy, based on the ratio of dual-color IR measurements, the key enabler is an unsupervised learning algorithm in the sense of the Duda & Hart unlabelled-data classifier without lookup-table exemplars. This learning methodology belongs to the Natural Intelligence (NI) of the human brain, which can effortlessly reduce the redundancy of paired inputs and thereby enhance the Signal-to-Noise Ratio (SNR) better than any single sensor for salient feature extraction. Thus, we can go beyond a closed-database AI expert system and tailor to the individual ground truth without the biases of prior knowledge.
Designs of solar voltaic cells based on carbon nano-tubes
Recently, Freitag et al. (NL 2002), Guo et al. (PR 2004), and J.-U. Lee (APL 2005) of GE demonstrated about 5% conversion efficiency in a voltaic cell using homogeneous lithographic fabrication of single-wall carbon nanotubes (CNT, or SWNT) as a p-n junction diode tuned to a 1.5 μm CW IR laser diode under 10 mW. We apply a NanoRobot of Xi and Szu to build a heterogeneous structure of a bundle of 4 CNTs for 4 solar spectral bands in 4 polarization orientations, a total of 16 CNTs per single solar-cell unit. In so doing, we design a circuit of 3 electrodes for 16 sources, which can increase the efficiency of capturing the solar irradiance to 80% per unit. The electron-hole pairs generated by the Einstein photoelectric effect in the one-dimensional semiconductor band-gap material (carbon nanotubes) will be collected by a bias voltage on the side.
Hearing loss treatment through stem cell therapy
Despite the many challenges, cell therapy will revolutionize medicine. With the use of cell therapies, hearing loss can be restored. Cell therapies have also shown great promise in helping to repair catastrophic spinal injuries and helping victims of paralysis regain movement. Cell therapy can be defined as a group of new techniques, or technologies, that rely on replacing diseased or dysfunctional cells with healthy, functioning ones. These new techniques are being applied to hearing loss and damaged ear components.
Reducing blocking artifacts in JPEG with Mill's Cross technique
JPEG is one of the standard Internet compression formats. With high compression rates, JPEG produces image flaws known as blocking artifacts. One method for reducing these artifacts is the application of a technique similar to that of Mill's Cross.
Jet noise analysis by Gabor spectrogram
Research was conducted to determine the functions of a set of nozzle pairs. The aeroacoustic performance of these pairs can be used to analyze the deformation of structure and the change of jet condition. The jet noise signal was measured by a microphone placed in the radiation field of the jet flow. In addition to some traditional methods used for analyzing noise in both the time and frequency domains, the Gabor spectrogram is adopted to obtain the joint time-frequency pattern of the jet noise under different jet conditions from nozzles with different structures. The jet noise from three nozzle pairs operated under two types of working conditions is treated by the Gabor spectrogram. In one condition, both nozzles in the pair keep their structure at a fixed chamber pressure; in the other, the throat size of one of the two nozzles decreases during the jet procedure at a fixed chamber pressure. Gabor spectrograms with different orders for the jet noise under the second condition are obtained and compared, and a suitable order is then selected for analyzing the jet noise. Results are presented in this paper. The Gabor spectrogram patterns of these two conditions differ markedly. The noise keeps its frequency peak during the whole jet procedure in the first condition, but there is a frequency-peak shift in the second condition at a certain throat size. The distribution of the frequency peak as the throat size decreases presents two states. This would be helpful for nozzle structure recognition.
Authenticated, private, and secured smart cards (APS-SC)
From a historical perspective, recent advancements in better antenna designs, low-power circuit integration, and inexpensive fabrication materials have made possible a miniature counter-measure against radar: a clutter behaving like a fake target return, called Digital Reflection Frequency Modulation (DRFM). Such a military counter-measure has found its way into commerce as a near-field communication known as Radio Frequency Identification (RFID), a passive or active item tag T attached to every readable-writable Smart Card (SC): passport ID, medical patient ID, biometric ID, driver licenses, book ID, library ID, etc. These avalanche phenomena may be due to 3rd-generation phones seeking much more versatile and inexpensive interfaces than the line-of-sight bar-code optical scan. Despite the popularity of RFID, the lack of Authenticity, Privacy, and Security (APS) protection has somewhat restricted widespread commercial, financial, medical, legal, and military applications. A conventional APS approach can obfuscate a private passkey K of the SC with the tag number T or the reader number R, or both; i.e., only T*K or R*K or both will appear on them, where * denotes an invertible operation, e.g. EXOR, but not limited to it. Then only the authentic owner, knowing all, can invert the operation, e.g. EXOR*EXOR = I, to find K. However, such an encryption could easily be compromised by a hacker searching exhaustively by comparison against frequently used words. Nevertheless, knowing the biological wetware lesson of the power of paired sensors and the history of radar hardware counter-measures, we can counter the DRFM counter-measure: instead of using one RFID tag per SC, we follow Nature in adopting two ears/tags, e.g. each one holding portions of the ID, or simply two different IDs readable only by different modes of the interrogating reader, followed by a brain-like central processor in terms of nonlinear invertible shufflers mixing the two ID bit streams.
We prefer to adopt such a hardware-software combined hybrid approach because the phase space of a single RFID is too limited for any meaningful encryption approach. Furthermore, a useful biological lesson is not to put all eggs in one basket: "if you don't get it all, you can't hack it." According to radar physics, we can choose the amplitude, the frequency, the phase, the polarization, and two radiation energy supply principles, capacitive coupling (~6 m) and inductive coupling (<1 m), to code the pair of tags differently. A casual skimmer equipped with a single-mode reader cannot read all. We consider both near-field and mid-field applications in this paper. The near-field case is at check-out counters or conveyor-belt inventory involving sensitive and invariant data. The mid-field search-and-rescue case involves not only item/person identification but also geo-location. If more RF power becomes cheaper and portable for longer propagation distance in the near future, then triangulation with a pair of secured readers, located at known geo-locations, could interrogate and identify items/persons and their locations in a GPS-blind environment.
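The invertible-obfuscation idea (T*K with * = EXOR) and the two-tag split can be sketched as follows; key widths and the high/low split are illustrative.

```python
import secrets

# Conventional single-tag obfuscation: only T*K (here * = EXOR) is stored,
# and the authentic owner inverts it because EXOR is its own inverse.
K = secrets.randbits(64)          # private passkey of the smart card
T = secrets.randbits(64)          # tag number
stored = T ^ K                    # what a skimmer can read
recovered = stored ^ T            # (T ^ K) ^ T == K

# Two-tag variant: the ID is split across two tags readable only in different
# modes, so a single-mode skimmer never sees both halves.
id_bits = secrets.randbits(64)
tag_a = id_bits >> 32             # high half, e.g. capacitively coupled
tag_b = id_bits & 0xFFFFFFFF      # low half, e.g. inductively coupled
reassembled = (tag_a << 32) | tag_b
```

Only a reader that can interrogate both coupling modes recovers the full ID, which is the "don't put all eggs in one basket" point of the abstract.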
Biomimetics
Detecting low-frequency functional connectivity in fMRI using unsupervised clustering algorithms
Recent research in functional magnetic resonance imaging (fMRI) revealed slowly varying, temporally correlated fluctuations between functionally related areas. These low-frequency oscillations of less than 0.08 Hz appear to be a property of symmetric cortices, and they are known to be present in the motor cortex among others. These low-frequency data are difficult to detect and quantify in fMRI. Traditionally, user-defined regions of interest (ROIs) or "seed clusters" have been the primary analysis method. We propose in this paper to employ unsupervised clustering algorithms with arbitrary distance measures to detect the resting state of functional connectivity. There are two main benefits of using unsupervised algorithms instead of traditional techniques: (1) the scan time is reduced by finding the activation data set directly, and (2) the whole data set is considered, not a relative correlation map. The achieved results are evaluated for different distance metrics. The Euclidean metric implemented by the standard unsupervised clustering approaches is compared with a more general topographic mapping of proximities based on the correlation and the prediction error between time courses. Thus, we are able to detect functional connectivity based on model-free analysis methods implementing arbitrary distance metrics.
A biologically inspired neural oscillator network for geospatial analysis
Robert S. Rand, DeLiang Wang
A biologically plausible neurodynamical approach to scene segmentation based on oscillatory correlation theory is investigated. A network of relaxation oscillators, based on the Locally Excitatory Globally Inhibitory Oscillator Network (LEGION), is constructed and adapted to geospatial data with varying ranges and precision. This nonlinear dynamical network is capable of achieving segmentation of objects in a scene through the synchronization of oscillators that receive local excitatory inputs from a collection of local neighbors and desynchronization between oscillators corresponding to different objects. The original LEGION model is sensitive to several aspects of the data that are encountered in real imagery, and achieving good performance across these different data types requires constant adjustment of the parameters that control excitatory and inhibitory connections. In this effort, the connections in the oscillator network are modified to reduce this sensitivity, with the goal of eliminating the need for parameter adjustment. We assess the ability of the proposed approach to perform natural and urban scene segmentation for geospatial analysis. Our approach is tested on simulated scene data as well as real imagery with varying gray-shade ranges and scene complexity.
Real-time fusion of two polarization images overcoming hazy or misty days
Mehmet Kurum, Harold Szu
It seems pertinent to ask how all the ways animals exploit polarized-light information can be utilized in a real-time, one-step fusion algorithm for artificial sensors: contrast enhancement and haze reduction, breaking camouflage, optical signaling, and detecting particular polarization as a compass. With a proper optical setup and computer vision analysis, polarization information can be captured and used to enhance traditional polarization-insensitive vision techniques. In this paper, most of the exciting ways in which animals make use of the various forms of polarized light prevailing in their visual worlds are investigated. We present the basics of polarization sensing and then discuss integrated unsupervised polarization imaging sensors.
Thermodynamic free-energy minimization for unsupervised fusion of dual-color infrared breast images
This paper presents the algorithmic details of an unsupervised neural network and an unbiased diagnostic methodology; that is, no lookup table is needed that labels the input training data with desired outputs. We deploy the smart algorithm on two satellite-grade infrared (IR) cameras. Although an early malignant tumor must be small in size and cannot be resolved by a single pixel, which images hundreds of cells, these cells reveal themselves physiologically by spontaneously emitting thermal radiation due to the rapid-cell-growth angiogenesis effect (in Greek: vessel generation for increasing tumor blood supply), shifting, according to physics, toward a shorter-wavelength IR emission band. If we use such exceedingly sensitive IR spectral-band cameras, we can in principle detect whether a breast tumor is malignant through a thin blouse in a close-up dark room. If this protocol turns out to be reliable in a large-scale follow-on Vatican experiment in 2006, which might generate business investment interest in the nano-engineered manufacture of a nano-camera made of 1-D carbon nanotubes without the traditional liquid-nitrogen coolant required for mid-IR cameras, then one can accumulate the probability of any type of malignant tumor at every pixel over time in the comfort of privacy without religious or other concerns. Such a non-intrusive protocol alone may not provide enough information to make the decision, but the changes tracked over time will surely become significant. Such an ill-posed inverse heat-source transfer problem can be solved because of the universal constraint of equilibrium physics governing the blackbody Planck radiation distribution, to be spatio-temporally sampled. Thus, we must gather two snapshots with two IR cameras to form a vector of data X(t) per pixel and invert the matrix-vector equation X = [A]S pixel-by-pixel independently, known as single-pixel blind sources separation (BSS).
Because the unknown heat transfer matrix, or impulse response function, [A] may vary from the point tumor to its neighborhood, we cannot rely on neighborhood statistics as is done in the popular unsupervised independent component analysis (ICA) statistical method; we instead impose the physics equilibrium condition of the minimum of the Helmholtz free energy, H = E - T0S. In the case of a point breast cancer, we can assume a constant ground-state energy E0, normalized by the benign neighborhood tissue, and the excited state can then be computed by means of a Taylor series expansion in terms of the pixel I/O data. We can augment the X-ray mammogram technique with passive IR imaging to reduce the unwanted X-rays during chemotherapy recovery. When the sequence is animated into a movie and the recovery dynamics is played backward in time, the movie simulates the cameras' potential for early detection without suffering the PD=0.1 search uncertainty. In summary, we applied two satellite-grade dual-color IR imaging cameras and an advanced military automatic target recognition (ATR) spectrum fusion algorithm at the middle-wavelength IR (3-5 μm) and long-wavelength IR (8-12 μm), which are capable of screening malignant tumors, as demonstrated by the time-reversed animated-movie experiments. By contrast, traditional thermal breast scanning/imaging, known for decades as thermography, was IR-spectrum-blind and limited to a single night-vision camera, and the necessary wait for a cool-down period before taking a second look for change detection suffers too many environmental and personnel variabilities.
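The single-pixel inversion X = [A]S can be sketched as follows, under the simplifying assumption that the 2x2 heat-transfer matrix [A] is already known (in the paper it is estimated by minimizing the Helmholtz free energy H = E - T0S); the band responses below are illustrative numbers, not measured camera data.

```python
import numpy as np

A = np.array([[0.9, 0.3],     # MWIR (3-5 um) response to [tumor, tissue] sources
              [0.2, 0.8]])    # LWIR (8-12 um) response (illustrative values)
rng = np.random.default_rng(7)
S_true = rng.random((2, 1000))     # per-pixel source strengths for 1000 pixels
X = A @ S_true                     # the two co-registered IR snapshots
S_hat = np.linalg.solve(A, X)      # invert X = [A]S pixel-by-pixel, no neighborhood statistics
```

Each pixel is unmixed independently of its neighbors, which is what distinguishes this physics-constrained single-pixel BSS from neighborhood-statistics ICA.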
Simplifying Hill-based muscle models through generalized extensible fuzzy heuristic implementation
Traditional dynamic muscle models based on work initially published by A. V. Hill in 1938 often rely on high-order systems of differential equations. While such models are very accurate and effective, they do not typically lend themselves to modification by clinicians who are unfamiliar with biomedical engineering and advanced mathematics. However, it is possible to develop a fuzzy heuristic implementation of a Hill-based model, the Fuzzy Logic Implemented Hill-based (FLIHI) muscle model, that offers several advantages over conventional state-equation approaches. Because a fuzzy system is oriented by design to describe a model in linguistics rather than ordinary-differential-equation-based mathematics, the resulting fuzzy model can be more readily modified and extended by medical practitioners. It also stands to reason that a well-designed fuzzy inference system can be implemented with a degree of generalizability not often encountered in traditional state-space models. Taking the electromyogram (EMG) as one input to muscle, FLIHI is tantamount to a fuzzy EMG-to-force estimator that captures dynamic muscle properties while providing robustness to partial or noisy data. One goal behind this approach is to encourage clinicians to rely on the model rather than assuming that muscle force as an output maps directly to smoothed EMG as an input. FLIHI's force estimate is more accurate than assuming force equal to smoothed EMG because FLIHI provides a transfer function that accounts for muscle's inherent nonlinearity. Furthermore, employing fuzzy logic should provide FLIHI with improved robustness over traditional mathematical approaches.
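A toy Mamdani-style fragment shows the flavor of such a linguistic model: three triangular membership functions map normalized EMG to force by centroid defuzzification. The breakpoints and rule base are illustrative, not FLIHI's actual design, and the sketch is static where FLIHI captures dynamics.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(0.0, np.minimum((x - a) / (b - a), (c - x) / (c - b)))

def fuzzy_force(emg):
    """Map normalized EMG in [0, 1] to normalized force via three linguistic
    rules (low/medium/high activation -> low/medium/high force) and
    centroid defuzzification."""
    mu = {"low": tri(emg, -0.5, 0.0, 0.5),
          "med": tri(emg, 0.0, 0.5, 1.0),
          "high": tri(emg, 0.5, 1.0, 1.5)}
    force = {"low": 0.1, "med": 0.5, "high": 0.9}   # rule consequents
    return sum(mu[k] * force[k] for k in mu) / sum(mu[k] for k in mu)
```

A clinician can reshape the estimator by editing the linguistic breakpoints or consequents, with no differential equations involved, which is exactly the accessibility argument the abstract makes.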
Neural Network Classifier
Classifying launch/impact events of mortar and artillery rounds utilizing DWT-derived features and feedforward neural networks
Sachi Desai, Myron Hohil, Amir Morcos
Feature extraction methods based on the discrete wavelet transform (DWT) and multiresolution analysis are used to develop a robust classification algorithm that reliably discriminates between launch and impact artillery and/or mortar events via the acoustic signals produced during detonation. Distinct characteristics are found within the acoustic signatures, since impact events emphasize concussive and shrapnel effects, while launch events are similar to explosions, designed to expel and propel an artillery round from a gun. The ensuing signatures are readily characterized by variations in the corresponding peak pressure and rise time of the waveform, differences in the ratio of positive pressure amplitude to negative amplitude, variations in the prominent frequencies associated with the blast events, and variations in the overall duration of the resulting waveform. Unique attributes can also be identified that depend upon the properties of the gun tube, projectile speed at the muzzle, and the explosive/concussive properties associated with the events. In this work, the discrete wavelet transform is used to extract the time-frequency components characteristic of the aforementioned acoustic signatures at ranges exceeding 2 km. The resulting decomposition of the acoustic transient signals is used to produce a separable feature space. Highly reliable classification is achieved with a feedforward neural network classifier trained on a sample space derived from the distribution of wavelet coefficients and higher-frequency details found within different levels of the multiresolution decomposition. The neural network developed herein provides a capability to classify events as either launch (LA) or impact (IM) with an accuracy that exceeds 88%.
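The feature-extraction half of this pipeline can be sketched with Haar detail energies per decomposition level, fed to a logistic-regression stand-in for the paper's feedforward network; the synthetic "launch" and "impact" transients below differ only in rise time and are purely illustrative.

```python
import numpy as np

def haar_detail_energies(x, levels=4):
    """Energy of Haar detail coefficients at each level of a multiresolution
    decomposition: a compact, separable feature vector for transient signals."""
    a, feats = x, []
    for _ in range(levels):
        d = (a[0::2] - a[1::2]) / np.sqrt(2)
        a = (a[0::2] + a[1::2]) / np.sqrt(2)
        feats.append(float(np.sum(d ** 2)))
    return np.array(feats)

rng = np.random.default_rng(8)
def blast(rise):
    """Crude transient: fast or slow rise, slow decay, additive noise."""
    t = np.arange(256.0)
    return np.exp(-t / 60) * (1 - np.exp(-t / rise)) + 0.05 * rng.normal(size=256)

X = np.array([haar_detail_energies(blast(2.0)) for _ in range(40)] +    # sharp onset
             [haar_detail_energies(blast(20.0)) for _ in range(40)])    # slower onset
y = np.array([1] * 40 + [0] * 40)                 # 1 = "impact", 0 = "launch"

# Logistic-regression stand-in for the paper's feedforward network.
Xn = (X - X.mean(0)) / X.std(0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(Xn @ w + b)))
    w -= 0.1 * Xn.T @ (p - y) / len(y)
    b -= 0.1 * float((p - y).mean())
p = 1 / (1 + np.exp(-(Xn @ w + b)))
acc = float(((p > 0.5) == y).mean())
```

The rise-time difference concentrates energy at different decomposition levels, so even this tiny linear classifier separates the two synthetic classes; the paper's real signatures need the richer feature set and network it describes.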
Carbon nanotube noise characterization
Without relying on the cumbersome liquid-nitrogen coolant necessary for conventional mid-IR (3~5 μm wavelength) cameras, we designed a new mid-wave IR camera modeled on the biomimetic two-color receptor system of human vision. We suspended Single-Wall Carbon NanoTube (SWNT) pixels, whose band-gap energy εBG ~ 1/d is tuned for the mid-wave band by the few-nanometer diameter d, over the non-cryogenic long-wave IR (HgCdTe) CCD backplane. To ascertain the noise contribution, in this paper we provide a simple derivation of the frequency-dependent Einstein transport coefficient D(k) = PSD(k), based on the Kubo-Green (KG) formula, which conveniently accommodates experimental data. We conjecture a concave shape of convergence for the 1/k^α power law at α = 2 at optical frequencies, against the overly simple 1-D noise model of about (1/2)kBT, whereas the ubiquitous 1/k^α power law with α = 1 gives a convex shape of divergence. Our formula is based on the Cauchy distribution [1+(kd)^2]^-1, derived from the Fourier transform of the correlation of the charge-carrier wave function being scattered by lattice phonons spreading over the tubular surface of diameter d, similar to the Lorentzian line shape of molecular spectra, exp(-|x|/d). According to the band-gap formula of SWNTs, a narrower SWNT, working similarly to a field-effect transistor (FET), can be tuned to higher optical frequencies, revealing finer details of the lattice spacings a and b. Experimental confirmation of our proposed multiple-scale response formula remains to be done.
Authenticity and privacy of a team of mini-UAVs by means of nonlinear recursive shuffling
Ming-Kai Hsu, Patrick Baier, Ting N. Lee, et al.
We previously developed a real-time EO/IR video counter-jittering sub-pixel image correction algorithm for a single mini Unmanned Air Vehicle (m-UAV) for surveillance and communication (Szu et al., SPIE Proc. Vol. 5439, pp. 183-197, April 12, 2004). In this paper, we plan and execute the next challenge: a team of m-UAVs. The minimum unit for robust chain-saw communication must have the connectivity of five second-nearest-neighbor members with a sliding, arbitrary center. The team members require an authenticity check (AC) within a unit of five in order to carry out jittering mosaic image processing (JMIP) on board every gimbal-less m-UAV. The JMIP does not use any NSA security protocol (cardinal rule: "no man, no NSA codec"). Besides team flight dynamics (Szu et al., "Nanotech applied to aerospace and aeronautics: swarming," AIAA 2005-6933, Sept. 26-29, 2005), several new modules (AOA, AAM, DSK, AC, FPGA) are designed, and the JMIP must provide its own command, control, and communication system, safeguarded by the authenticity and privacy checks presented in this paper. We propose a Nonlinear Invertible (deck-of-cards) Shuffler (NIS) algorithm, which has a Feistel structure similar to the Data Encryption Standard (DES) developed by Feistel et al. at IBM in the 1970s; here, however, the scheme is driven by a set of chaotic Dynamical Shuffler Keys (DSK), re-computable lookup tables generated by an on-board Chaotic Neural Network (CNN). The initializations of the CNN are periodically distributed, via the private version of RSA, from ground control to the team members to avoid any inadvertent failure from a broken chain among m-UAVs. Efficient utilization of communication bandwidth is necessary for a constantly moving and jittering m-UAV platform; the wireless LAN protocol, for example, wastes bandwidth through its constant hand-shaking procedures (as demonstrated by NRL; sensible for PCs and 3rd-generation mobile phones, but not here).
Thus, the chaotic DSK must be embedded in a fault-tolerant neural network associative memory for error-resilient concealment when a mosaic image chip is re-sent. The RSA public and private keys, the chaos type, and the initial value are preset or sent to each m-UAV, so that each platform knows only its own private key. AC among the five team members is possible using a reverse RSA protocol: a hashed image chip is encoded with the sender's private key, known to nobody else, before being sent to its neighbors, and the receiver checks the content by decrypting with the sender's public key and comparing the result with the on-board image chips. We identify a fundamental problem of the digital chaos approach in a finite-state machine, for which a fallacy test of the discrete version with a finite number of bits is needed, as James Yorke advocated early on. Our proposed chaotic NIS for bit-stream protection thus becomes desirable for further mixing the digital CNN outputs. The fault tolerance and parallelism of an artificial neural network associative memory are necessary attributes for neighborhood-smoothness image restoration. The associated computational cost of O(N2) is deemed worthwhile, because the chaotic N-D version of the CNN can further provide privacy for just the lost image chip (N = 8x8) whose re-send is requested by neighbors, and it performs better than a simple 1-D logistic map. We give a preliminary design of low-end FPGA firmware suggesting that all of this can be computed on board.
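A minimal sketch of the NIS idea, under stated assumptions: a generic Feistel network whose round keys come from a logistic-map orbit standing in for the CNN-generated DSK tables. The 16-bit toy round function and constants are hypothetical, not the authors' design; the point illustrated is that a Feistel structure is exactly invertible no matter what round function the chaotic key schedule produces.

```python
def logistic_keys(seed, rounds, mu=3.99):
    """Derive round keys from a logistic-map orbit -- a stand-in for the
    chaotic Dynamical Shuffler Key (DSK), illustrative only."""
    x, keys = seed, []
    for _ in range(rounds):
        x = mu * x * (1.0 - x)
        keys.append(int(x * 0xFFFF) & 0xFFFF)
    return keys

def feistel(block, keys):
    """32-bit Feistel network over 16-bit halves (toy round function)."""
    L, R = (block >> 16) & 0xFFFF, block & 0xFFFF
    for k in keys:
        L, R = R, L ^ (((R * 0x9E37) ^ k) & 0xFFFF)
    return (L << 16) | R

def feistel_inverse(block, keys):
    """Exact inverse: replay the rounds backwards."""
    L, R = (block >> 16) & 0xFFFF, block & 0xFFFF
    for k in reversed(keys):
        L, R = R ^ (((L * 0x9E37) ^ k) & 0xFFFF), L
    return (L << 16) | R
```

Invertibility holds by construction, because each round only swaps halves and XORs one half with a function of the other.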
A bio-nanorobot design for Drosophila therapeutic cloning
To investigate Somatic Cell Nuclear Transfer (SCNT), we chose Drosophila cloning, building on a recent experiment (Haigh, MacDonald, Lloyd, Genetics, Vol. 169, p. 1165, 2005), with the aim of improving the adulthood rate within a 2-week turnaround time. The original 1% success rate might be due to three less certain key steps: (i) The double membrane of the nucleus leads at its pores to the attached rough endoplasmic reticulum (ER), which passes the genetic instructions to assemble amino acids, proteins, and lipids at its smooth end. Also, any mismatch of the nucleus with the mitochondria (MT), which carry their own small genome for energy production, has led to reprogramming failure (D. Wallace, UC Irvine, Nature, Vol. 439, p. 653). We ask whether a guest DNA should come with its servants, the ER, MT, etc., or not. It seems logical to have a whole package replace the contents of the embryonic host cell, equipped with all the housekeeping, energy-production, and mitosis functionalities except the genetic information. To test this hypothesis, we design a bio-NanoRobot with the surgical precision to remove the desired nucleus with or without its attached ER and MT material. The design is based on a real-time multiplexing principle combining the soft-contact vision of Nobel Laureate Binnig's Atomic Force Microscope (AFM) with the hard-grasp action of the NanoRobot (Xi and Szu, 2004). In applying it, however, we must re-design a new bio-NanoRobot consisting of two parts: (a) multiple-resolution analysis (MRA) using AI to control a dual-resolution vision system, the soft-contact-vision AFM co-registered with non-contact high-resolution imaging; and (b) two cantilever arms capable of holding and enucleating a cell. Calibration and automation are controlled by AI case-based reasoning (CBR) together with an AI blackboard (BB) of the taxonomy, necessary for integrating the tolerance and resolution of different tools at the same location.
Moreover, by keeping the biological sample in one place while a set of tools rotates over it, similar to a turret of microscope lenses, we avoid non-real-time re-imaging and inadvertent contamination. By applying an imposed electric field, we can take advantage of the structural differences between the smooth nuclear membranes, which induce Van der Waals forces, and the random cytoplasm. (ii) The re-programming of transplanted cells to the ground state is unclear and usually relies on electrochemical means, tested systematically in a modified 3-D Caltech micro-fluidics system. (iii) Our real-time MRA video-manipulator can elucidate the treadmill assembly mechanism of mitosis over the developmental course of pluripotent stem cell differentiation into specialized tissue for cell engineering. Such a combined bio-NanoRobot and micro-fluidic massively parallel assembly-line approach might not only replace the aspirating pipette with self-enucleating Drosophila embryonic eggs, but also reproduce a large number of cloned embryonic eggs repeatedly for testing various re-programming hypotheses.
4D time-frequency representation for binaural speech signal processing
Hearing is the ability to detect and process auditory information, conveyed from the vibrating hair cilia residing in the organ of Corti of each ear to the auditory cortex of the brain via the auditory nerve. The primary and secondary auditory cortices interact with one another to distinguish and correlate the received information by discriminating the varying spectrum of arriving frequencies. Binaural hearing is nature's way of employing the power inherent in working in pairs to process information, enhance sound perception, and reduce undesired noise. One ear might play a prominent role in sound recognition, while the other reinforces their perceived mutual information. Developing binaural hearing aid devices can be crucial in emulating the working powers of two ears and may be a step closer to significantly alleviating hearing loss of the inner ear. This can be accomplished by combining current speech research with existing technologies such as RF communication between PDAs and Bluetooth. The Ear Level Instrument (ELI), developed by Micro-tech Hearing Instruments and Starkey Laboratories, is a good example of digital bi-directional signal communication between a PDA/mobile phone and a Bluetooth device. The agreement and disagreement of auditory information arriving at the Bluetooth device can be classified as sound and noise, respectively. By finding common features of arriving sound using a four-coordinate system for sound analysis (a four-dimensional time-frequency representation), noise can be greatly reduced and hearing aids would become more efficient. Techniques developed by Szu within Artificial Neural Networks (ANN), Blind Source Separation (BSS), the Adaptive Wavelet Transform (AWT), and Independent Component Analysis (ICA) hold many possibilities for improving the acoustic segmentation of phonemes, all of which are discussed in this paper.
The transmitted and perceived acoustic speech signal will improve, as the binaural hearing aid emulates two ears in sound localization, speech understanding in noisy environments, and loudness differentiation.
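One concrete binaural cue such a paired device could exploit is the interaural time difference. The sketch below (illustrative, not from the paper) estimates it from the peak of the cross-correlation of the two ear signals.

```python
import numpy as np

def interaural_delay(left, right, fs):
    """Estimate the interaural time difference (ITD) between the two ear
    signals via the peak of their cross-correlation; positive means the
    left channel lags the right."""
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    return lag / fs
```

With `mode="full"`, the zero-lag term sits at index `len(right) - 1`, so subtracting it converts the peak index into a signed sample delay.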
Smart internet search engine through 6W
Stephen Goehler, Masud Cader, Harold Szu
Current Internet search engine technology is limited in its ability to display the most relevant information to the user. Yahoo, Google, and Microsoft use lookup tables or indexes, which limit users' ability to find the information they want. While these companies have improved their results over the years by enhancing their existing technology and algorithms with specialized heuristics such as PageRank, there is a need for a next-generation smart search engine that can effectively interpret the relevance of user searches and provide the actual information requested. This paper explores whether a smarter Internet search engine can effectively fulfill a user's needs through the use of 6W representations.
Multimedia data authentication in wavelet domain
With the wide application of multimedia data, multimedia content protection has become urgent. To date, various means have been reported; these can be classified into several types according to their functionality, such as data encryption, digital watermarking, and data authentication, which protect multimedia data's confidentiality, ownership, and integrity, respectively. Several approaches to multimedia data authentication have been proposed. In this paper, a wavelet-based multi-feature semi-fragile authentication scheme is presented. From the approximation component and the energy relationship between the subbands of the detail component, a global feature and a local feature are generated, and from these a global watermark and a local watermark, respectively. The watermarks are then embedded into the multimedia data themselves in the wavelet domain. Both the feature extraction and embedding processes are controlled by secret keys to improve the security of the proposed scheme. At the receiver end, the extracted watermark and the one generated from the received image are compared to determine the tampered locations. The new authentication method is proved valid in experiments: the scheme is robust to general compression, sensitive to cutting, pasting, or modification, efficient for real-time operation, and secure for practical applications.
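The subband energy relationship mentioned above can be illustrated with a one-level 2-D Haar decomposition. The block/bit scheme below is a simplified stand-in for the local feature, not the authors' exact construction (and subband naming conventions vary between texts).

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar decomposition: rows first (vertical detail),
    then columns, giving approximation LL and detail subbands LH, HL, HH."""
    a = img[0::2, :] + img[1::2, :]          # row-pair sums
    d = img[0::2, :] - img[1::2, :]          # row-pair differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0     # horizontal high-pass
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0     # vertical high-pass
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def local_feature_bits(img, block=4):
    """Binary local feature from the energy relationship between the two
    directional detail subbands, computed block by block (illustrative)."""
    _, LH, HL, _ = haar2d(img.astype(float))
    bits = []
    for i in range(0, LH.shape[0] - block + 1, block):
        for j in range(0, LH.shape[1] - block + 1, block):
            e_lh = np.sum(LH[i:i + block, j:j + block] ** 2)
            e_hl = np.sum(HL[i:i + block, j:j + block] ** 2)
            bits.append(1 if e_lh >= e_hl else 0)
    return np.array(bits)
```

On a purely horizontal-stripe image the energy lands in the vertical high-pass band, and on its transpose in the horizontal one, so the feature bits flip accordingly.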
Invited Session Smart Firmwares
icon_mobile_dropdown
Turbo LMS algorithm: Supercharger meets adaptive filter
Adaptive digital filters (ADFs) are, in general, the most sophisticated and resource-intensive components of modern digital signal processing (DSP) and communication systems. Improvements in the performance or complexity of ADFs can have a significant impact on the overall size, speed, and power properties of a complete system. The least mean square (LMS) algorithm is a popular algorithm for coefficient adaptation in ADFs because it is robust, easy to implement, and a close approximation to the optimal Wiener-Hopf least mean square solution. The main weakness of the LMS algorithm is its slow convergence, especially for non-Markov-1 colored-noise input signals with high eigenvalue ratios (EVRs). Since its introduction in 1993, the turbo (supercharge) principle has been successfully applied in error-correction decoding and has become very popular because it approaches the theoretical limits of communication capacity predicted five decades ago by Shannon. The turbo principle applied to an LMS ADF is analogous to the turbo principle used in error-correction decoders: first, an "interleaver" is used to minimize cross-correlation; second, an iterative improvement that re-uses the same data set several times is implemented with the standard LMS algorithm. Results for six different interleaver schemes for EVRs in the range 1-100 are presented.
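A compact sketch of the data re-use half of the idea (the interleaver stage is omitted here): repeated LMS passes over the same record refine the coefficient estimate. The signal, step size, and tap count below are illustrative, not taken from the paper.

```python
import numpy as np

def lms_pass(x, d, w, mu):
    """One pass of the standard LMS update over the whole record:
    e[n] = d[n] - w.x[n], then w <- w + mu * e[n] * x[n]."""
    taps = len(w)
    for n in range(taps, len(x)):
        xv = x[n - taps:n][::-1]             # most recent sample first
        e = d[n] - w @ xv
        w = w + mu * e * xv
    return w

def turbo_lms(x, d, taps, mu=0.02, passes=4):
    """Turbo-style iterative improvement: run the same LMS over the same
    data set several times, warm-starting from the previous pass."""
    w = np.zeros(taps)
    for _ in range(passes):
        w = lms_pass(x, d, w, mu)
    return w
```

With a noiseless desired signal generated by a known FIR filter, the estimate converges to the true coefficients.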
Spectroscopic modeling of nitro group in explosives
Doris Núñez-Quintero, Samuel P. Hernández-Rivera
Calibration is the process of constructing a mathematical model relating the output of an instrument to properties of samples. Prediction is the process of using the model to predict properties of a sample given an instrument output. A statistical characterization of explosive substances, based on Discriminant Analysis (DA), allows the spectroscopic properties of the nitro group to be characterized and classified in accordance with the molecular structure. This characterization should help in predicting the nitro-group effect in other explosive substances and be a primary factor in sensor design based on IR and Raman spectroscopies. The goal of this work was to develop a statistical model of the spectroscopic behavior of the nitro group in nitrogen-based explosives (nitroexplosives) using DA. The variables used in this analysis were the Raman shift and IR wavenumber (spectral locations) of the symmetric and asymmetric modes of the nitro group; a second group of variables comprised the absorbance and the Raman scattering intensity. The KBr pellet technique was used for running the samples in FTIR. The samples were measured at 4 cm-1 resolution with 32 scans, and spectra were collected using Bruker OPUS version 4.2 software in the range of 400-4000 cm-1. Raman spectra were collected from neat samples deposited on stainless steel slides (for solids) and in melting-point capillary tubes. Raman analysis was carried out using a confocal Renishaw Raman microspectrometer, Model RM2000, equipped with a solid-state diode laser emitting at a wavelength of 532 nm as the excitation source. A statistical model using forty-five explosives is presented.
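The DA step can be illustrated with a two-class Fisher discriminant on synthetic band-position features. The band centers below are hypothetical placeholders, not measured values from the paper.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher linear discriminant: the weight vector w maximizes
    between-class over within-class scatter; threshold at the midpoint
    of the projected class means."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw + 1e-9 * np.eye(len(m0)), m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold
```

A sample projects to `x @ w`; values above the threshold are assigned to the second class.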
PCNN pre-processor stage for the optical broadcast neural network processor
Horacio Lamela, Marta Ruiz-Llata, Matías Jiménez, et al.
In this paper we investigate a hardware Pulse Coupled Neural Network (PCNN) to be used as the preprocessing stage of a vision system whose processing core is the Optical Broadcast Neural Network (OBNN) Processor [Optical Engineering Letters 42 (9), 2488 (2003)]. The objective is to obtain synchronous temporal patterns, with fixed pulse rates, from a given spatial pattern; these temporal patterns should remain constant regardless of the position and orientation of the spatial pattern in the input image.
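A minimal software PCNN (illustrative; the paper targets hardware) shows the key mechanism: a dynamic threshold that decays between pulses and jumps after each one, so brighter stimulus regions fire earlier and the linking term synchronizes neighbors. All constants here are assumptions for the sketch.

```python
import numpy as np

def neighbor_sum(Y):
    """8-neighborhood sum of the previous pulse image (linking input)."""
    P = np.pad(Y, 1)
    return (P[:-2, :-2] + P[:-2, 1:-1] + P[:-2, 2:] +
            P[1:-1, :-2]                + P[1:-1, 2:] +
            P[2:, :-2]   + P[2:, 1:-1]  + P[2:, 2:])

def pcnn(S, steps=10, beta=0.2, v_theta=5.0, a_theta=0.3):
    """Simplified PCNN: internal activity U = S * (1 + beta * L) fires a
    pulse whenever it exceeds a threshold that decays exponentially and
    jumps by v_theta after each firing. The firing times encode intensity."""
    Y = np.zeros_like(S)
    theta = np.ones_like(S)
    pulses = []
    for _ in range(steps):
        L = neighbor_sum(Y)
        U = S * (1.0 + beta * L)
        Y = (U > theta).astype(float)
        theta = np.exp(-a_theta) * theta + v_theta * Y
        pulses.append(Y.copy())
    return pulses
```

With a half-bright, half-dim stimulus, the bright half fires on an earlier step than the dim half, giving the kind of temporal code the OBNN core would consume.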
Light-weight cryptography for resource constrained environments
We give a survey of "light-weight" encryption algorithms designed to maximise security within tight resource constraints (limited memory, power consumption, processor speed, chip area, etc.). The target applications of such algorithms are RFIDs, smart cards, mobile phones, etc., which may store, process, and transmit sensitive data but do not always support conventional strong algorithms. Existing algorithms are surveyed and a new proposal is introduced.
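As a concrete example of the genre, here is XTEA, a classic light-weight 64-bit block cipher with a 128-bit key whose tiny code and memory footprint suit constrained devices. This is standard XTEA, not the paper's new proposal.

```python
def xtea_encrypt(v, key, rounds=32):
    """XTEA encryption: v = (v0, v1) 32-bit halves, key = 4 x 32-bit words."""
    v0, v1 = v
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & mask
        s = (s + delta) & mask
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & mask
    return v0, v1

def xtea_decrypt(v, key, rounds=32):
    """Exact inverse of xtea_encrypt: run the rounds backwards."""
    v0, v1 = v
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * rounds) & mask
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & mask
        s = (s - delta) & mask
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & mask
    return v0, v1
```

All state fits in a handful of 32-bit words and the round uses only shifts, XORs, and additions, which is precisely what makes such ciphers attractive for RFID-class hardware.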
Parallel distributed RSOM tree for pattern classification
Data clustering requires high-performance computing to obtain results in a reasonable amount of time, particularly for large-scale databases. A feasible approach to reducing processing time is implementation on scalable parallel computers; accordingly, the RSOM tree method is proposed. First, a SOM net is trained as the root node. Second, all training samples are allocated to the output nodes of the root according to the winner-take-all (WTA) criterion. Third, discriminability parameters are calculated from the samples at each node: if a node is discriminable, it is SOM-split and labeled as an internal node; otherwise it becomes an end node and the splitting terminates there. All nodes are recursively checked or split until none meets the discrimination criteria, yielding an RSOM tree. In this process, several control factors, e.g., inter-class and intra-class discrimination criteria, layer number, sample number, and correct classification ratio, are obtained from the data at each node; accordingly, a good choice of RSOM structure can be made and generalization capability is assured. The RSOM tree method is parallel by nature and can be implemented on scalable parallel computers, including high-performance computer clusters and local or global computer networks. The former grows ever more attractive despite its expense, while the latter is far more economical and to a great extent belongs to grid computing. On both kinds of hardware, the performance of the method is tested with large feature data sets extracted from a large collection of video pictures.
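The first two steps, training a SOM node and allocating samples by the WTA criterion, can be sketched as follows (a small 1-D SOM with a shrinking Gaussian neighborhood; in the full RSOM tree, any node whose samples remain discriminable would itself be SOM-split recursively).

```python
import numpy as np

def train_som(data, n_nodes=2, epochs=40, lr=0.5, seed=0):
    """Train a small 1-D SOM: winner-take-all plus a Gaussian neighborhood
    that shrinks over the epochs. Prototypes are seeded along the data range."""
    rng = np.random.default_rng(seed)
    w = np.linspace(data.min(axis=0), data.max(axis=0), n_nodes)
    for epoch in range(epochs):
        frac = epoch / epochs
        sigma = max(0.5 * (1.0 - frac), 0.05)     # shrinking neighborhood
        eta = lr * (1.0 - frac) + 0.01            # decaying learning rate
        for x in data[rng.permutation(len(data))]:
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            h = np.exp(-((np.arange(n_nodes) - winner) ** 2) / (2 * sigma ** 2))
            w += eta * h[:, None] * (x - w)
    return w

def allocate(data, w):
    """WTA criterion: each sample goes to its nearest SOM node."""
    return np.array([np.argmin(np.linalg.norm(w - x, axis=1)) for x in data])
```

On two well-separated clusters, the two prototypes settle near the cluster means, so the WTA allocation reproduces the cluster membership.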
A novel multi-strategy watermark embedding technique
Gui Feng, QiWei Lin
Various digital watermarking schemes have been proposed to address the issue of ownership identification. Early work on digital watermarking focused on information hiding in either the spatial domain or the transform domain, and the recovered watermark was usually not as satisfactory as demanded. Multi-description techniques for watermark embedding have been proposed more recently. Inspired by these, a novel blind digital image watermarking algorithm based on a multi-strategy approach is put forward in this paper. The watermark is embedded in the multi-resolution wavelet transform domain of the original image. Building on spread-spectrum techniques, the algorithm combines three new techniques to improve robustness, imperceptibility, and security. First, multi-intensity embedding is adopted: because the watermark intensity influences the wavelet coefficients differently in different resolution layers, using a different intensity in each layer yields stronger anti-attack ability and imperceptibility. Second, a spread-spectrum code is applied to permute the original watermark, establishing a new scrambled watermark; by reducing the effect of a partially destroyed watermark image, this technique resists clipping, and it also improves security, since the scrambling password is needed to extract the original watermark. Third, interlaced watermark embedding is introduced, in which several copies of the watermark are interlaced at different resolutions in the wavelet transform domain. As a result, the recovered watermark shows better performance after various attacks.
Smart Firmwares II
icon_mobile_dropdown
Performance evaluation based on cluster validity indices in medical imaging
Exploratory data-driven methods such as unsupervised clustering are considered to be hypothesis-generating procedures, complementary to the hypothesis-led statistical inferential methods in functional magnetic resonance imaging (fMRI). The major problem with clustering real bioimaging data is deciding how many clusters are present. This motivates the application of cluster validity techniques to quantitatively evaluate the results of the clustering algorithm. In this paper, we apply three different cluster validity techniques, namely Kim's index, the Calinski-Harabasz index, and the intraclass index, to the evaluation of the clustering results of fMRI data. The benefits and major limitations of these cluster validity techniques are discussed based on the results achieved on several datasets.
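Of the three indices, the Calinski-Harabasz index is the simplest to state: the ratio of between- to within-cluster dispersion, normalized by degrees of freedom, with higher values indicating a better partition. A direct implementation (illustrative, on toy data rather than fMRI):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """Calinski-Harabasz index:
    CH = [B / (k - 1)] / [W / (n - k)], where B is the between-cluster
    and W the within-cluster sum of squared distances."""
    n, k = len(X), len(np.unique(labels))
    overall = X.mean(axis=0)
    B = W = 0.0
    for c in np.unique(labels):
        Xc = X[labels == c]
        center = Xc.mean(axis=0)
        B += len(Xc) * np.sum((center - overall) ** 2)
        W += np.sum((Xc - center) ** 2)
    return (B / (k - 1)) / (W / (n - k))
```

A correct labeling of two well-separated clusters scores far higher than a shuffled labeling of the same points, which is exactly how the index is used to pick the number of clusters.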
Classification of infrasound events using Hermite polynomial preprocessing and radial basis function neural networks
A method of infrasonic signal classification using Hermite polynomials for signal preprocessing is presented. Infrasound is a low-frequency acoustic phenomenon typically in the frequency range 0.01 Hz to 10 Hz. Data collected from infrasound sensors are preprocessed using a Hermite orthogonal-basis inner-product approach. The Hermite-preprocessed signals yield feature vectors that are used as input to a parallel bank of radial basis function neural networks (RBFNN) for classification. The spread and threshold values for each RBFNN are then optimized. The robustness of this classification method is tested by introducing unknown events outside the training set and counting errors. The Hermite preprocessing method is shown to have superior performance compared to a standard cepstral preprocessing method.
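The Hermite preprocessing amounts to projecting each sensor record onto the first few orthonormal Hermite functions; those inner products form the feature vector fed to the RBFNN bank. A sketch of that step (the grid and expansion order are illustrative choices):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def hermite_function(n, t):
    """Orthonormal Hermite function psi_n(t) (physicists' convention):
    psi_n = H_n(t) exp(-t^2/2) / sqrt(2^n n! sqrt(pi))."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / sqrt(2.0 ** n * factorial(n) * sqrt(pi))
    return norm * hermval(t, coeffs) * np.exp(-t ** 2 / 2)

def hermite_features(signal, t, order=6):
    """Inner products of the signal with the first `order` Hermite
    functions, approximated by a Riemann sum on a uniform grid."""
    dt = t[1] - t[0]
    return np.array([np.sum(signal * hermite_function(n, t)) * dt
                     for n in range(order)])
```

Because the basis is orthonormal, feeding psi_2 itself through the feature extractor recovers (approximately) the unit vector e_2.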
A zero-watermarking algorithm based on DWT and chaotic modulation
Hanqiang Cao, Hua Xiang, Xutao Li, et al.
Digital watermarking is an efficacious technique for protecting the copyright and ownership of digital information. In traditional image watermarking methods, however, the original image is distorted to some degree. To address this problem, a new approach, the zero-watermarking technique, has been proposed. Zero-watermarking departs from the traditional practice of embedding the watermark into the image, leaving the watermarked image distortion-free, and can thus resolve the conflict between invisibility and robustness. In this paper, a digital image zero-watermarking method based on the discrete wavelet transform (DWT) and chaotic modulation is proposed. The algorithm consists of watermark embedding and detection processes. The embedding process is as follows: first, the original image is decomposed to three levels in the wavelet domain; second, some low-frequency wavelet coefficients of the original image are selected, the selection being randomized by chaotic modulation; third, the character of the selected coefficients is used to construct the character watermark, each coefficient being compared with its adjacent coefficient. The extraction process is the inverse, with the locations of the coefficients to be examined again determined by the chaotic sequence. Experimental results show that the method is invisible and robust against image processing operations such as median filtering, JPEG compression, additive Gaussian noise, cropping, and rotation attacks. If the initial value of the chaos is unknown, the character watermark cannot be extracted correctly.
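The chaotic coefficient selection and comparison-based watermark construction can be sketched as follows. A logistic map stands in for the chaotic modulation, and the DWT step is omitted so plain coefficients are passed in; this is an illustrative reduction, not the authors' exact scheme.

```python
import numpy as np

def logistic_indices(seed, count, n, mu=3.99):
    """Chaotically choose `count` distinct coefficient indices out of n
    using a logistic-map orbit keyed by `seed` (the secret initial value)."""
    x, chosen, seen = seed, [], set()
    while len(chosen) < count:
        x = mu * x * (1.0 - x)
        idx = int(x * n) % n
        if idx not in seen:
            seen.add(idx)
            chosen.append(idx)
    return chosen

def zero_watermark(coeffs, seed, bits=32):
    """Character watermark: compare each chaotically selected coefficient
    with the next selected one. Nothing is embedded in the image itself."""
    idx = logistic_indices(seed, bits + 1, len(coeffs))
    c = np.asarray(coeffs, dtype=float)
    return np.array([1 if c[idx[i]] >= c[idx[i + 1]] else 0
                     for i in range(bits)])
```

The same seed always regenerates the same watermark, while a different (wrong) seed visits different coefficients and yields an unrelated bit pattern, mirroring the paper's key-dependence property.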