- Front Matter: Volume 7703
- Wavelet Pioneer Award
- Wavelets Applications I
- Wavelets Leadership Award
- Wavelets Applications II
- ICA Unsupervised Learning Award
- Unsupervised Learning and ICA
- Learning Theory and Applications
- Nano-Engineering
- Smart Sensor Systems and Miniaturization
- Biomedical Wellness Award for Applying Computational Intelligence to Image Diagnosis
- Biomedical Wellness Applications
- Wellness Smart Sensors
- System Biology Pioneer Award
- System Biology
- Smart Sensors Applications
Front Matter: Volume 7703
This PDF file contains the front matter associated with SPIE
Proceedings Volume 7703, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Wavelet Pioneer Award
Detection and classification of virus from electron micrograms
Jan-Olov Strömberg
I will present a PhD project where Diffusion Geometry is used in the classification
of virus particles in cell nuclei from electron micrographs. I will
give a very short introduction to Diffusion Geometry and discuss the main
classification steps. Some preliminary results from a master's thesis will be presented.
Wavelets Applications I
SERS-based viral fingerprinting: current capabilities and challenges
J. D. Driskell,
J. L. Abell,
R. A. Dluhy,
et al.
Silver nanorod array substrates are fabricated by oblique angle deposition and characterized for optimal SERS
performance. Using UV-visible-NIR spectrophotometry we show that the nanorods have a transverse surface
plasmon resonance mode at ~357 nm and a broad absorbance spanning 600-800 nm when excited along the
longitudinal direction. We demonstrate that SERS enhancement is optimized using an excitation wavelength of 633
or 785 nm. The large area uniformity in SERS signal (<10% variation) and reproducibility among preparations
(<15% variation) provides a unique opportunity for SERS-based whole-organism fingerprinting. Egg-prepared avian
influenza virus and clinical sputum samples of human influenza virus were investigated to demonstrate SERS-based
detection of a virus in a complex sample matrix and to assess the effect of different background matrices on the
detection of similar viruses.
Asymmetric GT of social networks
Web citation indexes are computed from a data vector X collected from the frequency of user accesses and from
citations weighted by other sites' popularities, then modified by financial sponsorship in a proprietary manner. Because
this indexing determines the information retrieved by the public, it should be made transparent and accountable in at least
two ways. One should balance the inbound linkages pointing at a specific i-th site, called its popularity (see paper for equation),
against the outbound linkages (see paper for equation), called its risk factor, before the release of new information, as a kind of
environmental impact analysis. The relationship between these two factors cannot be assumed to be symmetric (undirected), as it is in
many mainstream Graph Theory (GT) models.
Video surveillance of passengers with mug shots
The authority officer relies on facial mug shots to spot suspects in crowds. At a checkpoint, the facial
displays and printouts operate at low resolution and in fixed poses. Thus, a database-cued video is recommended for real-time
surveillance, with Aided-Target Recognition (AiTR) prompting the inspector to take a closer second look at a specific
passenger. Taking advantage of commercially available face-detection Systems on Chips (SoC) running at 0.04 sec, we develop a
fast and smart algorithm to sort facial poses among passengers. With sorted facial poses, we can increase the overlapping POFs (pixels on faces)
when matching mug shots at arbitrary poses. Lemma: We define the long exposure as the time average
of facial poses and the short exposure as a single facial pose in one frame of 30 Hz video. The fiducial triangle is defined
by the two eyes and the nose tip. Theorem (Self-Reference Matched Filtering, Szu et al., Opt. Comm. 1980; JOSA, 1982, applied to
facial pose): If we replace the desired output of the Wiener filter with the long exposure, then the filter can select a short
exposure as the normal view. Corollary: Given a short exposure as the normal view, the fiducial triangle can determine all
poses from left to right and top to bottom.
Wavelets Leadership Award
Recognizing persons by their iris patterns
John Daugman
Iris recognition provides real-time, high confidence identification of persons by analysis of the random
patterns that are visible within the iris of an eye from some distance. Because the iris is a protected
internal organ whose random texture is epigenetic and stable over the lifespan, it can serve as a
living password. Recognition decisions are made with confidence levels high enough to support rapid
exhaustive searches through national-sized databases. The principle that underlies these algorithms is
the failure of an efficient test of statistical independence involving more than 200 degrees-of-freedom,
based on phase sequencing each iris pattern with quadrature 2D wavelets. Different persons always
pass this test of statistical independence, but images from the same iris almost always fail this
test of independence. Database search speeds are around 1 million persons per second per CPU.
Data from 200 billion cross-comparisons between different eyes will be presented in this talk, using
a database consisting of 632,500 iris images acquired in the United Arab Emirates in a networked
national border-crossing security system which performs, every day, about 9 billion iris comparisons
using these algorithms. Current research efforts with this technology aim to make it more tolerant
of difficult conditions of iris capture, such as "iris on the move," at a distance, and off-axis.
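The decision rule described above can be sketched numerically. The toy below uses random bit strings standing in for Gabor-phase iris codes and a hypothetical `hamming_distance` helper; it illustrates only the statistical-independence test, not Daugman's production algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2048  # bits per iris code

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the jointly valid (unmasked) bits."""
    valid = mask_a & mask_b
    return np.count_nonzero((code_a ^ code_b) & valid) / np.count_nonzero(valid)

# Codes from two unrelated eyes behave like independent coin flips,
# so their expected distance is near 0.5 (the test of independence "passes").
a = rng.random(N) < 0.5
b = rng.random(N) < 0.5
mask = np.ones(N, dtype=bool)
d_unrelated = hamming_distance(a, b, mask, mask)

# A re-imaged same eye: the same code with a few percent bit noise,
# giving a distance far below a ~0.33 decision threshold (the test "fails").
same = a ^ (rng.random(N) < 0.05)
d_same = hamming_distance(a, same, mask, mask)
```

The masks model eyelid and specular-reflection occlusions, which exclude bits from the comparison.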
Wavelets Applications II
Exploiting iris dynamics
The human iris is a circular curtain over the light-entrance pupil, controlled directly by the intensity of blue light
sensed by photosensitive ganglion cells in the retina. The iris dynamic is remarkable in that the pupil can
shrink concentrically along the radial direction by a factor of 4, from 8 mm to 2 mm, and constantly oscillates with about a
half-second periodicity. Pupil dilation and contraction cause the iris texture to undergo nonlinear deformation with discrete
components and minutia features. Thus, iris recognition must be scale invariant with respect to the pupil dynamics. We propose
a Mandelbrot fractal dimension count of minutia iris details, at different intensity thresholds, in dilation-invariant
wedge boxes, formed at specific angular sizes but spatially varying over the four 90° quadrants due to cellular growth
under gravity. Despite the concentric dynamic, we have sought an invariant fractal dimensionality in the circular
direction and discovered a non-isotropic effect that departs from the simple Richardson fractal law. Furthermore, we
choose an optimum Rayleigh criterion λ/D, matching the finest robust resolution scale for a given lens aperture D and
illumination wavelength λ, for potential application at a distance as part of a comprehensive biometric suite including the iris.
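The fractal dimension count mentioned above is typically estimated by box counting. The sketch below assumes a simple binary image and power-of-two box sizes; it is illustrative only and is not the paper's wedge-box estimator:

```python
import numpy as np

def box_counting_dimension(img, sizes=(1, 2, 4, 8, 16, 32)):
    """Minkowski (box-counting) dimension of a binary image: count the
    occupied s-by-s boxes N(s) and fit log N(s) ~ -D log s."""
    h, w = img.shape
    counts = []
    for s in sizes:
        # Reduce each s x s block to "occupied or not".
        blocks = img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity checks: a filled square is 2-D, a single straight line is 1-D.
square = np.ones((64, 64), dtype=bool)
line = np.zeros((64, 64), dtype=bool)
line[32, :] = True
```

Thresholding the iris texture at several intensities and counting within angular wedges, as the abstract proposes, would replace the full-image grid here with wedge-shaped boxes.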
Multiscale directional filtering of noisy InSAR phase images
In this work, we present a new approach for the problem of interferometric phase noise reduction in synthetic
aperture radar interferometry based on the shearlet representation. Shearlets provide a multidirectional and
multiscale decomposition that has advantages over standard filtering methods when dealing with noisy phase
fringes. Using a shearlet decomposition of a noisy phase image, we can adaptively estimate a phase representation
in a multiscale and anisotropic fashion. Such denoised phase interferograms can be used to provide
much better digital elevation maps (DEM). Experiments show that this method performs significantly better
than many competitive methods.
ICA Unsupervised Learning Award
Proactive learning for artificial cognitive systems
Artificial Cognitive Systems (ACS) will be developed for human-like functions such as vision, audition, inference,
and behavior. In particular, computational models and artificial HW/SW systems will be devised for Proactive Learning
(PL) and Self-Identity (SI). The PL model provides bilateral interactions between a robot and an unknown environment
(people, other robots, cyberspace). Situation awareness in an unknown environment requires receiving audiovisual
signals and accumulating knowledge. If the accumulated knowledge is not enough, the PL model should improve itself
through the internet and other resources. Human-oriented decision making also requires the robot to have self-identity and
emotion. Finally, the developed models and system will be mounted on a robot for a human-robot co-existing society.
The developed ACS will be tested against a new Turing Test for situation awareness. The test problems will consist of several video clips, and the performance of the ACSs will be compared against that of humans at several levels of cognitive ability.
Unsupervised Learning and ICA
Electronic tongue system for remote multi-ion sensing using blind source separation and wireless sensor network
This paper presents an electronic tongue system with blind source separation (BSS) and wireless sensor network (WSN)
for remote multi-ion sensing applications. Electrochemical sensors, such as ion-sensitive field-effect transistor (ISFET)
and extended-gate field-effect transistor (EGFET), only provide the combined concentrations of all ions in aqueous
solutions. Mixed hydrogen and sodium ions in chemical solutions are observed by means of H+ ISFET and H+ EGFET
sensor array. The BSS extracts the concentration of individual ions using independent component analysis (ICA). The
parameters of ISFET and EGFET sensors serve as a priori knowledge that helps solve the BSS problem. Using wireless
transceivers, the ISFET/EGFET modules are realized as wireless sensor nodes. The integration of WSN technology into
our electronic tongue system with BSS capability makes distant multi-ion measurement viable for environment and
water quality monitoring.
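The BSS step can be illustrated with a toy FastICA separation of two synthetic "ion" signals. The mixing matrix and waveforms below are invented for illustration and do not model real ISFET/EGFET responses:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
t = np.linspace(0, 8, n)

# Two independent, non-Gaussian "ion concentration" source signals.
s1 = np.sign(np.sin(3 * t))      # square wave
s2 = ((t * 2) % 1) - 0.5         # sawtooth
S = np.c_[s1, s2]

# Each sensor observes a different linear combination of the ions.
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# Whiten the observations (zero mean, identity covariance).
X = X - X.mean(axis=0)
d, E = np.linalg.eigh(np.cov(X, rowvar=False))
Z = X @ E @ np.diag(d ** -0.5) @ E.T

# FastICA with deflation and a tanh contrast function.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        wx = Z @ w
        w_new = (Z * np.tanh(wx)[:, None]).mean(axis=0) - (1 - np.tanh(wx) ** 2).mean() * w
        # Deflate against previously found components (Gram-Schmidt).
        w_new -= W[:i].T @ (W[:i] @ w_new)
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1) < 1e-9:
            w = w_new
            break
        w = w_new
    W[i] = w

recovered = Z @ W.T  # columns: estimated sources, up to sign and permutation
```

ICA recovers the sources only up to sign and ordering; the paper's use of known ISFET/EGFET sensor parameters as a priori knowledge resolves exactly this kind of ambiguity.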
Feature extraction and selection strategies for automated target recognition
Several feature extraction and selection methods for an existing automatic target recognition (ATR) system
using JPL's Grayscale Optical Correlator (GOC) and Optimal Trade-Off Maximum Average Correlation Height
(OT-MACH) filter were tested using MATLAB. The ATR system is composed of three stages: a cursory
region-of-interest (ROI) search using the GOC and OT-MACH filter, a feature extraction and selection stage, and a
final classification stage. Feature extraction and selection concern transforming potential target data into more
useful forms as well as selecting important subsets of that data which may aid in detection and classification.
The strategies tested were built around two popular extraction methods: Principal Component Analysis (PCA)
and Independent Component Analysis (ICA). Performance was measured based on the classification accuracy and free-response receiver operating characteristic (FROC) output of a support vector machine (SVM) and a neural network (NN) classifier.
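As a sketch of the PCA extraction stage (with fabricated stand-in data, not the actual GOC/OT-MACH ROI chips), the principal components can be obtained from the SVD of the centered data matrix:

```python
import numpy as np

def pca_features(X, k):
    """Project samples (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the components.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

rng = np.random.default_rng(2)
# 200 fake 8x8 "ROI chips", flattened to 64-D, whose variance lives
# almost entirely in 3 latent directions plus a little sensor noise.
latent = rng.standard_normal((200, 3)) * np.array([5.0, 3.0, 1.0])
basis = np.linalg.qr(rng.standard_normal((64, 3)))[0]
X = latent @ basis.T + 0.05 * rng.standard_normal((200, 64))

features, components = pca_features(X, 3)
```

The 3-D feature vectors would then feed an SVM or neural-network classifier in place of the raw 64-D chips.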
Border security and surveillance system with smart cameras and motes in a Sensor Web
In this paper we describe a prototype surveillance system that leverages smart sensor motes, intelligent video, and
Sensor Web technologies to aid in large area monitoring operations and to enhance the security of borders and
critical infrastructures. Intelligent video has emerged as a promising tool amid growing concern about border
security and vulnerable entry points. However, numerous barriers exist that limit the effectiveness of surveillance
video in large area protection, such as the number of cameras needed to provide coverage, large volumes of data to
be processed and disseminated, lack of smart sensors to detect potential threats, and limited bandwidth to capture and
distribute video data. We present a concept prototype that addresses these obstacles by employing a Smart Video
Node in a Sensor Web framework. Smart Video Node (SVN) is an IP video camera with automated event detection
capability. SVNs are cued by inexpensive sensor motes to detect the existence of humans or vehicles. Based on
sensor motes' observations cameras are slewed in to observe the activity and automated video analysis detects
potential threats to be disseminated as "alerts". Sensor Web framework enables quick and efficient identification of
available sensors, collects data from disparate sensors, automatically tasks various sensors based on observations or
events received from other sensors, and receives and disseminates alerts from multiple sensors. The prototype
system is implemented by leveraging intuVision's intelligent video, Northrop Grumman's sensor motes and
SensorWeb technologies. Implementation of a deployable system with Smart Video Nodes and sensor motes within
the SensorWeb platform is currently underway. The final product will have many applications in commercial,
government and military systems.
Smart sensing surveillance system
Unattended ground sensor (UGS) networks have been widely used in remote battlefield and other tactical applications
over the last few decades due to the advances of the digital signal processing. The UGS network can be applied in a
variety of areas including border surveillance, special force operations, perimeter and building protection, target
acquisition, situational awareness, and force protection. In this paper, a highly distributed, fault-tolerant, and
energy-efficient Smart Sensing Surveillance System (S4) is presented to efficiently provide 24/7, all-weather security
operation in a situation management environment. The S4 is composed of a number of distributed nodes that collect,
process, and disseminate heterogeneous sensor data. Nearly all S4 nodes have passive sensors to provide rapid
omnidirectional detection. In addition, Pan-Tilt-Zoom (PTZ) electro-optical/infrared (EO/IR) cameras are integrated into selected nodes
to track objects and capture associated imagery. These camera-equipped S4 nodes provide advanced
on-board digital image processing capabilities to detect and track specific objects. The imaging detection operations
include unattended object detection, human feature and behavior detection, and configurable alert triggers. In the
S4, all the nodes are connected with a robust, reconfigurable, LPI/LPD (Low Probability of Intercept / Low Probability of
Detection) wireless mesh network using ultra-wideband (UWB) RF technology, which provides an ad-hoc, secure mesh
network and the capability to relay network information, communicate, and pass situational-awareness messages. The
S4 utilizes a Service Oriented Architecture such that remote applications can interact with the S4 network and use the
specific presentation methods. The S4 capabilities and technologies have great potential for both military and civilian
applications, enabling highly effective security support tools for improving surveillance activities in densely crowded
environments and near perimeters and borders. The S4 is compliant with Open Geospatial Consortium - Sensor Web
Enablement (OGC-SWE®) standards. It would be directly applicable to solutions for emergency response personnel,
law enforcement, and other homeland security missions, as well as in applications requiring the interoperation of sensor
networks with handheld or body-worn interface devices.
Learning Theory and Applications
Learning one-to-many mapping functions for audio-visual integrated perception
In noisy environments, human speech perception utilizes visual lip-reading as well as audio phonetic classification.
This audio-visual integration may be done by combining the two sensory features at an early stage. Top-down
attention may also integrate the two modalities. For sensory feature fusion, we introduce mapping functions between the
audio and visual manifolds. In particular, we present an algorithm that provides a one-to-many mapping function for the
video-to-audio mapping. Top-down attention is also presented to integrate both the sensory features and the classification
results of both modalities, which is able to explain the McGurk effect. Each classifier is separately implemented by a
Hidden Markov Model (HMM), but the two classifiers are combined at the top level and interact through top-down attention.
Multiple optimal learning factors for feed-forward networks
Sanjeev S. Malalur,
Michael T. Manry
A batch training algorithm for feed-forward networks is proposed which uses Newton's method to estimate a vector of
optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify
the hidden unit's input weights. Linear equations are then solved for the network's output weights. Elements of the new
method's Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the total network's Hessian. In
several examples, the new method performs better than backpropagation and conjugate gradient, with similar numbers of
required multiplies. The method performs as well as or better than Levenberg-Marquardt, with several orders of
magnitude fewer multiplies due to the small size of its Hessian.
CORDIC algorithms for SVM FPGA implementation
Support Vector Machines are currently among the best classification algorithms used in a wide range of applications.
Their ability to extract a classification function from a limited number of learning examples while keeping the structural
risk low has proved to be a clear alternative to other neural networks.
However, the calculations involved in computing the kernel, and the repetition of this process for all support vectors in the
classification problem, are certainly intensive, requiring time and power to function correctly. This
could be a drawback in applications with limited resources or tight time constraints. Therefore, simple algorithms
circumventing this problem are needed.
In this paper we analyze an FPGA implementation of an SVM which uses a CORDIC algorithm to simplify the
calculation of a specific kernel, greatly reducing the time and hardware requirements needed for classification and
allowing for powerful in-field portable applications. The algorithm and its calculation capabilities are shown. The full
SVM classifier using this algorithm is implemented in an FPGA and its in-field use assessed for high-speed, low-power
classification.
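For reference, the classic rotation-mode CORDIC iteration uses only shifts, adds, and a small angle table, which is what makes it attractive for FPGA kernels. The floating-point sketch below illustrates the idea (a hardware version would use fixed-point arithmetic and precomputed constants); it is not the paper's kernel-specific variant:

```python
import math

def cordic_sincos(theta, iterations=32):
    """Rotation-mode CORDIC: drive the residual angle z to zero with
    micro-rotations by atan(2^-i); in hardware the multiplications by
    2^-i are just bit shifts. Valid for theta in [-pi/2, pi/2]."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))  # fixed overall CORDIC gain
    x, y, z = 1.0, 0.0, theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0.0 else -1.0   # rotate toward z = 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x / gain, y / gain           # (cos(theta), sin(theta))

c, s = cordic_sincos(0.7)
```

Because the gain is a constant for a fixed iteration count, hardware implementations fold it into a single final scaling (or into the initial value of x).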
Nano-Engineering
CONTACT: sensors for aerospace and Fano-resonance photonic crystal cavities
CONTACT or Consortium for Nanomaterials for Aerospace Commerce and Technology is a cooperative
program between the Air Force Research Laboratory and seven Texas universities focused on four research areas
in aerospace. This paper summarizes recent developments in one of those areas, sensors, for eventual use in
aircraft and spacecraft. We report direct measurement of spectrally selective absorption properties of PbSe and
PbS colloidal quantum dots (CQDs) in Si nanomembrane photonic crystal cavities on flexible plastic
polyethylene terephthalate (PET) substrates. The interaction of CQD absorption with photonic crystal Fano
resonances is presented both analytically and experimentally for use in wavelength selective sensors.
Nano-photonics: past and present
Nanotechnology operates at the scale of 10⁻⁹ meters, located at the mesoscopic transition, which admits
both classical mechanics (CM) and quantum mechanics (QM) descriptions, bridging ten orders of magnitude of
phenomena between the microscopic world of a single atom at 10⁻¹⁰ meters and the macroscopic world at
meters. However, QM principles aid the understanding of any unusual property at the nanotech level. The
other major difference between nano-photonics and other forms of optics is that the nano-scale is not very
'hands on'. For the most part, we will not be able to see the components with our naked eyes, but will be
required to use nanotech imaging tools.
The role of extensive variables in nanoscience
We discuss the principle of superposition relative to the definition of extensivity in thermodynamics and discuss
how it can be extended to systems that do not obey the linear principle of superposition. This leads to the concept
of generalized superposition, of which there are multiple types defined in the paper. Generalized superposition
can be used to explore the breakdown of linearity at the nano-scale, which leads to deeper understanding of
nano-materials and nano-thermodynamics.
Smart Sensor Systems and Miniaturization
Watermarking strategies for IP protection of micro-processor cores
Reuse-based design has emerged as one of the most important methodologies for integrated circuit design, with reusable
Intellectual Property (IP) cores enabling the optimization of company resources due to reduced development time and
costs. This is of special interest in the Field-Programmable Logic (FPL) domain, which mainly relies on automatic
synthesis tools. However, this design methodology has brought to light the issue of intellectual property protection (IPP)
for those modules, with most forms of protection in the EDA industry being difficult to translate to this domain. IP core
watermarking has emerged as a tool for IP core protection. Although watermarks may be inserted at different levels of
the design flow, watermarking Hardware Description Language (HDL) descriptions has proved to be a robust and
secure option. In this paper, a new framework for the protection of μP cores is presented. The protection scheme is
derived from the IPP@HDL procedure and has been adapted to the singularities of μP cores, overcoming the problems
of digital signature extraction in such systems. Additionally, a hardware-activation feature has been introduced,
allowing the distribution of μP cores in a "demo" mode and a later activation that can be easily performed by the
customer by executing a simple program. Application examples show that the additional hardware introduced for protection
and/or activation has no effect on performance and incurs only a modest area increase.
Nios II hardware acceleration of the epsilon quadratic sieve algorithm
The quadratic sieve (QS) algorithm is one of the most powerful algorithms for factoring the large composite numbers
used in RSA cryptographic systems. The structure of the QS algorithm seems to be a good fit for FPGA
acceleration. Our new ε-QS algorithm further simplifies the hardware architecture, making it an even better candidate for
C2H acceleration. This paper presents our design results, in FPGA resources and performance, for implementing very long
arithmetic on the Nios microprocessor platform with C2H acceleration for different libraries (GMP, LIP, FLINT, NRMP)
and QS architecture choices for factoring 32-2048 bit RSA numbers.
Optimization of a hardware implementation for pulse coupled neural networks for image applications
Pulse Coupled Neural Networks (PCNNs) are a very useful tool for image processing and visual applications, since they have
the advantage of being invariant to image changes such as rotation, scale, and certain distortions. Among other characteristics,
the PCNN transforms a given input image into a temporal representation which can easily be analyzed later for pattern
recognition. The structure of a PCNN, though, makes it necessary to determine all of its parameters very carefully in
order for it to function optimally, so that the responses to the kinds of inputs it will be subjected to are clearly discriminated,
allowing for easy and fast post-processing that yields useful results. This tweaking of the system is a taxing process.
In this paper we analyze and compare two methods for modeling PCNNs. A purely mathematical model is programmed,
and a similar circuit model is also designed. Both are then used to determine the optimal values of the several
parameters of a PCNN (gain, threshold, time constants for feed-in and threshold, and linking) leading to an optimal design
for image recognition. The results are compared for usefulness, accuracy, and speed, as well as the performance and time
requirements for fast and easy design, thus providing a tool for future ease of management of a PCNN for different tasks.
Design and simulation of optoelectronic complementary dual neural elements for realizing a family of normalized vector 'equivalence-nonequivalence' operations
The advantages of equivalence models (EM) of neural networks (NN) are shown in this paper. EMs are based on vector-matrix
procedures with the basic operations of continuous neuro-logic: the normalized vector operations "equivalence",
"nonequivalence", "autoequivalence", and "autononequivalence". The capacity of NNs based on EMs and their
modifications, including auto- and heteroassociative memories for 2D images, exceeds the number of
neurons several times over. Such neuro-paradigms are very promising for processing, recognizing, and storing large and strongly
correlated images. A family of "normalized equivalence-nonequivalence" neuro-fuzzy logic operations based on the
generalized operations of fuzzy negation, t-norm, and s-norm is elaborated. A biologically motivated concept and time-pulse
encoding principles of continuous-logic photocurrent reflections, together with sample-storage devices with pulse-width
photoconverters, have allowed us to design generalized structures for realizing the family of normalized linear
vector "equivalence"-"nonequivalence" operations. Simulation results show that the processing time in such circuits does
not exceed a few microseconds. The circuits are simple, have low supply voltage (1-3 V), low power consumption
(milliwatts), low input signal levels (microwatts), and an integrated construction, and they address the problems of
interconnection and cascading.
Biomedical Wellness Award for Applying Computational Intelligence to Image Diagnosis
Medical diagnosis imaging systems: image and signal processing applications aided by fuzzy logic
First, we describe an automated procedure for segmenting an MR image of a human brain based on fuzzy logic for
diagnosing Alzheimer's disease. The intensity thresholds for segmenting the whole brain of a subject are automatically
determined by finding the peaks of the intensity histogram. After these thresholds are evaluated in a region growing, the
whole brain can be identified. Next, we describe a procedure for decomposing the obtained whole brain into the left and
right cerebral hemispheres, the cerebellum and the brain stem. Our method then identifies the whole brain, the left
cerebral hemisphere, the right cerebral hemisphere, the cerebellum and the brain stem. Secondly, we describe a
trans-skull sonography system that can visualize the shape of the skull and brain surface from any point, to examine skull
fractures and some brain diseases. We employ fuzzy signal processing to determine the skull and brain surfaces. A
phantom model, an animal model with soft tissue, an animal model with brain tissue, and human subjects' foreheads are
examined with our system. The shapes of the skin surface, skull surface, skull bottom, and brain tissue surface are all
successfully determined.
Biomedical Wellness Applications
Optical determination of cardiovascular health at a distance
Although contemporary contact methods of measuring cardiovascular health are accurate and applicable, a noncontact
optical sensor that detects these same parameters of health and eliminates the inconvenience of patient
contact would be useful to the medical community. Techniques of mapping and imaging blood flow with laser
speckle contrast imaging have shown promise as a non-contact health sensor. This paper explores using a laser
speckle detector to detect blood pressure, pulse pressure waves, and pulse wave velocity at a standoff. The laser
speckle detector was able to detect pulse pressure waves and with further development, may be able to measure
pulse wave velocity and blood pressure.
Full-field, nonscanning, optical imaging for perfusion indication
Laser speckle imaging (LSI) has been gaining popularity for the past few years. Like other optical imaging modalities such
as optical coherence tomography (OCT), orthogonal polarization spectroscopy (OPS), and laser Doppler imaging (LDI), LSI
utilizes nonionizing radiation. In LSI, blood flow velocity is obtained by analyzing, temporally or spatially, laser speckle (LS)
patterns generated when an expanded laser beam illuminates the tissue. The advantages of LSI are that it is fast, does not
require scanning, and provides full-field LS images to extract real-time, quantitative hemodynamic information on subtle
changes in the tissue vasculature. For medical applications, LSI has been used for obtaining blood velocities in human retina,
skin flaps, wounds, and cerebral and sublingual areas. When coupled with optical fibers, LSI can be used for endoscopic
measurements for a variety of applications. This paper describes the application of LSI in retinal, sublingual, and skin flap
measurements. Evaluation of retinal hemodynamics provides very important diagnostic information, since the human retina
offers direct optical access to both the central nervous system (CNS) and afferent and efferent CNS vasculature. The
performance of an LSI-based fundus imager for measuring retinal hemodynamics is presented. Sublingual microcirculation
may have utility for sepsis indication, since inherent in organ injury caused by sepsis is a profound change in microvascular
hemodynamics. Sublingual measurement results using an LSI scope are reported. A wound imager for imaging LS patterns
of wounds and skin flaps is described, and results are presented.
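The LS analysis referred to above is commonly based on the local speckle contrast K = σ/μ computed over small windows: flow blurs the speckle within the exposure and lowers K. The sketch below, with synthetic speckle statistics, is illustrative and is not the authors' instrument pipeline:

```python
import numpy as np

def speckle_contrast(img, w=7):
    """Local speckle contrast K = sigma / mean over w-by-w windows."""
    pad = w // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    windows = np.lib.stride_tricks.sliding_window_view(padded, (w, w))
    mean = windows.mean(axis=(-1, -2))
    return windows.std(axis=(-1, -2)) / mean

rng = np.random.default_rng(3)
# Static tissue: fully developed speckle (exponential intensity, K near 1).
static = rng.exponential(1.0, (64, 64))
# A perfused "vessel": many independent speckle realizations averaged
# within one exposure, which drives K down toward 1/sqrt(N).
vessel = rng.exponential(1.0, (25, 64, 64)).mean(axis=0)

k_static = speckle_contrast(static).mean()
k_vessel = speckle_contrast(vessel).mean()
```

Mapping K (or 1/K²) pixel by pixel produces the full-field perfusion maps described for retinal, sublingual, and skin-flap imaging, with no scanning required.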
Can we detect influenza?
This paper will give background information on the structure of different influenza viruses and address the remote
detection of viral particles using Surface Enhanced Raman Spectroscopy (SERS). Also, we will mathematically predict
how small to create the silver nanorods, measured by the diameter of the rod, in order for there to be a discernable
enhancement in the Raman signal due to quantum properties of the rods. Finally, the future of nanotechnology in optics
as it relates to medical applications will be addressed, highlighting a few of the most important possible future
applications.
Wellness Smart Sensors
Computer-aided diagnosis and lipidomics analysis to detect and treat breast cancer
Multi-modality diagnosis techniques are increasingly replacing traditional medical imaging for breast cancer
detection. Newly emerging advances in both intelligent cancer detection systems and lipidomics technologies
offer an excellent opportunity to detect tumors and to understand regulation at the molecular level in many
diseases such as cancer. In this paper, we present a detailed computer-aided diagnosis (CAD) system combining
motion artefact reduction with automated feature extraction and classification, and a novel data mining approach
for visualization of gene therapy leading to apoptosis in U87 MG glioblastoma cells, a secondary tumor of breast
cancer. The achieved results show that the CAD system represents a robust and integrative tool for the reliable
detection of small contrast-enhancing lesions. Graph-clustering methods are introduced as powerful correlation networks
which enable a simultaneous exploration and visualization of co-regulation in glioblastoma data. These new
paradigms are providing unique "fingerprints" by revealing how the intricate interactions at the lipidome level
can be employed to induce apoptosis (cell death) and are thus opening a new window to biomedical frontiers.
Denoising of x-ray imagery with spatially varying estimates of noise variance
We describe a way to use a block-matching 3-D denoising algorithm to reduce noise in x-ray imagery. We
first filter an image multiple times using different estimates of the noise variance. From a simple estimate of the
denoised image, we then estimate the noise variance at each pixel. Using this approach, we obtain improved results
compared to using a single estimate of the noise variance; even a small number of quantization levels for the noise
estimates yields an improvement.
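A minimal sketch of the two-stage idea: filter the image once per assumed noise level, then keep at each pixel the result whose assumed level is closest to a local estimate. The `wiener_shrink` stand-in denoiser and the windowed variance estimator below are our own simplifications; the paper uses the BM3D algorithm.

```python
import numpy as np

def wiener_shrink(img, sigma, k=5):
    """Stand-in denoiser: local Wiener shrinkage toward a k x k mean,
    tuned by the assumed noise standard deviation `sigma`."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    mean = win.mean(axis=(-2, -1))
    var = win.var(axis=(-2, -1))
    gain = np.maximum(var - sigma ** 2, 0) / np.maximum(var, 1e-12)
    return mean + gain * (img - mean)

def local_variance(img, k=5):
    """Per-pixel noise-variance estimate from a k x k neighborhood."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    return win.var(axis=(-2, -1))

def denoise_spatially_varying(noisy, sigmas, smooth=wiener_shrink):
    """Filter once per assumed sigma, then keep, per pixel, the result
    whose sigma is closest to the local noise estimate."""
    stack = np.stack([smooth(noisy, s) for s in sigmas])   # (n, H, W)
    est = np.sqrt(local_variance(noisy))                   # (H, W)
    idx = np.abs(np.asarray(sigmas)[:, None, None] - est).argmin(axis=0)
    return np.take_along_axis(stack, idx[None], axis=0)[0]

# Toy image: low noise on the left half, high noise on the right half.
rng = np.random.default_rng(0)
noisy = np.zeros((32, 64))
noisy[:, :32] += rng.normal(0, 0.05, (32, 32))
noisy[:, 32:] += rng.normal(0, 0.5, (32, 32))
out = denoise_spatially_varying(noisy, sigmas=[0.05, 0.5])
```

Because the selection is per pixel, each half of the toy image is denoised with the sigma that actually matches its noise level, which is the effect the abstract reports over a single global estimate.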
Comparative analysis of filtered back-projection algorithms for optoacoustic imaging
Various types of cancer remain the second leading cause of death in the world; as a consequence, the detection of these
tumors is of vital importance. Optoacoustic (OA) imaging, a novel imaging technique, offers high contrast and
resolution for detecting them by measuring the pressure waves generated by tissues exposed to optical energy. Several
algorithms based on back-projection (BP) techniques have been suggested for processing OA images in conjunction with
signal filtering. In this paper, we compare several BP techniques in combination with different classes of filtering. We
apply these techniques first directly to a numerically generated sample image and then to the laser-digitized image of a
tissue phantom, obtaining in both cases the best results in resolution and contrast for a wavelet-based filter.
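The back-projection step shared by the compared algorithms can be illustrated with a bare delay-and-sum reconstruction on a toy geometry. The sensor ring, sound speed, and sampling rate below are illustrative values of our choosing, and no filtering is applied, so this is the unfiltered baseline rather than any of the paper's filtered variants.

```python
import numpy as np

def delay_and_sum(signals, sensor_pos, grid, c=1500.0, fs=20e6):
    """For each grid point, sum each sensor's signal at the sample index
    matching the acoustic time of flight from that point to the sensor."""
    img = np.zeros(len(grid))
    for sig, pos in zip(signals, sensor_pos):
        d = np.linalg.norm(grid - pos, axis=1)             # distances (m)
        idx = np.clip(np.rint(d / c * fs).astype(int), 0, len(sig) - 1)
        img += sig[idx]
    return img

# Toy scene: a point absorber at the origin, 32 sensors on a 1 cm ring.
angles = np.linspace(0, 2 * np.pi, 32, endpoint=False)
sensor_pos = 0.01 * np.column_stack([np.cos(angles), np.sin(angles)])
tof = 0.01 / 1500.0 * 20e6                                 # time of flight, in samples
pulse = np.exp(-0.5 * ((np.arange(400) - tof) / 2.0) ** 2)
signals = [pulse] * 32                                     # identical by symmetry
xs = np.linspace(-0.005, 0.005, 101)
grid = np.column_stack([xs, np.zeros_like(xs)])            # scan line through the origin
img = delay_and_sum(signals, sensor_pos, grid)
```

The reconstruction peaks at the absorber's position (the center of the scan line); the filtered variants compared in the paper differ in how the signals are preconditioned before this summation.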
System Biology Pioneer Award
The hidden impact of inter-individual genomic variations on cellular function
Constantin Georgescu,
Hamid Bolouri
An analysis of the degree of genomic variation between two individual genomes suggests that there may be
considerable biochemical differences among individuals. Examination of DNA sequence variations in 14
canonical signaling pathways and Monte-Carlo simulation modeling suggest that the kinetic and
quantitative behavior of signaling pathways in many individuals may be significantly perturbed from the
'healthy' norm.
Signal transduction pathways in some individuals may suffer context-specific failures, or they may function
normally but fail easily in the face of additional environmental perturbations or somatic mutations. These
findings argue for new systems biology approaches that can predict pathway status in individuals using
personal genome sequences and biomarker data.
System Biology
Novel systems biology and computational methods for lipidomics
The analysis and interpretation of large lipidomic data sets requires the development of new dynamical-systems,
data mining, and visualization approaches. Traditional techniques are insufficient to study the co-regulations and
stochastic fluctuations observed in lipidomic networks and the resulting experimental data. The emphasis of this
paper lies in the presentation of novel approaches for dynamical analysis and projection representation.
Different paradigms describing kinetic models and providing context-based information are described, and at
the same time their interrelations are revealed. These qualitative and quantitative methods are applied to the
lipidomic analysis of U87 MG glioblastoma cells. The achieved results provide a more detailed insight into the
data structure of the lipidomic system.
Variable patch sizes for normalized cross correlation in image pairs
Image registration plays an important role in many image understanding applications, such as stereo vision for depth
recovery and biometric recognition. Correlation-based matching is often used in such registration applications. The
performance of correlation-based matching algorithms such as the Normalized Cross Correlation (NCC) depends on the
image patch size used in the computation. We present an algorithm that adapts the patch size so that the patch is
distinguishable from other patches from the same image. The new method seeks the best balance point between
matching performance and computational cost in NCC computation. Experimental results demonstrate the performance of our method.
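The adapt-until-distinguishable idea might be sketched as follows. The growth schedule and the best-minus-second-best margin test are our stand-in for the paper's balance criterion, which the abstract does not specify.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_adaptive(left, right, x, y, sizes=(5, 9, 13), margin=0.1):
    """Match pixel (y, x) of `left` along the same row of `right`, growing
    the patch until the best NCC beats the runner-up by `margin`
    (our assumed distinctiveness test)."""
    best_col, k = None, None
    for k in sizes:
        h = k // 2
        if min(x, y) - h < 0 or y + h + 1 > left.shape[0] or x + h + 1 > left.shape[1]:
            break                                   # patch no longer fits
        patch = left[y - h:y + h + 1, x - h:x + h + 1]
        scores = np.array([ncc(patch, right[y - h:y + h + 1, c - h:c + h + 1])
                           for c in range(h, right.shape[1] - h)])
        top = np.argsort(scores)
        best_col = top[-1] + h
        if scores[top[-1]] - scores[top[-2]] >= margin:
            break                                   # distinctive enough: stop growing
    return best_col, k

# Demo: the right image is the left shifted by 3 columns.
rng = np.random.default_rng(1)
left = rng.normal(size=(21, 40))
right = np.roll(left, 3, axis=1)
col, size = match_adaptive(left, right, x=20, y=10)
```

On this textured pair the smallest patch already matches unambiguously, so the search stops early, which is the cost saving the abstract refers to.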
A subspace learning approach to evaluating the performance of image fusion algorithms
The fusion of multi-spectral images is an important pre-processing operation for scientists and engineers
seeking to design robust detection, recognition and identification (DRI) systems. Due to the multitude of
pixel-level fusion algorithms available, there is a pressing need for reliable metrics to analyze their performance.
Most recently, subspace learning methods have been applied to the field of information fusion for object recognition
and classification. This paper aims to extend the capabilities of existing nonlinear dimensionality reduction
algorithms to a new area: evaluating the performance of image fusion algorithms. We prove that distances
between points in the low-dimensional embedding are essentially equivalent to the results given by estimating the
amount of information transferred from source images to the resultant fused images (normalized mutual information).
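Normalized mutual information, the reference measure of information transfer here, can be estimated from a joint histogram. The 2*I(A;B)/(H(A)+H(B)) normalization below is one common convention and may differ from the paper's exact formulation.

```python
import numpy as np

def normalized_mi(a, b, bins=32):
    """Normalized mutual information of two images, estimated from a
    joint intensity histogram: NMI = 2*I(A;B) / (H(A) + H(B))."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    hxy = -(pxy[nz] * np.log(pxy[nz])).sum()       # joint entropy
    hx = -(px[px > 0] * np.log(px[px > 0])).sum()  # marginal entropies
    hy = -(py[py > 0] * np.log(py[py > 0])).sum()
    mi = hx + hy - hxy                              # mutual information
    return 2.0 * mi / (hx + hy)

rng = np.random.default_rng(3)
a = rng.normal(size=(64, 64))
b = rng.normal(size=(64, 64))
```

Identical images score 1 and independent images score near 0, so the measure gives a bounded scale against which the embedding distances can be compared.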
Smart Sensors Applications
Recognizing foreground-background interaction
Can the background affect a foreground target in distant, low-quality imagery? If it does, it might occur in our mind, or perhaps
it may represent a snapshot of our early vision. An affirmative answer, one way or another, may affect our current understanding
of this phenomenon, and potentially its related applications. How can we be sure about this in the psychophysical sense? We
begin with the physiology of our brain's homeostasis, whose isothermal equilibrium is characterized by the minimum of the
Helmholtz isothermal free energy: A = U - T0S ≥ 0, where T0 = 37°C, the Boltzmann entropy S = kB ln(W), and U is the
unknown internal energy to be computed.
Study on the technique of distinguishing rock from coal based on statistical analysis of fast Fourier transform
An algorithm for distinguishing rock from coal based on statistical analysis of the Fast Fourier Transform (FFT) is
presented, for use at mechanized caving coal faces. First, eight groups of sound signals, sampled at 8192 samples/sec
during caving, are transformed by the FFT. Second, the FFT results are analyzed: the ratios of low-frequency energy to
high-frequency energy (ER) are calculated, together with their variances (Var). Third, typical values (EV = ER * Var)
are calculated for the sound of coal striking the coal-transporting armor plate, of rock striking the plate, and of
mixed coal and rock striking the plate. Finally, a threshold for distinguishing rock from coal is derived from these
typical values and used to determine the right moment for caving. The experimental results show that the proposed
technique effectively captures the different characteristics of the sampled signals, and that, with the threshold
suitably adjusted, the different impact sounds of coal, rock, and their mixture can be reliably distinguished. The
algorithm can therefore be used to improve miners' productivity and promote the construction of the digital mine.
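The ER computation can be sketched with NumPy. The 1024 Hz band split and the reading of EV as a group's mean ratio times its variance are our assumptions, since the abstract does not state either exactly.

```python
import numpy as np

def energy_ratio(signal, fs=8192, split_hz=1024):
    """Ratio of low-band to high-band spectral energy (ER).
    The split frequency is an assumed value, not the paper's."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    low = spectrum[freqs < split_hz].sum()
    high = spectrum[freqs >= split_hz].sum()
    return low / high

def typical_value(group):
    """EV = ER * Var over a group of signals: our reading of the
    abstract's definition (mean ratio times its variance)."""
    ers = np.array([energy_ratio(s) for s in group])
    return ers.mean() * ers.var()

# Toy signals: a low-pitched "coal" impact and a high-pitched "rock" impact.
fs = 8192
t = np.arange(fs) / fs
rng = np.random.default_rng(2)
coal = np.sin(2 * np.pi * 200 * t) + 0.01 * rng.normal(size=fs)
rock = np.sin(2 * np.pi * 3000 * t) + 0.01 * rng.normal(size=fs)
```

A coal-dominated signal yields ER well above 1 and a rock-dominated one well below 1, which is what makes a single threshold on EV workable.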
Evaluation model for the implementation results of mine law based on neural network
To evaluate the implementation results of the mine safety production law, an evaluation model based on a neural
network is presented. In this model, 63 indicators that effectively describe the implementation of the law are
proposed, and an evaluation system is developed using the model and the 63 indicators. The evaluation results show
that the proposed method has high accuracy: we can effectively estimate the score of a mine for its compliance with
the safety law, and the estimates are scientifically credible and impartial.
Biomimetic novelty detection
In the crowded rain forest, how do animals locate camouflaged prey that resemble the environment, such as walking sticks? Birds will observe the suspected stick over a period of time to determine whether its behavior matches that of a tree. If the stick's behavior exceeds normal tree behavior, such as moving faster than the other branches, the bird will determine that it is actually a walking stick and not a tree branch. Once this determination is made, it will prey on the insect. Studying this natural process of novelty detection, present in a variety of animals from birds to turkey vultures, can be beneficial for numerous human applications. The spatiotemporal novelty detector will be developed using self-referencing matched filters to go beyond auto-regression. This algorithm is more than the usual change detector, for it detects behavior that exceeds the value predicted from past data, e.g. the bird that identifies a walking stick in the forest. A scalar model is presented; however, the approach can be expanded to multiple modalities for more detailed applications. Some of these applications include gas pipeline leak detection, persistent surveillance, and the creation of a smart sensor web.
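The self-referencing idea, flagging samples that exceed their value predicted from past data, can be sketched with a plain autoregressive predictor. This is a simplification of the paper's matched-filter formulation; the model order, training length, and threshold below are illustrative choices.

```python
import numpy as np

def ar_novelty(series, p=4, train=200, k=5.0):
    """Fit an order-p autoregressive predictor on an initial 'normal'
    window, then flag samples whose prediction error exceeds k standard
    deviations of the training residual."""
    x = np.asarray(series, dtype=float)
    # lagged design matrix over the assumed-normal training window
    X = np.column_stack([x[i:train - p + i] for i in range(p)])
    y = x[p:train]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid_std = np.std(y - X @ coef)
    flags = np.zeros(len(x), dtype=bool)
    for t in range(train, len(x)):
        pred = x[t - p:t] @ coef        # one-step prediction from the past
        flags[t] = abs(x[t] - pred) > k * resid_std
    return flags

# Normal behavior: a slow oscillation; the "walking stick" is a spike at t=300.
rng = np.random.default_rng(0)
t = np.arange(400)
x = np.sin(0.2 * t) + 0.05 * rng.normal(size=400)
x[300] += 3.0
flags = ar_novelty(x)
```

The detector stays quiet while the signal behaves as predicted and fires only where behavior departs from its own past, the scalar analogue of the bird's decision.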