Proceedings Volume 9871

Sensing and Analysis Technologies for Biomedical and Cognitive Applications 2016

Liyi Dai, Yufeng Zheng, Henry Chu, et al.

Volume Details

Date Published: 6 October 2016
Contents: 5 Sessions, 27 Papers, 0 Presentations
Conference: SPIE Commercial + Scientific Sensing and Imaging 2016
Volume Number: 9871

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 9871
  • Biomedical Wellness Applications
  • Smart Sensor Systems and Applications
  • Learning Theory and Applications
  • Large Data Analysis
Front Matter: Volume 9871
This PDF file contains the front matter associated with SPIE Proceedings Volume 9871, including the Title Page, Copyright information, Table of Contents, and Conference Committee listing.
Biomedical Wellness Applications
Deconstructing and constructing innate immune functions using molecular sensors and actuators
Kester Coutinho, Takanari Inoue
White blood cells such as neutrophils and macrophages are made competent for chemotaxis and phagocytosis — the dynamic cellular behaviors that are hallmarks of their innate immune functions — by the reorganization of complex biological circuits during differentiation. Conventional loss-of-function approaches have revealed that more than 100 genes participate in these cellular functions, and we have begun to understand the intricate signaling circuits that are built up from these gene products. We now appreciate: (1) that these circuits come in a variety of flavors — so that we can make a distinction between genetic circuits, metabolic circuits and signaling circuits; and (2) that they are usually so complex that the assumption of multiple feedback loops, as well as that of crosstalk between seemingly independent pathways, is now routine. It has not escaped our notice, however, that just as physicists and electrical engineers have long been able to disentangle complex electric circuits simply by repetitive cycles of probing and measuring electric currents using a voltmeter, we might similarly be able to dissect these intricate biological circuits by incorporating equivalent approaches in the fields of cell biology and bioengineering. Existing techniques in biology for probing individual circuit components are unfortunately lacking, so that the overarching goal of drawing an exact circuit diagram for the whole cell — complete with kinetic parameters for connections between individual circuit components — is not yet in near sight. My laboratory and others have thus begun the development of a new series of molecular tools that can measurably investigate the circuit connectivity inside living cells, as if we were doing so on a silicon board. In these proceedings, I will introduce some of these techniques, provide examples of their implementation, and offer a perspective on directions moving forward.
Remote heartbeat signal detection from visible spectrum recordings based on blind deconvolution
Balvinder Kaur, Sophia Moses, Megha Luthra, et al.
While recent advances have shown that it is possible to acquire a signal equivalent to the heartbeat from visible spectrum video recordings of the human skin, extracting the heartbeat’s exact timing information from it, for the purpose of heart rate variability analysis, remains a challenge. In this paper, we explore two novel methods to estimate the remote cardiac signal peak positions, aiming at a close representation of the R-peaks of the ECG signal. The first method is based on curve fitting (CF) using a modified filtered least mean square (LMS) optimization, and the second method is based on system estimation using blind deconvolution (BDC). To prove the efficacy of the developed algorithms, we compared the results obtained against the ground truth (ECG) signal. Both methods achieved a low relative error between the peaks of the two signals. This work, performed under an IRB-approved protocol, provides initial proof that blind deconvolution techniques can be used to estimate timing information of the cardiac signal that is closely correlated to that obtained by traditional ECG. The results show promise for further development of remote sensing of cardiac signals for the purpose of remote vital sign and stress detection in medical, security, military and civilian applications.
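As a rough illustration of the evaluation step described above, and not of the paper's CF/LMS or blind-deconvolution algorithms, the sketch below band-pass filters a remotely acquired waveform, detects its peaks with standard scipy tools, and reports the mean timing offset against reference ECG R-peaks. The signal, frame rate, and filter band are hypothetical stand-ins.

```python
# Illustrative sketch only: band-pass a remote "blood wave" signal, detect its
# peaks, and compare their timing against reference ECG R-peak times.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_bw_peaks(bw, fs, lo=0.7, hi=3.5):
    """Band-pass to the plausible heart-rate band and find peaks (times in s)."""
    b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, bw)
    # Require peaks to be at least ~0.4 s apart (about a 150 bpm upper bound).
    peaks, _ = find_peaks(filtered, distance=int(0.4 * fs))
    return peaks / fs

def mean_timing_error(bw_peak_times, ecg_peak_times):
    """Match each detected peak to the nearest ECG R-peak and average the offsets."""
    errors = [np.min(np.abs(ecg_peak_times - t)) for t in bw_peak_times]
    return float(np.mean(errors))

if __name__ == "__main__":
    fs = 30.0                                        # hypothetical video frame rate
    t = np.arange(0, 30, 1 / fs)
    ecg_peaks = np.arange(0.5, 30, 0.85)             # synthetic R-peak times
    bw = sum(np.exp(-((t - p) ** 2) / 0.02) for p in ecg_peaks)
    bw += 0.2 * np.random.randn(t.size)              # camera noise
    est = detect_bw_peaks(bw, fs)
    print("mean timing error (s):", mean_timing_error(est, ecg_peaks))
```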
Robustness of remote stress detection from visible spectrum recordings
Balvinder Kaur, Sophia Moses, Megha Luthra, et al.
In our recent work, we have shown that it is possible to extract high fidelity timing information of the cardiac pulse wave from visible spectrum videos, which can then be used as a basis for stress detection. In that approach, we used both heart rate variability (HRV) metrics and the differential pulse transit time (dPTT) as indicators of the presence of stress. One of the main concerns in this analysis is its robustness in the presence of noise, as the remotely acquired signal, which we call the blood wave (BW) signal, is degraded with respect to the signal acquired using contact sensors. In this work, we discuss the robustness of our metrics in the presence of multiplicative noise. Specifically, we study the effects of subtle motion due to respiration and of changes in illumination levels due to light flickering on the BW signal, the HRV-driven features, and the dPTT. Our sensitivity study involved both Monte Carlo simulations and experimental data from human facial videos, and indicates that our metrics are robust even under moderate amounts of noise. The generated results will help the remote stress detection community develop requirements for visible-spectrum-based stress detection systems.
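A minimal Monte Carlo sensitivity sketch in the same spirit, not the authors' simulation, is shown below: a clean synthetic pulse signal is corrupted by multiplicative illumination flicker, and the degradation of a simple HRV metric (SDNN of inter-beat intervals) is averaged over trials. The flicker frequency, noise levels, and metric choice are all hypothetical.

```python
# Illustrative Monte Carlo sketch: how SDNN (std. dev. of inter-beat intervals)
# degrades when a clean pulse signal is multiplied by illumination flicker.
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(0)
fs = 30.0
t = np.arange(0, 60, 1 / fs)
beat_times = np.cumsum(rng.normal(0.85, 0.03, size=70))     # irregular heartbeats
clean = sum(np.exp(-((t - b) ** 2) / 0.02) for b in beat_times)

def sdnn(signal):
    """Standard deviation of inter-beat intervals from detected peaks."""
    peaks, _ = find_peaks(signal, distance=int(0.4 * fs))
    return np.std(np.diff(peaks) / fs)

reference = sdnn(clean)
for flicker_amp in (0.0, 0.05, 0.1, 0.2):
    errs = []
    for _ in range(200):                                     # Monte Carlo trials
        phase = rng.uniform(0, 2 * np.pi)
        flicker = 1 + flicker_amp * np.sin(2 * np.pi * 0.3 * t + phase)
        errs.append(abs(sdnn(clean * flicker) - reference))
    print(f"flicker {flicker_amp:.2f}: mean SDNN error {np.mean(errs):.4f} s")
```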
Custom instruction for NIOS II processor FFT implementation for image processing
Sindhuja Sundararajan, Uwe Meyer-Baese, Guillermo Botella
Image processing can be considered as signal processing in two dimensions (2D). Filtering is one of the basic image processing operations. Filtering in the frequency domain is computationally faster than the corresponding spatial domain operation because the complex convolution process becomes a multiplication in the frequency domain. The popular 2D transforms used in image processing are the Fast Fourier Transform (FFT), the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT). Common image resolutions are 640x480, 800x600, 1024x768 and 1280x1024. As can be seen, image dimensions are generally not powers of 2, so power-of-2 FFT lengths are not required, and the required lengths cannot be built from shorter radix-2 Discrete Fourier Transform (DFT) blocks. Prime-factor FFT algorithms such as the Good-Thomas FFT algorithm simplify the implementation logic required for such applications and hence can be implemented with low area and power consumption while still meeting the timing constraints, thereby operating at high frequency. The Good-Thomas FFT algorithm, a Prime Factor Algorithm (PFA), provides a means of computing the DFT with the fewest multiplication and addition operations. We provide an Altera FPGA based NIOS II custom instruction implementation of the Good-Thomas FFT algorithm to improve system performance, and compare it with the same algorithm implemented completely in software.
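To make the prime-factor idea concrete, the hedged sketch below shows the Good-Thomas index mapping: an N-point DFT with N = N1*N2 and gcd(N1, N2) = 1 becomes an N1 x N2 two-dimensional DFT with no twiddle factors between the stages. The sizes used are examples only (640 = 5 x 128 factors the same way).

```python
# Sketch of the Good-Thomas (prime factor) mapping, verified against numpy's FFT.
import numpy as np
from math import gcd

def good_thomas_dft(x, N1, N2):
    N = N1 * N2
    assert len(x) == N and gcd(N1, N2) == 1
    t1 = pow(N2, -1, N1)          # N2^{-1} mod N1  (CRT coefficient)
    t2 = pow(N1, -1, N2)          # N1^{-1} mod N2
    # Chinese-remainder input map: scatter x into an N1 x N2 array.
    n1, n2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    y = x[(N2 * n1 + N1 * n2) % N]
    # Row and column DFTs; in hardware these would be short coprime-length blocks.
    Y = np.fft.fft(np.fft.fft(y, axis=0), axis=1)
    # Output map back to a 1-D spectrum.
    k1, k2 = np.meshgrid(np.arange(N1), np.arange(N2), indexing="ij")
    X = np.empty(N, dtype=complex)
    X[(N2 * t1 * k1 + N1 * t2 * k2) % N] = Y
    return X

if __name__ == "__main__":
    x = np.random.randn(15) + 1j * np.random.randn(15)       # 15 = 3 * 5
    print(np.allclose(good_thomas_dft(x, 3, 5), np.fft.fft(x)))  # True
```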
Real-time fetal ECG system design using embedded microprocessors
Uwe Meyer-Baese, Harikrishna Muddu, Sebastian Schinhaerl, et al.
The emphasis of this project lies in the development and evaluation of new robust and high fidelity fetal electrocardiogram (FECG) systems to determine the fetal heart rate (FHR). Recently several powerful algorithms have been suggested to improve the FECG fidelity. Until now it is unknown if these algorithms allow a real-time processing, can be used in mobile systems (low power), and which algorithm produces the best error rate for a given system configuration. In this work we have developed high performance, low power microprocessor-based biomedical systems that allow a fair comparison of proposed, state-of-the-art FECG algorithms. We will evaluate different soft-core microprocessors and compare these solutions to other commercial off-the-shelf (COTS) hardcore solutions in terms of price, size, power, and speed.
Predicting healthcare associated infections using patients' experiences
Michael A. Pratt, Henry Chu
Healthcare associated infections (HAI) are a major threat to patient safety and are costly to health systems. Our goal is to predict the HAI performance of a hospital using the patients' experience responses as input. We use four classifiers, viz. random forest, naive Bayes, artificial feedforward neural networks, and the support vector machine, to perform the prediction of six types of HAI. The six types include blood stream, urinary tract, surgical site, and intestinal infections. Experiments show that the random forest and support vector machine perform well across the six types of HAI.
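A hedged sketch of the comparison described above is given below: the same four classifier families predicting a binary HAI performance label from patient-experience scores under cross-validation. The data here are synthetic stand-ins; the feature count and sample size are hypothetical, not the actual survey data.

```python
# Compare random forest, naive Bayes, a feedforward ANN, and an SVM on
# placeholder "patient experience" features with 5-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Pretend these are survey-response features for ~3,000 hospitals.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           random_state=0)

models = {
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "naive Bayes": GaussianNB(),
    "feedforward ANN": make_pipeline(StandardScaler(),
                                     MLPClassifier(hidden_layer_sizes=(32,),
                                                   max_iter=1000, random_state=0)),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:15s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```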
Smart Sensor Systems and Applications
Acoustic angiography: a new high frequency contrast ultrasound technique for biomedical imaging
Sarah E. Shelton, Brooks D. Lindsey, Ryan Gessner, et al.
Acoustic angiography is a new approach to high-resolution contrast-enhanced ultrasound imaging enabled by ultra-broadband transducer designs. The high-frequency imaging technique provides both high resolution and signal separation from tissue, which does not produce significant harmonics in the same frequency range. This approach enables imaging of microvasculature in vivo with high resolution and signal-to-noise ratio, producing images that resemble x-ray angiography. The data show that acoustic angiography can provide important information about the presence of disease based on vascular patterns, and may enable a new paradigm in medical imaging.
Medical image reconstruction algorithm based on the geometric information between sensor detector and ROI
In this paper, we propose a new image reconstruction algorithm that considers the geometric information of the acoustic sources and the sensor detector, and we review the two-step reconstruction algorithm previously proposed based on the geometrical information of the ROI (region of interest), which accounts for the finite size of the acoustic sensor element. In the new image reconstruction algorithm, not only is the mathematical analysis very simple, but its software implementation is also very easy because the FFT is not needed. We verify the effectiveness of the proposed reconstruction algorithm through simulation results obtained using the Matlab k-Wave toolbox.
HEVC optimizations for medical environments
D. G. Fernández, A. A. Del Barrio, Guillermo Botella, et al.
HEVC/H.265 is the most interesting and cutting-edge topic in the world of digital video compression, allowing the required bandwidth to be reduced by half in comparison with the previous H.264 standard. Telemedicine services, and in general any medical video application, can benefit from these video encoding advances. However, HEVC is computationally expensive to implement. In this paper a method for reducing the HEVC complexity in the medical environment is proposed. The sequences that are typically processed in this context contain several homogeneous regions. Leveraging these regions, it is possible to simplify the HEVC flow while maintaining high quality. In comparison with the HM16.2 reference encoder, the encoding time is reduced by up to 75%, with a negligible quality loss. Moreover, the algorithm is straightforward to implement on any hardware platform.
Image quality (IQ) guided multispectral image compression
Yufeng Zheng, Genshe Chen, Zhonghai Wang, et al.
Image compression is necessary for data transportation; it saves both transfer time and storage space. In this paper, we focus our discussion on lossy compression. There are many standard image formats and corresponding compression algorithms, for example, JPEG (DCT, discrete cosine transform), JPEG 2000 (DWT, discrete wavelet transform), BPG (better portable graphics) and TIFF (LZW, Lempel-Ziv-Welch). The image quality (IQ) of the decompressed image is measured by numerical metrics such as root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and the structural similarity (SSIM) index. Given an image and a specified IQ, we investigate how to select a compression method and its parameters to achieve the expected compression. Our procedure consists of three steps. The first step is to compress a set of images of interest with varying parameters and compute their IQs for each compression method. The second step is to create several regression models per compression method after analyzing the IQ measurement versus the compression parameter over a number of compressed images. The third step is to compress the given image at the specified IQ using the compression method (JPEG, JPEG2000, BPG, or TIFF) selected according to the regression models. The requirement may be specified as a compression ratio (e.g., 100), in which case we select the compression method with the highest IQ (SSIM or PSNR); or it may be specified as an IQ metric (e.g., SSIM = 0.8 or PSNR = 50), in which case we select the compression method with the highest compression ratio. Our experiments on thermal (long-wave infrared) images (in gray scale) showed very promising results.
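The three-step procedure can be sketched for a single codec as follows; this is an illustrative sketch under stated assumptions, not the paper's implementation. Only JPEG is shown, the regression is a simple cubic polynomial fit, and the file name is a placeholder.

```python
# Sweep JPEG quality, fit SSIM-vs-quality, then invert the fit to hit a target SSIM.
import io
import numpy as np
from PIL import Image
from skimage.metrics import structural_similarity as ssim

def jpeg_roundtrip(img_gray, quality):
    """Compress a grayscale uint8 array to JPEG at the given quality and decode it."""
    buf = io.BytesIO()
    Image.fromarray(img_gray).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf))

def fit_quality_model(img_gray, qualities=range(10, 96, 5)):
    """Steps 1-2: measure SSIM at each quality and fit a cubic regression."""
    scores = [ssim(img_gray, jpeg_roundtrip(img_gray, q), data_range=255)
              for q in qualities]
    return np.polyfit(list(qualities), scores, deg=3)

def quality_for_target_ssim(coeffs, target, qualities=range(10, 96)):
    """Step 3: lowest quality (highest compression) whose predicted SSIM meets target."""
    predicted = np.polyval(coeffs, list(qualities))
    ok = [q for q, s in zip(qualities, predicted) if s >= target]
    return min(ok) if ok else max(qualities)

if __name__ == "__main__":
    img = np.asarray(Image.open("thermal_example.png").convert("L"))  # placeholder file
    coeffs = fit_quality_model(img)
    print("JPEG quality predicted to give SSIM >= 0.8:",
          quality_for_target_ssim(coeffs, target=0.8))
```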
Early breast cancer detection with digital mammograms using Haar-like features and AdaBoost algorithm
Yufeng Zheng, Clifford Yang, Alex Merkulov, et al.
Current computer-aided detection (CAD) methods are not sufficiently accurate in detecting masses, especially in dense breasts and/or for small masses (typically at their early stages). A mass may not be perceived when it is small and/or homogeneous with the surrounding tissue. Possible reasons for the limited performance of existing CAD methods are the lack of multiscale analysis and of unification of variant masses. The speed of CAD analysis is also important for field applications. We propose a new CAD model for mass detection that extracts simple Haar-like features for fast detection, uses the AdaBoost approach for feature selection and classifier training, applies cascading classifiers to reduce false positives, and utilizes multiscale detection for masses of varying sizes. In addition to Haar features, local binary patterns (LBP) and histograms of oriented gradients (HOG) are extracted and applied to mass detection. The performance of a CAD system can be measured by the true positive rate (TPR) and the number of false positives per image (FPI). We are collecting our own digital mammograms for the proposed research. The proposed CAD model will initially be demonstrated on mass detection, including architectural distortion.
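A minimal sketch of the Haar-plus-AdaBoost idea follows; it is not the proposed CAD model. Two- and three-rectangle Haar-like features are computed from an integral image and fed to AdaBoost with decision stumps, and the patches are synthetic "mass versus background" stand-ins with hypothetical sizes.

```python
# Haar-like features from an integral image, classified with AdaBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

def integral(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] using the integral image, handling borders."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_features(patch):
    """A few two- and three-rectangle Haar-like features over the whole patch."""
    ii = integral(patch)
    h, w = patch.shape
    left, right = box_sum(ii, 0, 0, h, w // 2), box_sum(ii, 0, w // 2, h, w)
    top, bottom = box_sum(ii, 0, 0, h // 2, w), box_sum(ii, h // 2, 0, h, w)
    mid_cols = box_sum(ii, 0, w // 3, h, 2 * w // 3)
    outer_cols = box_sum(ii, 0, 0, h, w // 3) + box_sum(ii, 0, 2 * w // 3, h, w)
    mid_rows = box_sum(ii, h // 3, 0, 2 * h // 3, w)
    outer_rows = box_sum(ii, 0, 0, h // 3, w) + box_sum(ii, 2 * h // 3, 0, h, w)
    return [left - right, top - bottom, mid_cols - outer_cols, mid_rows - outer_rows]

# Synthetic 24x24 patches: "masses" are bright blobs on noise.
rng = np.random.default_rng(1)
yy, xx = np.mgrid[0:24, 0:24]
blob = np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / 40.0)
patches, labels = [], []
for _ in range(400):
    has_mass = rng.random() < 0.5
    patch = rng.normal(0, 1, (24, 24)) + (3 * blob if has_mass else 0)
    patches.append(haar_features(patch))
    labels.append(int(has_mass))

clf = AdaBoostClassifier(n_estimators=100, random_state=0)  # default: decision stumps
print("CV accuracy:", cross_val_score(clf, patches, labels, cv=5).mean())
```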
Multispectral image fusion for vehicle identification and threat analysis
Unauthorized vehicles are an increasing threat to US facilities and locations, especially overseas. Vehicle detection is a well-studied area; however, vehicle identification and intention analysis have not been sufficiently investigated. We propose to use multispectral (visible, thermal) images (1) to match vehicle types against registered (or authorized) vehicle types; (2) to analyze vehicle movement patterns; and (3) to study methods for utilizing open information such as GPS and traffic information. When a vehicle is either permitted to access the facility or subjected to further manual inspection (scrutiny), the additional information (e.g., text) can be compared against the imagery features. We use information fusion (at the image, feature, and score levels) and neural networks to increase vehicle matching accuracy. For the vehicle movement patterns, we classify them as “normal” or “abnormal” using driving speed, acceleration, stops, zig-zags, etc. The methods would support directions in physical and human-based sensor fusion, patterns-of-life (POL) analysis, and context-enhanced information fusion.
Computer-aided diagnosis of diagnostically challenging lesions in breast MRI: a comparison between a radiomics and a feature-selective approach
Sebastian Hoffmann, Marc Lobbes, Ivo Houben, et al.
Diagnostically challenging lesions pose a challenge both for radiological reading and for current CAD systems. They are not well defined in either morphology (geometric shape) or kinetics (temporal enhancement) and thus complicate lesion detection and classification. Their strong phenotypic differences can be visualized by MRI. Radiomics represents a novel approach to achieving a detailed quantification of tumor phenotypes by analyzing a large number of image descriptors. In this paper, we apply a quantitative radiomics approach based on shape, texture and kinetic tumor features and evaluate it, in comparison with a reduced-order feature approach, in a computer-aided diagnosis system applied to diagnostically challenging lesions.
Learning Theory and Applications
Convergence rates of finite difference stochastic approximation algorithms part II: implementation via common random numbers
Liyi Dai
Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm when the gradient is approximated by finite differences generated through common random numbers. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences. In particular, it is shown that the rate, in the iteration number n, can be increased to n^(-2/5) in general, and to n^(-1/2), the best possible rate of stochastic approximation, in Monte Carlo optimization for a broad class of problems.
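The sketch below illustrates the common-random-numbers idea on a toy problem, not the paper's analysis: a Kiefer-Wolfowitz iteration whose central finite differences reuse the same noise draw for the plus and minus evaluations, which cancels much of the simulation noise. The objective and gain sequences are illustrative choices.

```python
# Kiefer-Wolfowitz with finite differences, with and without common random numbers.
import numpy as np

rng = np.random.default_rng(0)

def F(x, xi):
    """Noisy sample objective: a quadratic plus observation noise."""
    return np.sum((x - 1.0) ** 2) + xi @ x

def kw(x0, iters=2000, common=True):
    x = np.array(x0, dtype=float)
    d = x.size
    for n in range(1, iters + 1):
        a_n, c_n = 1.0 / n, 1.0 / n ** 0.25          # classical gain sequences
        g = np.zeros(d)
        for i in range(d):
            e = np.zeros(d)
            e[i] = c_n
            xi_plus = rng.normal(size=d)
            xi_minus = xi_plus if common else rng.normal(size=d)   # CRN toggle
            g[i] = (F(x + e, xi_plus) - F(x - e, xi_minus)) / (2 * c_n)
        x -= a_n * g
    return x

for common in (True, False):
    err = np.linalg.norm(kw(np.zeros(3), common=common) - 1.0)
    print("CRN" if common else "independent", "final error:", round(err, 4))
```

With common random numbers the noise in each difference quotient stays bounded as the perturbation c_n shrinks, which is the mechanism behind the improved rate discussed above.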
Selection of principal components based on Fisher discriminant ratio
Xiangyan Zeng, Masoud Naghedolfeizi, Sanjeev Arora, et al.
Principal component analysis transforms a set of possibly correlated variables into uncorrelated variables and is widely used as a technique for dimensionality reduction and feature extraction. In some applications of dimensionality reduction, the objective is to use a small number of principal components to represent most of the variation in the data. On the other hand, the main purpose of feature extraction is to facilitate subsequent pattern recognition and machine learning tasks, such as classification. Selecting principal components for classification tasks aims at more than dimensionality reduction: the capability of distinguishing different classes is another major concern, and components with larger eigenvalues do not necessarily have better distinguishing capabilities. In this paper, we investigate a strategy of selecting principal components based on the Fisher discriminant ratio. The ratio of between-class variance to within-class variance is calculated for each component, and the principal components are selected on that basis. To alleviate overfitting, which is common when few training data are available, we use a cross-validation procedure to determine the number of principal components. The main objective is to select the components that have large Fisher discriminant ratios so that adequate class separability is obtained; the number of selected components is determined by the classification accuracy on the validation data. The selection method is evaluated through face recognition experiments.
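A hedged sketch of this selection strategy is shown below: rank principal components by the Fisher discriminant ratio and choose how many to keep by validation accuracy. Synthetic data and a nearest-neighbour classifier stand in for the face recognition setup.

```python
# Rank PCA components by Fisher discriminant ratio, select by validation accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def fisher_ratio(scores, y):
    """Between-class variance over within-class variance for one component."""
    classes = np.unique(y)
    overall = scores.mean()
    between = sum((scores[y == c].mean() - overall) ** 2 for c in classes)
    within = sum(scores[y == c].var() for c in classes)
    return between / within

X, y = make_classification(n_samples=400, n_features=50, n_informative=10,
                           random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pca = PCA(n_components=30).fit(X_tr)
Z_tr, Z_val = pca.transform(X_tr), pca.transform(X_val)

ratios = np.array([fisher_ratio(Z_tr[:, j], y_tr) for j in range(Z_tr.shape[1])])
order = np.argsort(ratios)[::-1]                     # most discriminative first

best_k, best_acc = 1, 0.0
for k in range(1, len(order) + 1):
    idx = order[:k]
    acc = KNeighborsClassifier().fit(Z_tr[:, idx], y_tr).score(Z_val[:, idx], y_val)
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"selected {best_k} components, validation accuracy {best_acc:.3f}")
```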
Convergence rates of finite difference stochastic approximation algorithms part I: general sampling
Liyi Dai
Stochastic optimization is a fundamental problem that finds applications in many areas, including the biological and cognitive sciences. The classical stochastic approximation algorithm for iterative stochastic optimization requires gradient information about the sample objective function, which is typically difficult to obtain in practice. Recently there has been renewed interest in derivative-free approaches to stochastic optimization. In this paper, we examine the rates of convergence for the Kiefer-Wolfowitz algorithm and the mirror descent algorithm under various updating schemes that use finite differences as gradient approximations. The analysis is carried out under a general framework covering a wide range of updating scenarios. It is shown that the convergence of these algorithms can be accelerated by controlling the implementation of the finite differences.
Artificial neural networks (ANNs) versus partial least squares (PLS) for spectral interference correction for taking part of the lab to the sample types of applications: an experimental study
Z. Li, Vassili Karanassios
Interference, and in particular spectral interference, is a well-documented problem in optical emission spectrometry. For example, it is commonly encountered even when commercially available spectrometers with medium to high resolution are used (e.g., those with focal lengths of 0.75 m to 1 m). Such interference must be corrected. Although portable spectrometers are better suited for "taking part of the lab to the sample" types of applications, the effects of interference become more pronounced due to the short focal length of such spectrometers (e.g., 10 cm to 15 cm). We describe the use of Artificial Neural Network (ANN) and Partial Least Squares (PLS) methods for spectral interference correction.
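A minimal sketch of such a comparison, on synthetic rather than experimental spectra, is given below: low-resolution spectra contain an analyte emission line plus an overlapping interferent line, and PLS and a small ANN are both trained to recover the analyte amount. Wavelength grid, line positions, and model sizes are hypothetical.

```python
# PLS vs. a small ANN for correcting a spectral overlap on synthetic spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
wavelengths = np.linspace(0, 1, 60)                # coarse, "portable" resolution

def line(center, width):
    return np.exp(-((wavelengths - center) ** 2) / (2 * width ** 2))

n = 600
analyte = rng.uniform(0, 1, n)
interferent = rng.uniform(0, 1, n)
spectra = (analyte[:, None] * line(0.50, 0.05)         # analyte emission line
           + interferent[:, None] * line(0.55, 0.05)   # overlapping interferent
           + rng.normal(0, 0.01, (n, wavelengths.size)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, analyte, random_state=0)

pls = PLSRegression(n_components=4).fit(X_tr, y_tr)
ann = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)

print("PLS MAE:", mean_absolute_error(y_te, pls.predict(X_te).ravel()))
print("ANN MAE:", mean_absolute_error(y_te, ann.predict(X_te)))
```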
Role of diversity in ICA and IVA: theory and applications
Tülay Adalı
Independent component analysis (ICA) has been the most popular approach for solving the blind source separation problem. Starting from a simple linear mixing model and the assumption of statistical independence, ICA can recover a set of linearly mixed sources to within a scaling and permutation ambiguity. It has been successfully applied to numerous data analysis problems in areas as diverse as biomedicine, communications, finance, geophysics, and remote sensing. ICA can be achieved using different types of diversity (statistical properties), and it can be posed to simultaneously account for multiple types of diversity such as higher-order statistics, sample dependence, noncircularity, and nonstationarity. A recent generalization of ICA, independent vector analysis (IVA), extends ICA to multiple data sets and adds the use of one more type of diversity, statistical dependence across the data sets, for jointly achieving independent decomposition of multiple data sets. With the addition of each new diversity type, identification of a broader class of signals becomes possible, and in the case of IVA, this includes sources that are independent and identically distributed Gaussians. We review the fundamentals and properties of ICA and IVA when multiple types of diversity are taken into account, and then ask whether diversity plays an important role in practical applications as well. Examples from various domains are presented to demonstrate that in many scenarios it might be worthwhile to jointly account for multiple statistical properties. This paper is submitted in conjunction with the talk delivered for the “Unsupervised Learning and ICA Pioneer Award” at the 2016 SPIE Conference on Sensing and Analysis Technologies for Biomedical and Cognitive Applications.
Pre-trained D-CNN models for detecting complex events in unconstrained videos
Rapid event detection faces an emergent need to process large video collections; whether for surveillance videos or unconstrained web videos, the ability to automatically recognize high-level, complex events is a challenging task. Motivated by pre-existing methods being complex, computationally demanding, and often non-replicable, we designed a simple system that is quick, effective and carries minimal overhead in terms of memory and storage. Our system is clearly described, modular in nature, replicable on any desktop, and demonstrated with extensive experiments, backed by insightful analysis of different Convolutional Neural Networks (CNNs), both stand-alone and fused with others. With a large corpus of unconstrained, real-world video data, we examine the usefulness of different CNN models as feature extractors for modeling high-level events, i.e., pre-trained CNNs that differ in architecture, training data, and number of outputs. For each CNN, we sample frames at 1 fps from all training exemplars and train one-vs-rest SVMs for each event. To represent videos, frame-level features were fused using a variety of techniques, the best being to max-pool between predetermined shot boundaries and then average-pool to form the final video-level descriptor. Through extensive analysis, several insights were found on using pre-trained CNNs as off-the-shelf feature extractors for the task of event detection. Fusing SVMs of different CNNs revealed some interesting facts, with some combinations found to be complementary. It was concluded that no single CNN works best for all events, as some events are more object-driven while others are more scene-based. Our top performance resulted from learning event-dependent weights for different CNNs.
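The pooling-and-SVM stage can be sketched as follows, assuming per-frame features have already been extracted by some pre-trained CNN (feature extraction is not shown): max-pool within predetermined shot boundaries, average-pool the shot descriptors into one video-level vector, and train one-vs-rest SVMs over events. All shapes, class counts, and data here are random placeholders.

```python
# Shot-wise max-pooling, video-level average-pooling, and one-vs-rest linear SVMs.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import cross_val_score

def video_descriptor(frame_features, shot_boundaries):
    """Max-pool frame features within each shot, then average over shots."""
    shots = np.split(frame_features, shot_boundaries)
    pooled = [s.max(axis=0) for s in shots if len(s) > 0]
    return np.mean(pooled, axis=0)

rng = np.random.default_rng(0)
n_videos, feat_dim = 120, 512                        # hypothetical sizes
labels = rng.integers(0, 4, n_videos)                # 4 pretend event classes
videos = []
for label in labels:
    n_frames = rng.integers(60, 180)
    feats = rng.normal(0, 1, (n_frames, feat_dim)) + 0.3 * label   # class signal
    cuts = sorted(rng.choice(np.arange(10, n_frames - 10), size=4, replace=False))
    videos.append(video_descriptor(feats, cuts))

clf = OneVsRestClassifier(LinearSVC(max_iter=5000))
print("event accuracy:",
      cross_val_score(clf, np.array(videos), labels, cv=5).mean())
```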
Independent component analysis decomposition of hospital emergency department throughput measures
We present a method adapted from medical sensor data analysis, viz. independent component analysis of electroencephalography data, to health system analysis. Timely and effective care in a hospital emergency department is measured by throughput measures such as the median time patients spent before being admitted as an inpatient, before being sent home, or before being seen by a healthcare professional. We consider a set of five such measures collected at 3,086 hospitals distributed across the U.S. One model of the performance of an emergency department is that these correlated throughput measures are linear combinations of some underlying sources. The independent component analysis decomposition of the data set can thus be viewed as transforming the set of performance measures collected at a site into a collection of outputs of spatial filters applied to the whole multi-measure data. We compare the independent component sources with the output of conventional principal component analysis to show that the independent components are more suitable for understanding the data sets through visualizations.
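A hedged sketch of the decomposition idea is given below: five correlated throughput measures are modeled as linear mixtures of a few non-Gaussian latent sources, which FastICA attempts to recover (PCA is computed alongside for comparison). The latent "drivers", mixing matrix, and sample size are synthetic stand-ins, not the hospital data.

```python
# FastICA vs. PCA on synthetic correlated throughput measures.
import numpy as np
from sklearn.decomposition import FastICA, PCA

rng = np.random.default_rng(0)
n_hospitals = 3086

# Hypothetical non-Gaussian latent drivers (e.g., staffing, admission bottlenecks).
sources = np.column_stack([rng.laplace(size=n_hospitals),
                           rng.exponential(size=n_hospitals),
                           rng.laplace(size=n_hospitals)])
mixing = rng.normal(size=(3, 5))
measures = sources @ mixing + 0.1 * rng.normal(size=(n_hospitals, 5))

ica_sources = FastICA(n_components=3, random_state=0).fit_transform(measures)
pca_scores = PCA(n_components=3).fit_transform(measures)

# Correlate each true source with its best-matching estimated ICA source.
corr = np.corrcoef(sources.T, ica_sources.T)[:3, 3:]
print("best |correlation| per true source:", np.abs(corr).max(axis=1).round(2))
```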
Radon transform imaging: low-cost video compressive imaging at extreme resolutions
Aswin C. Sankaranarayanan, Jian Wang, Mohit Gupta
Most compressive imaging architectures rely on programmable light modulators to obtain coded linear measurements of a signal. As a consequence, the properties of the light modulator place fundamental limits on the cost, performance, practicality, and capabilities of the compressive camera. For example, the spatial resolution of the single-pixel camera is limited to that of its light modulator, which is seldom greater than 4 megapixels. In this paper, we describe a novel approach to compressive imaging that avoids the use of a spatial light modulator. In its place, we use novel cylindrical optics and a rotation gantry to directly sample the Radon transform of the image focused on the sensor plane. We show that the reconstruction problem is identical to sparse tomographic recovery, so we can leverage the vast literature on compressive magnetic resonance imaging (MRI) to good effect.

The proposed design has many important advantages over existing compressive cameras. First, we can achieve a resolution of N × N pixels using a sensor with N photodetectors; hence, with commercially available SWIR line detectors with 10k pixels, we can potentially achieve spatial resolutions of 100 megapixels, a capability that is unprecedented. Second, our design scales more gracefully across wavebands of light, since we only require sensors and optics that are optimized for the wavelengths of interest; in contrast, spatial light modulators like DMDs require expensive coatings to be effective in non-visible wavebands. Third, we can exploit properties of line detectors, including electronic shutters and pixels with large aspect ratios, to optimize light throughput. On the flip side, a drawback of our approach is the need for moving components in the imaging architecture.
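The sensing model can be sketched numerically as follows: each gantry angle yields one line-sensor readout equal to a projection (one column of the Radon transform), and the image is recovered from a reduced set of angles. This is only an illustration; skimage's SART stands in for the sparse tomographic solvers referenced above, and the phantom, image size, and angle count are arbitrary choices.

```python
# Simulate line-sensor (Radon) measurements at few angles and reconstruct with SART.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon_sart, resize

image = resize(shepp_logan_phantom(), (128, 128))        # scene focused on the sensor
angles = np.linspace(0.0, 180.0, 40, endpoint=False)     # a few gantry rotation steps

sinogram = radon(image, theta=angles)                    # simulated line-sensor data

recon = iradon_sart(sinogram, theta=angles)
for _ in range(3):                                       # a few SART refinement passes
    recon = iradon_sart(sinogram, theta=angles, image=recon)

print("reconstruction RMSE:", np.sqrt(np.mean((recon - image) ** 2)).round(4))
```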
Large Data Analysis
Recent applications of liquid metals featuring nanoscale surface oxides
Taylor V. Neumann, Michael D. Dickey
This proceeding describes recent efforts from our group to control the shape and actuation of liquid metal. The liquid metal is an alloy of gallium and indium that is non-toxic, has negligible vapor pressure, and develops a thin, passivating surface oxide layer. The surface oxide allows the liquid metal to be patterned and shaped into structures that do not minimize interfacial energy. The surface oxide can be selectively removed by changes in pH or by applying a voltage. The surface oxide allows the liquid metal to be 3D printed to form free-standing structures. It also allows the liquid metal to be injected into microfluidic channels and to maintain its shape within the channels. Selective removal of the oxide results in drastic changes in surface tension that can be used to control the flow behavior of the liquid metal. The metal can also wet thin, solid films of metal, which accelerates droplets of the liquid along the metal traces. Here we discuss the properties and applications of liquid metal for making soft, reconfigurable electronics.
Analysis of geographical variations of healthcare providers performance using the empirical mode decomposition
Michael A. Pratt, Henry Chu
Performance of healthcare providers such as hospitals varies from one locale to another. Our goal is to study whether there is a geographical pattern of performance using metrics reported from over 3,000 hospitals distributed across the U.S. Empirical mode decomposition (EMD) is an effective analysis tool for nonlinear and non-stationary signals. It decomposes a data sequence into a series of intrinsic mode functions (IMFs) along with a residue sequence that represents the trend. Each IMF has zero local mean and exactly one zero crossing between any two consecutive local extrema, and can be used to assess the instantaneous frequency. Reconstructing a signal using the residue and the lower-frequency IMFs can reveal the underlying pattern of the signal without undue influence from the higher-frequency fluctuations of the data. We used a space-filling curve to turn a set of performance metrics distributed irregularly across the two-dimensional planar surface into a one-dimensional sequence. The EMD decomposed a set of hospital emergency department median waiting times into 9 IMFs along with a residue. We used the residue and the lower-frequency IMFs to reconstruct a sequence with fewer fluctuations. The sequence was then transformed back to a two-dimensional map to reveal the geographical variations.
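The reconstruction step can be sketched as below, assuming the space-filling-curve ordering has already been applied so the input is a 1-D sequence; the sketch also assumes the third-party PyEMD package ("EMD-signal" on PyPI) and uses a synthetic sequence in place of the hospital data.

```python
# Decompose a 1-D sequence with EMD, then rebuild a smoothed version from the
# residue and the slowest IMFs only.
import numpy as np
from PyEMD import EMD   # assumes the EMD-signal (PyEMD) package is installed

rng = np.random.default_rng(0)
n = 1024
trend = np.linspace(20, 60, n)                        # slow "geographic" trend
signal = trend + 5 * np.sin(np.linspace(0, 40 * np.pi, n)) + rng.normal(0, 3, n)

emd = EMD()
emd(signal)
imfs, residue = emd.get_imfs_and_residue()

smooth = residue + imfs[-2:].sum(axis=0)              # residue plus two slowest IMFs

print("IMFs extracted:", len(imfs))
print("RMSE of smoothed sequence vs. trend:",
      np.sqrt(np.mean((smooth - trend) ** 2)).round(2))
```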
Proteomic data analysis of glioma cancer stem-cell lines based on novel nonlinear dimensional data reduction techniques
Sylvain Lespinats, Katja Pinker-Domenig, Georg Wengert, et al.
Glioma-derived cancer stem cells (GSCs) are tumor-initiating cells that may be refractory to radiation and chemotherapy and thus have important implications for tumor biology and therapeutics. The analysis and interpretation of large proteomic data sets requires the development of new data mining and visualization approaches; traditional techniques are insufficient to interpret and visualize the resulting experimental data. The emphasis of this paper lies in the application of novel approaches for visualization, clustering and projection representation to unveil hidden data structures relevant for the accurate interpretation of biological experiments. These qualitative and quantitative methods are applied to the proteomic analysis of data sets derived from the GSCs. The achieved clustering and visualization results provide a more detailed insight into the protein-level fold changes and putative upstream regulators for the GSCs. However, the extracted molecular information is not yet sufficient for classifying GSCs and paving the way to improved therapeutics for heterogeneous glioma.
Smart sensing surveillance video system
An intelligent video surveillance system is able to detect and identify abnormal and alarming situations by analyzing object movement. The Smart Sensing Surveillance Video (S3V) System is proposed to minimize video processing and transmission, thus allowing a fixed number of cameras to be connected to the system and making it suitable for remote battlefield, tactical, and civilian applications including border surveillance, special force operations, airfield protection, and perimeter and building protection. The S3V System would be more effective if equipped with visual understanding capabilities to detect, analyze, and recognize objects, track motions, and predict intentions. In addition, alarm detection is performed on the basis of parameters of the moving objects and their trajectories, using semantic reasoning and ontologies. The S3V System capabilities and technologies have great potential for both military and civilian applications, enabling highly effective security support tools for improving surveillance activities in densely crowded environments. It would be directly applicable to solutions for emergency response personnel, law enforcement, and other homeland security missions, as well as to applications requiring the interoperation of sensor networks with handheld or body-worn interface devices.
3D MEMS sensor for applications in earthquake early detection and nowcast
Jerry Wu, Jing Liang, Harold Szu
This paper presents a 3D microelectromechanical systems (MEMS) sensor system to quickly and reliably identify the precursors that precede every earthquake. When a precursor is detected and is expected to be followed by a major earthquake, the sensor system will analyze and determine the magnitude of the earthquake. The newly proposed 3D MEMS sensor can provide P-wave, S-wave, and surface-wave measurements along with timing information to a data processing unit. The resulting data are processed and filtered continuously by a set of proposed built-in programmable digital signal processing (DSP) filters in order to remove noise and other disturbances and to determine an earthquake pattern. Our goal is to reliably initiate an alarm before the arrival of the destructive waves.