Proceedings Volume 10572

13th International Conference on Medical Information Processing and Analysis

Eduardo Romero, Natasha Lepore, Jorge Brieva, et al.

Volume Details

Date Published: 29 November 2017
Contents: 16 Sessions, 56 Papers, 0 Presentations
Conference: 13th International Symposium on Medical Information Processing and Analysis 2017
Volume Number: 10572

Table of Contents

  • Front Matter: Volume 10572
  • Digital Pathology
  • Other Modalities
  • Motion and Gait Analysis
  • Brain
  • Alzheimer's Disease
  • e-Health and Patient Empowerment
  • Fetal and Pediatric Brain Imaging
  • Signal Analysis
  • MRI Phantoms
  • Image Processing
  • Radiation Therapy Planning
  • Artificial Neural Network
  • Heart
  • Deep Learning and Deep Architectures
  • Medical Software Development
Front Matter: Volume 10572
Front Matter: Volume 10572
This PDF file contains the front matter associated with SPIE Proceedings Volume 10572, including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Digital Pathology
Quantifying expert diagnosis variability when grading tumor-infiltrating lymphocytes
Paula Toro, Germán Corredor, Xiangxue Wang, et al.
Tumor-infiltrating lymphocytes (TILs) have proved to play an important role in predicting prognosis, survival, and response to treatment in patients with a variety of solid tumors. Unfortunately, there is currently no standardized methodology to quantify the infiltration grade. The aim of this work is to evaluate the variability among the TIL reports given by a group of pathologists who examined a set of digitized Non-Small Cell Lung Cancer samples (n=60). 28 pathologists evaluated a varying number of the histopathological images. The agreement among pathologists was evaluated by computing the Kappa index coefficient and the standard deviation of their estimations. Furthermore, the TIL reports were correlated with the patients' prognosis and survival using Pearson's correlation coefficient. General results show that the agreement among experts grading TILs in the dataset is low, since Kappa values remain below 0.4 and the standard deviation values show that full consensus was not reached for any image. Finally, the correlation coefficient for each pathologist also reveals a low association between the pathologists' predictions and the prognosis/survival data. Results suggest the need to define standardized, objective, and effective strategies to evaluate TILs, so they could be used as a biomarker in the daily routine.
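A minimal sketch of the kind of agreement analysis described in this abstract: pairwise Cohen's kappa among raters, the per-image spread of grades, and a per-rater Pearson correlation against survival. The grading scale, the random data and all variable names are hypothetical, not the study's data.

```python
import itertools
import numpy as np
from sklearn.metrics import cohen_kappa_score
from scipy.stats import pearsonr

# grades[r, i] = TIL grade given by rater r to image i (np.nan if unanswered)
grades = np.random.randint(0, 4, size=(28, 60)).astype(float)   # placeholder grades
survival_months = np.random.uniform(1, 120, size=60)            # placeholder outcomes

# Pairwise Cohen's kappa over images answered by both raters
kappas = []
for r1, r2 in itertools.combinations(range(grades.shape[0]), 2):
    mask = ~np.isnan(grades[r1]) & ~np.isnan(grades[r2])
    if mask.sum() > 1:
        kappas.append(cohen_kappa_score(grades[r1, mask], grades[r2, mask]))
print("mean pairwise kappa:", np.mean(kappas))

# Per-image standard deviation of the grades (full consensus -> 0)
print("images with full consensus:", np.sum(np.nanstd(grades, axis=0) == 0))

# Per-rater correlation between grades and survival
for r in range(grades.shape[0]):
    mask = ~np.isnan(grades[r])
    rho, _ = pearsonr(grades[r, mask], survival_months[mask])
```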
A lymphocyte spatial distribution graph-based method for automated classification of recurrence risk on lung cancer images
Juan D. García-Arteaga, Germán Corredor, Xiangxue Wang, et al.
Tumor infiltration by lymphocytes occurs when various classes of white blood cells migrate from the blood stream towards the tumor, infiltrating it. The presence of TILs is predictive of the response of the patient to therapy. In this paper, we show how the automatic detection of lymphocytes in digital H&E histopathological images and the quantitative evaluation of the global lymphocyte configuration, evaluated through global features extracted from non-parametric graphs constructed from the lymphocytes' detected positions, can be correlated to the patient's outcome in early-stage non-small cell lung cancer (NSCLC). The method was assessed on a tissue microarray cohort composed of 63 NSCLC cases. Of the evaluated graphs, minimum spanning trees and K-nn showed the highest predictive ability, yielding F1 scores of 0.75 and 0.72 and accuracies of 0.67 and 0.69, respectively. The predictive power of the proposed methodology indicates that graphs may be used to develop objective measures of the infiltration grade of tumors, which can, in turn, be used by pathologists to improve the decision making and treatment planning processes.
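An illustrative sketch (not the authors' code) of turning detected lymphocyte coordinates into the global graph features mentioned above: minimum-spanning-tree edge statistics and k-NN graph edge-length statistics. Coordinates and the choice of k are assumptions.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree
from sklearn.neighbors import kneighbors_graph

coords = np.random.rand(200, 2) * 1000          # hypothetical lymphocyte centroids (pixels)

# Minimum spanning tree over the complete Euclidean graph
dist = squareform(pdist(coords))
mst = minimum_spanning_tree(dist).toarray()
mst_edges = mst[mst > 0]
mst_features = [mst_edges.mean(), mst_edges.std(), mst_edges.max()]

# k-NN graph (k assumed to be 5): edge-length statistics
knn = kneighbors_graph(coords, n_neighbors=5, mode="distance")
knn_edges = knn.data
knn_features = [knn_edges.mean(), knn_edges.std()]

feature_vector = np.array(mst_features + knn_features)  # fed to a classifier downstream
```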
A sparse representation of the pathologist's interaction with whole slide images to improve the assigned relevance of regions of interest
Daniel Santiago, Germán Corredor, Eduardo Romero
During a diagnosis task, a pathologist looks over a Whole Slide Image (WSI), aiming to find relevant pathological patterns. However, a virtual microscope captures these structures along with other cellular patterns of different or no diagnostic meaning. Annotation of these images depends on manual delineation, which in practice becomes a hard task. This article contributes a new method for detecting relevant regions in WSI using the routine navigations in a virtual microscope. The method constructs a sparse representation, or dictionary, of each navigation path and determines the hidden relevance by maximizing the incoherence between several paths. The resulting dictionaries are then projected onto each other and relevance is assigned to the dictionary atoms whose similarity is higher than a custom threshold. Evaluation was performed with 6 pathological images segmented from a skin biopsy already diagnosed with basal cell carcinoma (BCC). Results show that our proposal outperforms the baseline by more than 20%.
Scoring nuclear pleomorphism using a visual BoF modulated by a graph structure
Ricardo Moncayo-Martínez, David Romo-Bucheli, Viviana Arias, et al.
Nuclear pleomorphism has been recognized as a key histological criterion in breast cancer grading systems (such as the Bloom-Richardson and Nottingham grading systems). However, nuclear pleomorphism assessment is subjective and presents high inter-reader variability. Automatic algorithms might facilitate quantitative estimation of nuclear variations in shape and size. Nevertheless, automatic segmentation of the nuclei is difficult and still an open research problem. This paper presents a method that uses a bag of multi-scale visual features (BoF), modulated by a graph structure, to grade nuclei in breast cancer microscopical fields. The strategy constructs hematoxylin-eosin image patches, each containing a nucleus that is represented by a set of visual words in the BoF. The contribution of each visual word is computed by examining the visual words in an associated graph built when projecting the multi-dimensional BoF to a bi-dimensional plane where local relationships are conserved. The methodology was evaluated using 14 breast cancer cases from the Cancer Genome Atlas database. From these cases, a set of 134 microscopical fields was extracted, and under a leave-one-out validation scheme, an average F-score of 0.68 was obtained.
Other Modalities
Modelling and validation of diffuse reflectance of the adult human head for fNIRS: scalp sub-layers definition
Javier Herrera-Vega, Samuel Montero-Hernández, Ilias Tachtsidis, et al.
Accurate estimation of brain haemodynamic parameters such as cerebral blood flow and volume, as well as oxygen consumption (i.e. the metabolic rate of oxygen), with functional near infrared spectroscopy (fNIRS) requires precise characterization of light propagation through head tissues. An anatomically realistic forward model of the adult human head with an unprecedentedly detailed specification of the 5 scalp sublayers, to account for blood irrigation in the connective tissue layer, is introduced. The full model consists of 9 layers, accounts for optical properties ranging from 750 nm to 950 nm, and has a voxel size of 0.5 mm. The whole model is validated by comparing the predicted remitted spectra, using Monte Carlo simulations of radiation propagation with 10^8 photons, against continuous wave (CW) broadband fNIRS experimental data. As the true oxy- and deoxy-haemoglobin concentrations during acquisition are unknown, a genetic algorithm searched for the vector of parameters that generates a modelled spectrum optimally fitting the experimental spectrum. Differences between experimental and model-predicted spectra were quantified using the root mean square error (RMSE). The RMSE was 0.071 ± 0.004, 0.108 ± 0.018 and 0.235 ± 0.015 at 1, 2 and 3 cm interoptode distance, respectively. The parameter vector of absolute concentrations of haemoglobin species in scalp and cortex retrieved with the genetic algorithm was within histologically plausible ranges. The new model's capability to estimate the contribution of scalp blood flow shall permit incorporating this information into the regularization of the inverse problem for a cleaner reconstruction of brain haemodynamics.
Supporting the potential of quantitative ultrasonic techniques for the evaluation of platelet concentration
J. A. Villamarín, Y. M. Jiménez, L. Tatiana Molano, et al.
This article describes the results obtained with a non-destructive, non-invasive ultrasonic system for the acoustic characterization of bovine platelet-rich plasma using digital signal processing techniques. The study includes computational methods based on acoustic spectrometry estimation and experimental measurements of the speed of sound in blood plasma from the different samples analyzed, using an ultrasonic field with a resonance frequency of 5 MHz. The results showed that measurements on ultrasonic signals can contribute to hematological predictions: a linear regression model applied to the relationship between the calculated experimental ultrasonic parameters and platelet concentration indicated a growth rate of 1 m/s for each 0.90 x 10^3 platelets per mm^3. On the other hand, the attenuation coefficient presented changes of 20% with platelet concentration, with a resolution of 0.057 dB/cm MHz.
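A hedged sketch of the kind of linear fit described above, relating measured speed of sound to platelet concentration. The numbers below are placeholders, not the study's measurements.

```python
import numpy as np

platelets = np.array([150, 250, 350, 450, 550], dtype=float)   # x10^3 per mm^3 (hypothetical)
sos = np.array([1570.0, 1570.1, 1570.2, 1570.3, 1570.45])      # m/s (hypothetical)

# Least-squares line: speed of sound as a function of platelet concentration
slope, intercept = np.polyfit(platelets, sos, deg=1)
# slope ~ increase in m/s per 10^3 platelets/mm^3; the paper reports roughly
# 1 m/s per 0.90 x 10^3 platelets per mm^3
print(f"speed of sound ~ {slope:.4f} * concentration + {intercept:.1f}")
```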
Tumor angiogenesis assessment using multi-fluorescent scans on murine slices by Markov random field framework
Oumeima Laifa, Delphine Le Guillou-Buffello, Daniel Racoceanu
The fundamental role of vascular supply in tumor growth makes the evaluation of angiogenesis crucial in assessing the effect of anti-angiogenic therapies. For many years, such therapies have been designed to inhibit the vascular endothelial growth factor (VEGF). To contribute to the assessment of the effect of an anti-angiogenic agent (Pazopanib) on vascular and cellular structures, we acquired data from tumors extracted from a murine tumor model using Multi-Fluorescence Scanning. In this paper, we implemented an unsupervised algorithm combining Watershed segmentation and a Markov Random Field model (MRF). This algorithm allowed us to quantify the proportion of apoptotic endothelial cells and to generate maps according to cell density. A stronger association between apoptosis and endothelial cells was revealed in the tumors receiving anti-angiogenic therapy (n = 4) as compared to those receiving placebo (n = 4). A high percentage of apoptotic cells in the tumor area are endothelial. Lower cell densities were detected in tumor slices presenting larger apoptotic endothelial areas.
Motion and Gait Analysis
Normal human gait patterns in Peruvian individuals: an exploratory assessment using VICON motion capture system
R. Dongo, M. Moscoso, R. Callupe, et al.
Gait analysis is of clinical relevance. However, the normal gait patterns reported in the foreign literature may differ from those of local individuals. The aim of this study was to determine the normal gait patterns and parameters of Peruvian individuals in order to have a local reference for clinical assessment, diagnosis, and treatment of Peruvian people with lower motor neuron injuries. A descriptive study with 34 subjects was conducted to assess their gait cycle. VICON® cameras were used to capture body movements. For the analyses, we calculated spatiotemporal gait parameters and average angles of displacement of the hip, knee, and ankle joints with their respective 95% confidence intervals. The results showed a gait speed of 0.58 m/s, a cadence of 102.1 steps/min, and angular displacements of the hip, knee and ankle joints all lower than those described in the literature. In the graphs, gait cycles were close to those reported in previous studies, but the parameters of speed, cadence and angles of displacement are lower than those shown in the literature. These results could be used as a better reference pattern in the clinical setting.
Quantifying gait patterns in Parkinson's disease
Parkinson’s disease (PD) is characterized by a set of motor symptoms, namely tremor, rigidity, and bradykinesia, which are usually described but not quantified. This work proposes an objective characterization of PD gait patterns by approximating the single stance phase as a single grounded pendulum. This model estimates the force generated during the single support phase from gait data. This force describes the motion pattern for different stages of the disease. The model was validated using recorded videos of 8 young control subjects, 10 old control subjects and 10 subjects with Parkinson’s disease in different stages. The estimated force showed differences among stages of Parkinson’s disease, with a decrease of the estimated force for the advanced stages of the illness.
Quantifying stimulus-response rehabilitation protocols by auditory feedback in Parkinson's disease gait pattern
External auditory cues stimulate motor-related areas of the brain, activating motor pathways parallel to the basal ganglia circuits and providing a temporal pattern for gait. In effect, patients may re-learn motor skills mediated by compensatory neuroplasticity mechanisms. However, long-term functional gains depend on the nature of the pathology, follow-up is usually limited, and reinforcement by healthcare professionals is crucial. Aiming to cope with these challenges, several research efforts and device implementations provide auditory or visual stimulation to improve the Parkinsonian gait pattern, inside and outside clinical scenarios. The current work presents a semi-automated strategy for spatiotemporal feature extraction to study the relations between auditory temporal stimulation and the spatiotemporal gait response. A protocol for auditory stimulation was built to evaluate how well the strategy integrates into clinical practice. The method was evaluated in a transversal measurement with an exploratory group of people with Parkinson's disease (n = 12, in stages 1, 2 and 3) and control subjects (n = 6). The results showed a strong linear relation between auditory stimulation and cadence response in control subjects (R = 0.98 ± 0.008) and PD subjects in stage 2 (R = 0.95 ± 0.03) and stage 3 (R = 0.89 ± 0.05). Normalized step length showed a variable response between low and high gait velocity (R between 0.2 and 0.97). The correlation between normalized mean velocity and stimulus was strong for PD stage 2 (R > 0.96), PD stage 3 (R > 0.84) and controls (R > 0.91) in all experimental conditions. Among participants, the largest variation from baseline was found in PD subjects in stage 3 (53.61 ± 39.2 steps/min, 0.12 ± 0.06 in step length and 0.33 ± 0.16 in mean velocity); in this group these values were higher than their own baseline. These variations are related to the direct effect of the metronome frequency on cadence and velocity. The variation of step length involves different regulation strategies and could need other specific external cues. In conclusion, the current protocol (and its selected parameters: kind of sound, time for training, step of variation, range of variation) provides a suitable gait facilitation method, especially for patients with the highest gait disturbance (stages 2 and 3). The method should be adjusted for initial stages and evaluated in a rehabilitation program.
Cerebral palsy characterization by estimating ocular motion
Jully González, Angélica Atehortúa, Ricardo Moncayo, et al.
Cerebral palsy (CP) is a large group of motion and posture disorders caused during fetal or infant brain development. Sensory impairment is commonly found in children with CP; between 40 and 75 percent present some form of vision problem or disability. An automatic characterization of cerebral palsy is herein presented by estimating the ocular motion during a gaze-pursuit task. Specifically, after automatically detecting the eye location, an optical flow algorithm tracks the eye motion following a pre-established visual assignment. Subsequently, the optical flow trajectories are characterized in the velocity-acceleration phase plane. Differences are quantified in a small set of patients aged four to ten years.
Brain
Examination of corticothalamic fiber projections in United States service members with mild traumatic brain injury
Faisal M. Rashid, Emily L. Dennis, Julio E. Villalon-Reina, et al.
Mild traumatic brain injury (mTBI) is characterized clinically by a closed head injury involving differential or rotational movement of the brain inside the skull. Over 3 million mTBIs occur annually in the United States alone. Many of the individuals who sustain an mTBI go on to recover fully, but around 20% experience persistent symptoms, often lasting many weeks to several months. The thalamus, a structure known to serve as a global networking or relay system for the rest of the brain, may play a critical role in neurorehabilitation, and its integrity and connectivity after injury may also affect cognitive outcomes. To examine the thalamus, conventional tractography methods to map corticothalamic pathways with diffusion-weighted MRI (DWI) lead to sparse reconstructions that may contain false-positive fibers that are anatomically inaccurate. Using a specialized method to zero in on corticothalamic pathways with greater robustness, we noninvasively examined corticothalamic fiber projections using DWI in 68 service members. We found significantly lower fractional anisotropy (FA), a measure of white matter microstructural integrity, in pathways projecting to the left pre- and postcentral gyri, consistent with sensorimotor deficits often found post-mTBI. Mapping of neural circuitry in mTBI may help to further our understanding of the mechanisms underlying recovery post-TBI.
Volumetric multimodality neural network for brain tumor segmentation
Laura Silvana Castillo, Laura Alexandra Daza, Luis Carlos Rivera, et al.
Brain lesion segmentation is one of the hardest tasks to solve in computer vision, with an emphasis on the medical field. We present a convolutional neural network that produces a semantic segmentation of brain tumors and is capable of processing volumetric data along with information from multiple MRI modalities at the same time. This results in the ability to learn from small training datasets and highly imbalanced data. Our method is based on DeepMedic, the state of the art in brain lesion segmentation. We develop a new architecture with more convolutional layers, organized in three parallel pathways with different input resolutions, and additional fully connected layers. We tested our method on the 2015 BraTS Challenge dataset, reaching an average Dice coefficient of 84%, while the standard DeepMedic implementation reached 74%.
Gaussian mixture models for detection of autism spectrum disorders (ASD) in magnetic resonance imaging
Javier Almeida, Nelson Velasco, Charlens Alvarez, et al.
Autism Spectrum Disorder (ASD) is a complex neurological condition characterized by a triad of signs: stereotyped behaviors and verbal and non-verbal communication problems. The scientific community has been interested in quantifying anatomical brain alterations of this disorder. Several studies have focused on measuring brain cortical and sub-cortical volumes. This article presents a fully automatic method that finds differences between patients diagnosed with autism and control subjects. After the usual pre-processing, a template (MNI152) is registered to the evaluated brain, which is then divided into a set of regions. Each of these regions is represented by the normalized histogram of intensities, which is approximated by a Gaussian mixture model (GMM). The gray and white matter are separated to calculate the mean and standard deviation of each Gaussian. These features are then used to train, region per region, a binary SVM classifier. The method was evaluated on an adult population aged 18 to 35 years from the public database Autism Brain Imaging Data Exchange (ABIDE). The highest discrimination was found for the Right Middle Temporal Gyrus, with an Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) of 0.72.
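An illustrative sketch of the per-region pipeline described above: fit a Gaussian mixture to a region's intensity distribution and use the component means and standard deviations as features for a binary SVM. The data, the number of components and the labels are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def region_features(intensities, n_components=2):
    """Return (mean, std) of each Gaussian fitted to a region's intensities."""
    gmm = GaussianMixture(n_components=n_components, random_state=0)
    gmm.fit(intensities.reshape(-1, 1))
    order = np.argsort(gmm.means_.ravel())          # e.g. gray matter first, white matter second
    means = gmm.means_.ravel()[order]
    stds = np.sqrt(gmm.covariances_.ravel()[order])
    return np.concatenate([means, stds])

# One feature vector per subject for a given template region (hypothetical data)
X = np.vstack([region_features(np.random.normal(loc=100, scale=15, size=5000))
               for _ in range(40)])
y = np.array([0] * 20 + [1] * 20)                   # 0 = control, 1 = ASD (placeholder labels)
clf = SVC(kernel="linear").fit(X, y)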
Brain cortical structural differences between non-central nervous system cancer patients treated with and without chemotherapy compared to non-cancer controls: a cross-sectional pilot MRI study using clinically indicated scans
Mark S. Shiroishi, Vikash Gupta, Bavrina Bigjahan, et al.
Background: Increases in cancer survival have made understanding the basis of cancer-related cognitive impairment (CRCI) more important. CRCI neuroimaging studies have traditionally used dedicated research brain MRIs in breast cancer survivors with small sample sizes; little is known about other non-CNS cancers. However, there is a wealth of unused data from clinically indicated MRIs that could be used to study CRCI. Objective: Evaluate brain cortical structural differences in those with non-CNS cancers using clinically indicated MRIs. Design: Cross-sectional. Patients: Adult non-CNS cancer and non-cancer control (C) patients who underwent clinically indicated MRIs. Methods: Brain cortical surface area and thickness were measured using 3D T1-weighted images. An age-adjusted linear regression model was used and the Benjamini-Hochberg false discovery rate (FDR) corrected for multiple comparisons. Group comparisons were: cancer cases with chemotherapy (Ch+), cancer cases without chemotherapy (Ch-) and a subgroup of lung cancer (LCa) cases with and without chemotherapy vs. C. Results: Sixty-four subjects were analyzed: 22 Ch+, 23 Ch- and 19 C patients. A subgroup analysis of 16 LCa cases was also performed. Statistically significant decreases in either cortical surface area or thickness were found in multiple ROIs, primarily within the frontal and temporal lobes, for all comparisons. Limitations: Several limitations were apparent, including a small sample size that precluded adjustment for other covariates. Conclusions: Our preliminary results suggest that various types of non-CNS cancers, both with and without chemotherapy, may result in brain structural abnormalities. Also, there is a wealth of untapped clinical MRIs that could be used for future CRCI studies.
Alzheimer's Disease
Quantifying cognition and behavior in normal aging, mild cognitive impairment, and Alzheimer's disease
The diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI) is based on neuropsychological evaluation of the patient. Different cognitive and memory functions are assessed with a battery of tests composed of items devised to specifically evaluate such upper functions. This work aims to identify and quantify the factors that determine performance in the neuropsychological evaluation by conducting an Exploratory Factor Analysis (EFA). For this purpose, using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), EFA was applied to 67 item scores taken from the baseline neuropsychological battery of the three phases of the ADNI study. The factors found are directly related to specific brain functions such as memory, behavior, orientation, or verbal fluency. The identification of factors is followed by the calculation of factor scores given by weighted linear combinations of the item scores.
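A minimal sketch of exploratory factor analysis on item scores followed by factor-score computation. The item matrix, the number of factors and the rotation are placeholders and assumptions, not the ADNI data or the authors' exact settings.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

items = np.random.rand(800, 67)                       # subjects x 67 item scores (hypothetical)
X = StandardScaler().fit_transform(items)

# 6 factors assumed; varimax rotation requires scikit-learn >= 0.24
fa = FactorAnalysis(n_components=6, rotation="varimax", random_state=0)
fa.fit(X)

loadings = fa.components_.T                           # items x factors: which items load on which factor
factor_scores = fa.transform(X)                       # weighted linear combinations of the item scores
```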
Characterizing brain patterns in conversion from mild cognitive impairment (MCI) to Alzheimer's disease
Structural Magnetic Resonance (MR) brain images should provide quantitative information about the stage and progression of Alzheimer's disease. However, the use of MRI is limited and practically reduced to corroborating a diagnosis already established with neuropsychological tools. This paper presents an automated strategy for the extraction of relevant anatomic patterns related to the conversion from mild cognitive impairment (MCI) to Alzheimer's disease (AD) using T1-weighted MR images. The process starts by representing each of the possible classes with models generated from a linear combination of volumes. The difference between models allows us to establish the regions where relevant patterns might be located. The approach searches for patterns in a space of brain sulci, herein approximated by the most representative gradients found in regions of interest defined by the difference between the linear models. This hypothesis is assessed by training a conventional SVM model with the found relevant patterns under a leave-one-out scheme. The resultant AUC was 0.86 for the group of women and 0.61 for the group of men.
Deep-learning-based classification of FDG-PET data for Alzheimer's disease categories
Shibani Singh, Anant Srivastava, Liang Mi, et al.
Fluorodeoxyglucose (FDG) positron emission tomography (PET) measures the decline in the regional cerebral metabolic rate for glucose, offering a reliable metabolic biomarker even in presymptomatic Alzheimer's disease (AD) patients. PET scans provide functional information that is unique and unavailable using other types of imaging. However, the computational efficacy of FDG-PET data alone for the classification of the various Alzheimer's diagnostic categories has not been well studied. This motivates us to discriminate the various AD diagnostic categories using FDG-PET data. Deep learning has improved state-of-the-art classification accuracies in the areas of speech, signal, image, video and text mining and recognition. We propose novel methods that involve probabilistic principal component analysis on max-pooled and mean-pooled data for dimensionality reduction, and a multilayer feed-forward neural network that performs binary classification. Our experimental dataset consists of baseline data from the Alzheimer's Disease Neuroimaging Initiative (ADNI), including 186 cognitively unimpaired (CU) subjects, 336 mild cognitive impairment (MCI) subjects (158 Late MCI and 178 Early MCI), and 146 AD patients. We measured the F1-measure, precision, recall, and negative and positive predictive values with a 10-fold cross-validation scheme. Our results indicate that the designed classifiers achieve competitive results, with max-pooling achieving better classification performance than mean-pooled features. Our deep-model-based research may advance FDG-PET analysis by demonstrating its potential as an effective imaging biomarker of AD.
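A hedged sketch of the kind of pipeline outlined above: max-pooling of voxel values, PCA for dimensionality reduction (standing in for the probabilistic PCA the authors mention), and a feed-forward network evaluated with 10-fold cross-validation. All shapes, pooling block sizes and labels are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def max_pool(voxel_values, block=4):
    """Max-pool a flattened 1D vector of voxel values in blocks of `block`."""
    n = (len(voxel_values) // block) * block
    return voxel_values[:n].reshape(-1, block).max(axis=1)

# CU vs AD binary task with placeholder FDG-PET vectors
X = np.vstack([max_pool(np.random.rand(4096)) for _ in range(332)])
y = np.array([0] * 186 + [1] * 146)

model = make_pipeline(StandardScaler(),
                      PCA(n_components=50),
                      MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0))
scores = cross_val_score(model, X, y, cv=10, scoring="f1")   # 10-fold cross-validated F1
```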
Detection of the default mode network by an anisotropic analysis
This document presents a proposal devoted to improving the detection of the default mode network (DMN) in resting-state functional magnetic resonance imaging under noisy conditions caused by head movement. The proposed approach is inspired by the hierarchical treatment of information, in particular at the level of the brain basal ganglia. Essentially, the fact that information must be selected and reduced suggests that the propagation of information in the Central Nervous System (CNS) is anisotropic. Under this hypothesis, the reconstruction of activation information should follow an anisotropic pattern. In this work, an anisotropic filter is used to recover the DMN when perturbed by simulated motion artifacts. The results obtained show this approach outperforms state-of-the-art methods by 5.93% in PSNR.
e-Health and Patient Empowerment
Empowerment of diabetic patients through mHealth technologies and education: development of a pilot self-management application
G. Gustin, B. Macq, D. Gruson, et al.
Diabetes is a major, global and increasing condition that occurs when the insulin-glucagon regulatory mechanism is affected, leading to uncontrolled hyper- and hypoglycaemia events that may be life-threatening. However, it has been shown that through daily monitoring and appropriate patient-specific empowerment, the lifestyle behavior of diabetics can be positively influenced and the associated, costly diabetes complications significantly reduced. As personal face-to-face coaching is costly and hard to scale, mobile applications and services have now become a key driver of mobile health (mHealth) deployment, especially as a helpful means of self-management.

Despite the huge mHealth market, a major limitation of many diabetes apps is that they do not use the inputted data to help patients determine their daily insulin doses. On the other hand, the majority of existing insulin dose calculator apps provide no protection against - or may even actively contribute to - incorrect or inappropriate dose recommendations that put users at risk. Besides, there is clear evidence that a lack of education on insulin therapy and carbohydrate counting is associated with higher blood glucose variability in type 1 diabetes. Hence, there is a need for accurate modelling of glucose-insulin dynamics together with adequate educational support.

The aims of this paper are: a) to highlight the usefulness of mHealth technologies in chronic disease management; b) to describe and discuss the development of an insulin bolus calculator integrated into a pilot mHealth app; c) to underline the importance of diabetes self-management education.
Phase-based motion magnification video for monitoring of vital signals using the Hermite transform
In this paper we present a new Eulerian phase-based motion magnification technique using the Hermite Transform (HT) decomposition, which is inspired by the Human Visual System (HVS). We test our method on a sequence of a newborn baby breathing and on a video sequence showing the heartbeat at the wrist. We detect and magnify the heart pulse by applying our technique. Our motion magnification approach is compared to the Laplacian phase-based approach by means of quantitative metrics (based on the RMS error and the Fourier transform) that measure the quality of both reconstruction and magnification. In addition, a noise robustness analysis is performed for the two methods.
Brain-computer interface based on detection of movement intention as a means of brain wave modulation enhancement
Movement intention (MI) is the mental state in which an action that implies movement is desired. Certain signals, mainly obtained in the primary motor cortex, are directly related to MI. These signals can be used in a brain-computer interface (BCI). BCIs have a wide variety of applications for the general population, classified into two groups: optimization of conventional neuromuscular performance and enhancement of conventional neuromuscular performance beyond normal capacities. The main goal of this project is to analyze whether neural rhythm modulation enhancement could be achieved by practicing with a BCI based on MI detection, which was designed in a previous study. A six-session experiment was conducted with eight healthy subjects. Each session was composed of two stages: a training stage and a testing stage, which allowed control of a videogame. The scores in the game were recorded and analyzed. Changes in the alpha and beta bands were also analyzed in order to observe whether attention could in fact be enhanced. The results obtained were partially satisfactory, as most subjects showed a clear improvement in performance at some point in the trials. In addition, the alpha-to-beta wave ratio of all the tasks was analyzed to observe whether it changes as the experiment progresses. The results are promising, and a different protocol must be implemented to assess the impact of the BCI on the attention span, which can be analyzed with the alpha and beta waves.
Fetal and Pediatric Brain Imaging
Cranial thickness changes in early childhood
Niharika Gajawelli, Sean Deoni, Jie Shi, et al.
The neurocranium changes rapidly in early childhood to accommodate the developing brain. However, developmental disorders may cause abnormal growth of the neurocranium, the most common one being craniosynostosis, affecting about 1 in 2000 children. It is important to understand how the brain and neurocranium develop together to understand the role of the neurocranium in neurodevelopmental outcomes. However, the neurocranium is not as well studied as the human brain in early childhood, due to a lack of imaging data. CT is typically employed to investigate the cranium, but, due to ionizing radiation, may only be used for clinical cases. However, the neurocranium is also visible on magnetic resonance imaging (MRI). Here, we used a large dataset of MRI images from healthy children in the age range of 1 to 2 years old and extracted the neurocranium. A conformal geometry based analysis pipeline is implemented to determine a set of statistical atlases of the neurocranium. A growth model of the neurocranium will help us understand cranial bone and suture development with respect to the brain, which will in turn inform better treatment strategies for neurocranial disorders.
Altered network topology in pediatric traumatic brain injury
Emily L. Dennis, Faisal Rashid, Talin Babikian, et al.
Outcome after a traumatic brain injury (TBI) is quite variable, and this variability is not solely accounted for by severity or demographics. Identifying sub-groups of patients who recover faster or more fully will help researchers and clinicians understand sources of this variability, and hopefully lead to new therapies for patients with a more prolonged recovery profile. We have previously identified two subgroups within the pediatric TBI patient population with different recovery profiles based on an ERP-derived (event-related potential) measure of interhemispheric transfer time (IHTT). Here we examine structural network topology across both patient groups and healthy controls, focusing on the ‘rich-club’ - the core of the network, marked by high degree nodes. These analyses were done at two points post-injury - 2-5 months (post-acute), and 13-19 months (chronic). In the post-acute time-point, we found that the TBI-slow group, those showing longitudinal degeneration, showed hyperconnectivity within the rich-club nodes relative to the healthy controls, at the expense of local connectivity. There were minimal differences between the healthy controls and the TBI-normal group (those patients who show signs of recovery). At the chronic phase, these disruptions were no longer significant, but closer analysis showed that this was likely due to the loss of power from a smaller sample size at the chronic time-point, rather than a sign of recovery. We have previously shown disruptions to white matter (WM) integrity that persist and progress over time in the TBI-slow group, and here we again find differences in the TBI-slow group that fail to resolve over the first year post-injury.
Statistical shape (ASM) and appearance (AAM) models for the segmentation of the cerebellum in fetal ultrasound
The cerebellum is an important structure for determining the gestational age of the fetus; moreover, most of the abnormalities it presents are related to growth disorders. In this work, we present the results of the segmentation of the fetal cerebellum applying statistical shape and appearance models. Both models were tested on ultrasound images of the fetal brain taken from 23 pregnant women between 18 and 24 gestational weeks. The accuracy results obtained on 11 ultrasound images show a mean Hausdorff distance of 6.08 mm between the manual segmentation and the segmentation using the active shape model, and a mean Hausdorff distance of 7.54 mm between the manual segmentation and the segmentation using the active appearance model. The reported results demonstrate that the active shape model is more robust for the segmentation of the fetal cerebellum in ultrasound images.
Bayesian automated cortical segmentation for neonatal MRI
Zane Chou, Natacha Paquette, Bhavana Ganesh, et al.
Several attempts have been made in the past few years to develop and implement automated segmentation of neonatal brain structural MRI. However, accurate automated MRI segmentation remains challenging in this population because of the low signal-to-noise ratio, large partial volume effects and inter-individual anatomical variability of the neonatal brain. In this paper, we propose a learning method for segmenting the whole-brain cortical grey matter on neonatal T2-weighted images. We trained our algorithm using a neonatal dataset composed of 3 full-term and 4 preterm infants scanned at term-equivalent age. Our segmentation pipeline combines the FAST algorithm from the FSL library software and a Bayesian segmentation approach to create a threshold matrix that minimizes the error of mislabeling brain tissue types. Our method shows promising results with our pilot training set. In both preterm and full-term neonates, the automated Bayesian segmentation generates a smoother and more consistent parcellation compared to FAST, while successfully removing the subcortical structures and cleaning the edges of the cortical grey matter. This method shows promising refinement of the FAST segmentation by considerably reducing the manual input and editing required from the user, and further improving the reliability and processing time of neonatal MR image analysis. Further improvements will include a larger dataset of training images acquired from different manufacturers.
Signal Analysis
Analysis of sEMG signals using discrete wavelet transform for muscle fatigue detection
L. A. Flórez-Prias, S. H. Contreras-Ortiz
The purpose of the present article is to characterize sEMG signals to determine muscular fatigue levels. To do this, the signal is decomposed using the discrete wavelet transform, which offers noise filtering features, simplicity and efficiency. sEMG signals on the forearm were acquired and analyzed during the execution of cyclic muscular contractions in the presence and absence of fatigue. When the muscle fatigues, the sEMG signal shows more erratic behavior, as more energy is required to maintain the effort level.
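An illustrative sketch of decomposing an sEMG window with the discrete wavelet transform and summarizing the energy per sub-band, which can be tracked over successive contractions as a fatigue indicator. The wavelet family, decomposition level and sampling rate are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

fs = 1000                                             # Hz (assumed sampling rate)
semg = np.random.randn(4 * fs)                        # placeholder 4 s sEMG window

coeffs = pywt.wavedec(semg, wavelet="db4", level=5)   # [cA5, cD5, cD4, cD3, cD2, cD1]
energies = np.array([np.sum(c ** 2) for c in coeffs])
relative_energy = energies / energies.sum()

# With fatigue, spectral content tends to shift toward the lower-frequency
# sub-bands (approximation and high-level detail coefficients).
print(dict(zip(["A5", "D5", "D4", "D3", "D2", "D1"], np.round(relative_energy, 3))))
```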
Wavelets analysis for differentiating solid, non-macroscopic fat containing, enhancing renal masses: a pilot study
Purpose: To evaluate the potential use of wavelet analysis in discriminating benign and malignant renal masses (RM). Materials and Methods: Regions of interest of the whole lesion were manually segmented and co-registered from multiphase CT acquisitions of 144 patients (98 malignant RM: renal cell carcinoma (RCC); 46 benign RM: oncocytoma, lipid-poor angiomyolipoma). The Haar wavelet was used to analyze the grayscale images of the largest segmented tumor in the axial direction. Six metrics (energy, entropy, homogeneity, contrast, standard deviation (SD) and variance), derived from 3 levels of image decomposition in 3 directions (horizontal, vertical and diagonal), were used to quantify tumor texture. An independent t-test or Wilcoxon rank sum test, depending on data normality, was used as exploratory univariate analysis. Stepwise logistic regression and receiver operating characteristic (ROC) curve analysis were used to select predictors and assess prediction accuracy, respectively. Results: Consistently, 5 out of 6 wavelet-based texture measures (all except homogeneity) were higher for malignant tumors compared to benign, when accounting for individual texture direction. Homogeneity was consistently lower in malignant than benign tumors irrespective of direction. SD and variance measured in the diagonal direction on the corticomedullary phase showed a significant (p<0.05) difference between benign and malignant tumors. The multivariate model with variance (3 directions) and SD (vertical direction), extracted from the excretory and pre-contrast phases respectively, showed an area under the ROC curve (AUC) of 0.78 (p < 0.05) in discriminating malignant from benign. Conclusion: Wavelet analysis is a valuable texture evaluation tool to add to radiomics platforms geared at reliably characterizing and stratifying renal masses.
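A minimal sketch (not the study's code) of Haar-wavelet texture measures on a grayscale tumor ROI: three decomposition levels with per-direction energy, entropy, standard deviation and variance. The ROI is a placeholder, and the exact metric definitions are assumptions.

```python
import numpy as np
import pywt

roi = np.random.rand(128, 128)                        # placeholder segmented tumor ROI

features = {}
coeffs = pywt.wavedec2(roi, wavelet="haar", level=3)  # [cA3, (cH3, cV3, cD3), ..., (cH1, cV1, cD1)]
for level, details in zip([3, 2, 1], coeffs[1:]):
    for name, band in zip(["horizontal", "vertical", "diagonal"], details):
        p = np.abs(band) / (np.sum(np.abs(band)) + 1e-12)     # normalized magnitudes for entropy
        features[f"L{level}_{name}_energy"] = np.sum(band ** 2)
        features[f"L{level}_{name}_entropy"] = -np.sum(p * np.log2(p + 1e-12))
        features[f"L{level}_{name}_std"] = band.std()
        features[f"L{level}_{name}_var"] = band.var()
```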
Multi-channel non-invasive fetal electrocardiography detection using wavelet decomposition
Javier Almeida, Josué Ruano, Germán Corredor, et al.
Non-invasive fetal electrocardiography (fECG) has attracted the medical community because of the importance of fetal monitoring. However, its implementation in clinical practice is challenging: the fetal signal has a low signal-to-noise ratio and several signal sources are present in the maternal abdominal electrocardiogram (AECG). This paper presents a novel method to detect the fetal signal from a multi-channel maternal AECG. The method begins by applying filtering and detrending to the AECG signals. Afterwards, the maternal QRS complexes are identified and subtracted. The residual signals are used to detect the fetal QRS complexes: intervals of these signals are analyzed using a wavelet decomposition, and the resulting representation feeds a previously trained Random Forest (RF) classifier that identifies signal intervals associated with fetal QRS complexes. The method was evaluated on a publicly available dataset, the PhysioNet 2013 challenge. A set of 50 maternal AECG records was used to train the RF classifier. The evaluation was carried out on signal intervals extracted from an additional 25 maternal AECG records. The proposed method yielded an 83.77% accuracy in the fetal QRS complex classification task.
Fetal ECG extraction using independent component analysis by Jade approach
Jader Giraldo-Guzmán, Sonia H. Contreras-Ortiz, Gloria Isabel Bautista Lasprilla, et al.
Fetal ECG monitoring is a useful method to assess fetal health and detect abnormal conditions. In this paper we propose an approach to extract the fetal ECG from abdominal and chest signals using independent component analysis based on the joint approximate diagonalization of eigenmatrices (JADE) approach. The JADE approach avoids redundancy, which reduces matrix dimension and computational cost. The signals were filtered with a high-pass filter to eliminate low-frequency noise. Several levels of decomposition were tested until the fetal ECG was recognized in one of the separated source outputs. The proposed method shows fast and good performance.
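A hedged sketch of the separation step: high-pass filtering of the multi-channel recording followed by ICA. FastICA is used here as a stand-in because JADE is not available in scikit-learn; the sampling rate, channel count and signals are placeholders.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs = 500                                              # Hz (assumed)
X = np.random.randn(8, 60 * fs)                       # 8 abdominal/chest channels x 60 s (placeholder)

# High-pass filter to remove baseline wander and low-frequency noise (1 Hz cutoff assumed)
b, a = butter(4, 1.0 / (fs / 2), btype="highpass")
Xf = filtfilt(b, a, X, axis=1)

# Blind source separation: one of the recovered sources may contain the fetal ECG
ica = FastICA(n_components=8, random_state=0)
sources = ica.fit_transform(Xf.T).T                   # shape: (n_sources, n_samples)
```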
MRI Phantoms
Manufacture and characterization of breast tissue phantoms for emulating benign lesions
J. A. Villamarín, M. A. Rojas, O. M. Potosi, et al.
Phantom elaboration has become a very important field of study during the last decades due to its applications in medicine. These objects are capable of acoustically emulating or mimicking biological tissues, in which parameters like speed of sound (SOS) and attenuation are successfully attained. However, such materials are expensive depending on their characteristics (USD $460.00 - $6000.00), and it is difficult to obtain precise measurements because of their composition. This paper presents the elaboration and characterization of low-cost (~ USD $25.00) breast phantoms which emulate histological normality and pathological conditions in order to support algorithm calibration procedures in imaging diagnosis. Quantitative ultrasound (QUS) was applied to estimate SOS and attenuation values for breast tissue (background) and benign lesions (fibroadenoma and cysts). Results showed SOS and attenuation values for the background between 1410 - 1450 m/s and 0.40 - 0.55 dB/cm at 1 MHz, respectively. On the other hand, the SOS obtained for the lesions ranges from 1350 to 1700 m/s, with attenuation values between 0.50 - 1.80 dB/cm at 1 MHz. Finally, the fabricated phantoms allowed for obtaining ultrasonograms comparable with real ones, whose acoustic parameters are in agreement with those reported in the literature.
Design and implementation of a MRI compatible and dynamic phantom simulating the motion of a tumor in the liver under the breathing cycle
Arnould Geelhand de Merxem, Vianney Lechien, Tanguy Thibault, et al.
In the context of cancer treatment by proton therapy, research is carried out on the use of magnetic resonance imaging (MRI) to perform real-time tracking of tumors during irradiation. The purpose of this combination is to reduce the irradiation of healthy tissues surrounding the tumor, while using a non-ionizing imaging method. Therefore, it is necessary to validate the tracking algorithms on real-time MRI sequences by using physical simulators, i.e. phantoms. Our phantom is a device representing a liver with hepatocellular carcinoma, a stomach and a pancreas, close to the anatomy and the magnetic properties of the human body and animated by a motion similar to the one induced by respiration. Many anatomical or mobile phantoms already exist, but the purpose here is to combine a reliable representation of the abdominal organs with the creation and evaluation of a programmable movement in the same device, which makes it unique. The phantom is composed of surrogate organs made of CAGN gels. These organs are placed in a transparent box filled with water and attached to an elastic membrane. A programmable electro-pneumatic system creates a movement, similar to a human diaphragm, by inflating and deflating the membrane. The average relaxation times of the synthetic organs lie within ranges corresponding to human organ values (T1 = [458.7-1660] ms, T2 = [39.3-89.1] ms). The displacement of the tumor is tracked in real time by a camera inside the MRI. The amplitude of the movement varies from 12.8 to 20.1 mm for a periodic and repeatable movement. Irregular breathing patterns can be created with a maximum amplitude of 40 mm.
Construction of mammography phantoms with a 3D printer and tested with a TIMEPIX system
J. S. Calderón-García, G. A. Roque, C. A. Ávila
We present a new mammography phantom made of hydroxyapatite crystals of different sizes and shapes, emulating anthropomorphic microcalcifications, which we locate at different depths of a PMMA embedding material. The aim of the presented phantom is to address some issues of the standard commercial ones used for comparing 3D vs. 2D mammography systems. We present X-ray images, taken under the same conditions, for both a commercial phantom and the newly proposed phantom, and compare the signal-to-noise ratios (SNR) obtained in both cases. This phantom has been constructed to be easily assembled in different configurations to emulate modified features that might be of medical interest.
Feasibility study of a TIMEPIX detector for mammography applications
Carlos A. Ávila, Luis M. Mendoza, Gerardo A. Roque, et al.
We present a comparison study of two X-ray systems for mammography imaging: a SELENIA clinical system and a TIMEPIX-based system. The aim of the study is to determine the capability of a TIMEPIX detector for mammography applications. We first compare the signal-to-noise ratio (SNR) of X-ray images of Al2O3 spheres with diameters of 0.16 mm, 0.24 mm and 0.32 mm, from a commercial mammography accreditation phantom (CIRS015), obtained with each system. Then, we make a similar comparison for a second phantom built with hydroxyapatite crystals of different morphology and sizes ranging between 0.15 mm and 0.83 mm, which are embedded within the same block of PMMA as the CIRS015 phantom. Our study allows us to determine the minimum detectable size of Al2O3 spheres to be on the order of 240 μm, with 33% lower SNR for the TIMEPIX system as compared to the SELENIA system. When comparing the images of the hydroxyapatite crystals from both systems, the minimum size observed is about 300 μm, with 23% lower SNR for TIMEPIX.
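A minimal sketch of a region-based SNR comparison of the kind used above. The SNR definition (object contrast over background noise) and the mask names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def roi_snr(image, object_mask, background_mask):
    """SNR = (mean in object ROI - mean in background ROI) / std of background ROI."""
    contrast = image[object_mask].mean() - image[background_mask].mean()
    return contrast / image[background_mask].std()

# usage (hypothetical arrays): roi_snr(xray_image, sphere_mask, pmma_mask)
```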
Image Processing
Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation
José Ignacio Orlando, Marcos Fracchia, Valeria del Río, et al.
Several ophthalmological and systemic diseases manifest through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their difficulty in dealing with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low-resolution data sets. Our experiments on high-resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
Recognition of skin melanoma through dermoscopic image analysis
Melanoma skin cancer diagnosis can be challenging due to the similarities of the early-stage symptoms with regular moles. Standardized visual parameters can be determined and characterized to suspect a melanoma cancer type. The automation of this diagnosis could have an impact in the medical field by providing a tool to support specialists with high accuracy. The objective of this study is to develop an algorithm trained to distinguish a highly probable melanoma from a non-dangerous mole through the segmentation and classification of dermoscopic mole images. We evaluate our approach on the dataset provided by the International Skin Imaging Collaboration used in the International Challenge Skin Lesion Analysis Towards Melanoma Detection. For the segmentation task, we apply a preprocessing algorithm and use Otsu's thresholding in the best-performing color space; the average Jaccard index on the test dataset is 70.05%. For the subsequent classification stage, we use joint histograms in the YCbCr color space, an RBF Gaussian SVM trained with five features concerning circularity and irregularity of the segmented lesion, and Gray Level Co-occurrence Matrix features for texture analysis. These features are combined to obtain an average classification accuracy of 63.3% on the test dataset.
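An illustrative sketch (not the challenge submission) of the two stages described above: Otsu segmentation of the lesion, followed by a shape feature and GLCM texture features fed to an RBF SVM. Image loading, feature selection and labels are assumptions.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import perimeter
from skimage.feature import graycomatrix, graycoprops   # spelled `greycomatrix` in older scikit-image
from sklearn.svm import SVC

def lesion_features(gray_uint8):
    # Segmentation: lesions are assumed darker than the surrounding skin
    mask = gray_uint8 < threshold_otsu(gray_uint8)

    # Shape feature: circularity = 4*pi*area / perimeter^2
    area = mask.sum()
    circ = 4 * np.pi * area / (perimeter(mask) ** 2 + 1e-12)

    # GLCM texture features on the full grayscale patch
    glcm = graycomatrix(gray_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast").mean()
    homogeneity = graycoprops(glcm, "homogeneity").mean()
    return np.array([circ, contrast, homogeneity])

# X = np.vstack([lesion_features(img) for img in images]); y = labels (0 = benign, 1 = melanoma)
# clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
```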
Learning to segment mouse embryo cells
Juan León, Alejandro Pardo, Pablo Arbeláez
Recent advances in microscopy enable the capture of temporal sequences during cell development stages. However, the study of such sequences is a complex and time-consuming task. In this paper we propose an automatic strategy to address the problem of semantic and instance segmentation of mouse embryos using NYU's Mouse Embryo Tracking Database. We obtain our instance proposals as refined predictions from the generalized Hough transform, using prior knowledge of the embryos' locations and their current cell stage. We use two main approaches to learn the priors: hand-crafted features and automatically learned features. Our strategy increases the baseline Jaccard index from 0.12 up to 0.24 using hand-crafted features and to 0.28 using automatically learned ones.
Radiation Therapy Planning
Prostate cancer: computer-aided diagnosis on multiparametric MRI
Laura Marin, Daniel Racoceanu, Raphaele Renard Penna, et al.
Prostate cancer (PCa) is one of the most common cancers in men, and also the second most deadly cancer after lung cancer. There is increasing interest in active surveillance and minimally invasive focal therapies in PCa to avoid the morbidities associated with whole-gland therapy. Tumor volume represents an essential prognostic factor of PCa and the definition of the index lesion volume is critical for appropriate decision making, especially for image-guided focal treatment or in the case of active surveillance. Multi-parametric Magnetic Resonance Imaging (mp-MRI) is the modality of choice for the detection and localization of PCa foci. However, little has been published on mp-MRI accuracy in determining PCa volume, especially at 3T. There is insufficient evidence and no consensus to determine which of the methods for measuring volume is optimal.

The objective of this study concerns the elaboration of an algorithm for the automatic interpretation of mp-MRI. We determine the accuracy of the proposed method by comparing the prostate tumor volume issued from the automated volumetric mp-MRI measurements of the tumoral region with manual and semi-automated volumetric measurements performed by, and respectively with, radiologists. Information issued from whole-mount histopathology is used to validate the whole approach.
Quantification of dose uncertainties for the bladder in prostate cancer radiotherapy based on dominant eigenmodes
Richard Rios, Oscar Acosta, Caroline Lafond, et al.
In radiotherapy for prostate cancer, the planning dose for the bladder may be a poor surrogate of the actually delivered dose, as the bladder presents the largest inter-fraction shape variations during treatment. This paper presents PCA models as a virtual tool to estimate the dosimetric uncertainties for the bladder produced by motion and deformation between fractions. Our goal is to propose a methodology to determine the minimum number of modes required to quantify dose uncertainties of the bladder for motion/deformation models based on PCA. We trained individual PCA models using the bladder contours available from three patients with a planning computed tomography (CT) and on-treatment cone-beam CTs (CBCTs). Based on these models and via deformable image registration (DIR), we estimated two accumulated doses: first, an accumulated dose obtained by integrating the planning dose over the Gaussian probability distribution of the PCA model; and second, an accumulated dose obtained by simulating treatment courses via a Monte Carlo approach. We also computed a reference accumulated dose for each patient using his available images via DIR. Finally, we compared the planning dose with the three accumulated doses, and calculated local dose variability and dose-volume histogram uncertainties.
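A minimal sketch of a PCA motion/deformation model of this kind, assuming the bladder contours are already in point-to-point correspondence: each shape is a flattened vector of surface point coordinates, the dominant eigenmodes are the top principal components, and synthetic shapes are sampled from the Gaussian distribution over mode weights. Shapes and the number of retained modes are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

# shapes[k] = flattened (x, y, z) coordinates of N corresponding surface points for fraction k
shapes = np.random.rand(12, 3 * 500)                  # 12 on-treatment geometries (placeholder)

pca = PCA(n_components=3)                             # number of dominant modes to keep (assumed)
weights = pca.fit_transform(shapes)                   # per-fraction mode weights
explained = pca.explained_variance_ratio_             # fraction of shape variability per mode

# Monte Carlo sampling of a synthetic bladder geometry from the Gaussian model
b = np.random.normal(0.0, np.sqrt(pca.explained_variance_))
synthetic_shape = pca.mean_ + pca.components_.T @ b
```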
Improvement of Bragg peak shift estimation using dimensionality reduction techniques and predictive linear modeling
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The seemingly simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use, with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from the 3615 remaining scenarios. This model improved the BP shift estimation by an average of 63±19%, in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained with the learning model had lower prediction errors than those obtained with the PS method. The estimation accuracy ranged between 0.31 ± 0.22 mm and 1.84 ± 8.98 mm for the learning model, while for the PS method it ranged between 0.3 ± 0.25 mm and 20.71 ± 8.38 mm.
Towards an integrative computational model for simulating tumor growth and response to radiation therapy
Carlos Sosa Marrero, Vivien Aubert, Nicolas Ciferri, et al.
Understanding the response to irradiation in cancer radiotherapy (RT) may help devise new strategies with improved tumor local control. Computational models may allow us to unravel the underlying radiosensitivity mechanisms intervening in the dose-response relationship. By using extensive simulations, a wide range of parameters may be evaluated, providing insights into tumor response and thus generating useful data to plan modified treatments. We propose in this paper a computational model of tumor growth and radiation response which allows simulating a whole RT protocol. Proliferation of tumor cells, the cell life-cycle, oxygen diffusion, radiosensitivity, RT response and resorption of killed cells were implemented in a multiscale framework. The model was developed in C++, using the Multi-formalism Modeling and Simulation Library (M2SL). Radiosensitivity parameters extracted from the literature enabled us to simulate a prostate cell tissue on a regular grid (voxel-wise). Histopathological specimens with different aggressiveness levels, extracted from patients after prostatectomy, were used to initialize the in silico simulations. Results on tumor growth exhibit a good agreement with data from in vitro studies. Moreover, a standard fractionation of 2 Gy/fraction with a total dose of 80 Gy, as in a real RT treatment, was applied with varying radiosensitivity and oxygen diffusion parameters. As expected, the high influence of these parameters was observed by measuring the percentage of surviving tumor cells after RT. This work paves the way to further models allowing the simulation of increased doses in modified hypofractionated schemes and the development of new patient-specific combined therapies.
Artificial Neural Network
Bone age detection via carpogram analysis using convolutional neural networks
Bone age assessment is a critical factor for determining delayed development in children, which can be a sign of pathologies such as endocrine diseases, growth abnormalities, and chromosomal, neurological, and congenital disorders, among others. In this paper we present BoneNet, a methodology to automatically assess skeletal maturity in pediatric patients based on convolutional neural networks. We train and evaluate our algorithm on a database of X-ray images provided by the Fundación Santa Fe de Bogotá hospital, comprising around 1500 images of patients between the ages of 1 and 18. We compare two different architectures for classifying the data in order to explore the generality of our method. To accomplish this, we define multiple binary age assessment problems, dividing the data by bone age and differentiating patients by gender. By exploring several parameters in this way, we develop BoneNet. Our approach is holistic, efficient, and modular, since specialists can use all the networks combined to determine a patient's skeletal maturity. BoneNet achieves over 90% accuracy for most of the critical age thresholds when classifying images as above or below a given age.
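A minimal PyTorch sketch of one such binary "above/below an age threshold" classifier is given below; the architecture, input size, and names are illustrative assumptions and do not reproduce BoneNet.

```python
# Minimal PyTorch sketch (illustrative, not BoneNet itself): one binary classifier
# deciding whether a carpogram belongs to a patient above or below a given age threshold.
import torch
import torch.nn as nn

class AgeThresholdNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(4),
        )
        self.classifier = nn.Linear(32 * 4 * 4, 2)  # over / under the age threshold

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# One training step on a dummy batch of grayscale hand X-rays (illustrative shapes).
model = AgeThresholdNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images = torch.randn(8, 1, 256, 256)          # 8 X-ray images
labels = torch.randint(0, 2, (8,))            # 0 = under threshold, 1 = over
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```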
Automated detection of lung nodules with three-dimensional convolutional neural networks
Gustavo Pérez, Pablo Arbeláez
Lung cancer is the cancer type with the highest mortality rate worldwide. It has been shown that early detection with computed tomography (CT) scans can reduce deaths caused by this disease. Manual detection of cancer nodules is costly and time-consuming. We present a general framework for the detection of nodules in lung CT images. Our method pre-processes a patient's CT with filtering and extracts the lungs from the entire volume using a previously calculated mask for each patient. From the extracted lungs, we perform a candidate generation stage using morphological operations, followed by the training of a three-dimensional convolutional neural network for feature representation and classification of the extracted candidates for false positive reduction. We perform experiments on the publicly available LIDC-IDRI dataset. Our candidate extraction approach effectively produces precise candidates, with a recall of 99.6%. In addition, the false positive reduction stage successfully classifies candidates and increases precision by a factor of 7.000.
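The candidate generation idea can be sketched with standard morphological operations from scipy.ndimage, as below; the HU threshold, structuring element, and lung mask are illustrative assumptions rather than the authors' settings.

```python
# Minimal sketch of a candidate generation stage for a lung CT volume:
# threshold inside the lung mask, clean up with a morphological opening,
# and extract connected components as nodule candidates. Thresholds are illustrative.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
ct_volume = rng.normal(-700, 300, size=(64, 128, 128))   # stand-in HU volume
lung_mask = np.ones_like(ct_volume, dtype=bool)          # stand-in precomputed lung mask

# Voxels denser than a (hypothetical) -400 HU threshold inside the lungs.
candidates = (ct_volume > -400) & lung_mask

# Morphological opening removes isolated voxels / thin vessel-like structures.
candidates = ndimage.binary_opening(candidates, structure=np.ones((2, 2, 2)))

# Connected components become candidate nodules; their centroids feed the 3D CNN.
labels, n_candidates = ndimage.label(candidates)
centroids = ndimage.center_of_mass(candidates, labels, range(1, n_candidates + 1))
print(n_candidates, centroids[:3])
```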
Characterization of physiological networks in sleep apnea patients using artificial neural networks for Granger causality computation
Jhon Cárdenas, Alvaro D. Orjuela-Cañón, Alexander Cerquera, et al.
Different studies have used Transfer Entropy (TE) and Granger Causality (GC) computation to quantify the interconnection between physiological systems. These methods have disadvantages in their parametrization and in the availability of analytic formulas to evaluate the significance of the results. Another drawback is related to the assumptions made about the distribution of the models generated from the data. In this work, the authors present a way to measure the causality connecting the Central Nervous System (CNS) and the Cardiac System (CS) in people diagnosed with obstructive sleep apnea syndrome (OSA), before and during treatment with continuous positive airway pressure (CPAP). For this purpose, artificial neural networks were used to obtain models for GC computation, based on time series of normalized powers calculated from electrocardiography (EKG) and electroencephalography (EEG) signals recorded in polysomnography (PSG) studies.
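A minimal sketch of this kind of neural-network-based Granger causality index is shown below: two MLP regressors predict the cardiac power series, one from its own past only and one also using the EEG past, and the log ratio of their residual variances serves as the causality measure. The series, lag order, and network sizes are illustrative.

```python
# Minimal sketch: Granger-causality-style index from EEG power to EKG power
# using two MLP predictors (restricted: own past only; full: own past + EEG past).
# Series, lag order and network sizes are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lag = 2000, 5
eeg_power = rng.normal(size=n)
ekg_power = np.convolve(eeg_power, np.ones(3) / 3, mode="same") + 0.5 * rng.normal(size=n)

def lagged(series, lag):
    # Row t holds series[t], ..., series[t+lag-1], the lag values before series[t+lag].
    return np.column_stack([series[i:len(series) - lag + i] for i in range(lag)])

X_own = lagged(ekg_power, lag)                       # past of the cardiac series
X_full = np.hstack([X_own, lagged(eeg_power, lag)])  # plus past of the EEG series
y = ekg_power[lag:]

restricted = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_own, y)
full = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_full, y)

var_r = np.var(y - restricted.predict(X_own))
var_f = np.var(y - full.predict(X_full))
gc_index = np.log(var_r / var_f)   # > 0 suggests EEG power helps predict EKG power
print(gc_index)
```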
Heart
Fusion of 4D echocardiography and cine cardiac magnetic resonance volumes using a salient spatio-temporal analysis
Accurate quantification of left (LV) and right ventricular (RV) function is important to support the evaluation, diagnosis, and prognosis of cardiac pathologies such as the cardiomyopathies. Currently, ultrasound is the most cost-effective examination, but this modality is highly noisy and operator dependent, and hence prone to errors. Fusion with other cardiac modalities may therefore provide complementary information and improve the analysis of specific pathologies such as cardiomyopathies. This paper proposes an automatic registration between two complementary modalities, 4D echocardiography and cine cardiac magnetic resonance images, by mapping both modalities to a common saliency space in which an optimal registration between them is estimated. The resulting transformation matrix is then applied to the MRI volume, which is superimposed on the 4D echocardiography. Manually selected landmarks in both modalities are used to evaluate the precision of the superimposition. Preliminary results on three evaluation cases show that the distance between these marked points and those estimated with the transformation is about 2 mm.
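The landmark-based evaluation step can be illustrated as follows: an estimated homogeneous transformation is applied to the MRI landmarks and the residual distances to the echocardiography landmarks are measured. The transformation matrix and point coordinates below are illustrative, not data from the study.

```python
# Minimal sketch of the evaluation step: apply an estimated rigid transformation
# to landmarks selected in the MRI volume and measure their distance (in mm)
# to the corresponding landmarks in the 4D echocardiography. Values are illustrative.
import numpy as np

# Hypothetical 4x4 homogeneous transform estimated in the common saliency space.
T = np.array([[1.0, 0.0, 0.0, 1.5],
              [0.0, 1.0, 0.0, -0.8],
              [0.0, 0.0, 1.0, 0.4],
              [0.0, 0.0, 0.0, 1.0]])

mri_landmarks = np.array([[10.0, 22.0, 5.0],
                          [14.0, 25.0, 7.0],
                          [18.0, 20.0, 9.0]])      # manually selected marks (mm)
echo_landmarks = np.array([[11.6, 21.1, 5.5],
                           [15.4, 24.3, 7.3],
                           [19.5, 19.2, 9.6]])     # corresponding marks (mm)

# Apply the transform in homogeneous coordinates and compute point-wise errors.
homogeneous = np.hstack([mri_landmarks, np.ones((len(mri_landmarks), 1))])
mapped = (T @ homogeneous.T).T[:, :3]
errors = np.linalg.norm(mapped - echo_landmarks, axis=1)
print(errors.mean())   # average landmark distance, analogous to the ~2 mm reported
```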
Detection of MRI artifacts produced by intrinsic heart motion using a saliency model
Cardiac Magnetic Resonance (CMR) imaging requires synchronization with the ECG to correct many types of noise. However, complex heart motion frequently produces displaced slices that have to be either ignored or manually corrected, since ECG correction is ineffective in this case. This work presents a novel methodology that detects motion artifacts in CMR using a saliency method that highlights the region where the heart chambers are located. Once the Region of Interest (RoI) is set, its center of gravity is determined for each slice composing the volume. The deviation of this center of gravity estimates the coherence between slices and is used to identify displaced slices. Validation was performed on distorted real images in which one slice was artificially misaligned with respect to the rest of the volume. The displaced slice is found with a recall of 84% and an F-score of 68%.
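A minimal sketch of the center-of-gravity criterion is shown below: the RoI center is computed per slice and slices whose center deviates strongly from the median are flagged. The saliency maps and the tolerance are illustrative assumptions.

```python
# Minimal sketch: flag a displaced CMR slice by tracking the per-slice center of
# gravity of the saliency-derived RoI and detecting outliers. Threshold is illustrative.
import numpy as np
from scipy import ndimage

# Stand-in saliency maps for a stack of 12 short-axis slices (higher = more salient).
saliency = np.zeros((12, 64, 64))
saliency[:, 24:40, 24:40] = 1.0
saliency[7] = np.roll(saliency[7], 10, axis=1)     # artificially displaced slice

# Center of gravity of the RoI in each slice.
centers = np.array([ndimage.center_of_mass(s) for s in saliency])

# Deviation of each slice's center from the median center across the stack.
deviation = np.linalg.norm(centers - np.median(centers, axis=0), axis=1)
displaced = np.where(deviation > 5.0)[0]           # hypothetical 5-pixel tolerance
print(displaced)                                   # -> [7]
```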
Acute effect of Vagus nerve stimulation parameters on cardiac chronotropic, inotropic, and dromotropic responses
David Ojeda, Virginie Le Rolle, Hector M. Romero-Ugalde, et al.
Vagus nerve stimulation (VNS) is an established therapy for drug-resistant epilepsy and depression, and is considered a potential therapy for other pathologies, including Heart Failure (HF) and inflammatory diseases. In the case of HF, several experimental studies on animals have shown an improvement in cardiac function and a reverse remodeling of the cardiac cavity when VNS is applied. However, recent clinical trials have not been able to reproduce the same response in humans. One of the hypotheses to explain this lack of response relates to the way in which stimulation parameters are defined. The combined effect of VNS parameters is still poorly known, especially in the case of VNS delivered synchronously with cardiac activity. In this paper, we propose a methodology to analyze the acute cardiovascular effects of VNS parameters individually, as well as their interactive effects. A Latin hypercube sampling method was applied to design a uniform experimental plan. Data gathered from this experimental plan were used to produce a Gaussian process regression (GPR) model in order to estimate unobserved VNS sequences. Finally, a Morris screening sensitivity analysis was applied to each obtained GPR model. Results highlight dominant effects of pulse current, pulse width, and number of pulses over frequency and delay and, more importantly, the degree of interaction between these parameters on the most important acute cardiovascular responses. In particular, strong interaction effects between current and pulse width were found. Similar sensitivity profiles were observed for chronotropic, dromotropic, and inotropic effects. These findings are of primary importance for the future development of closed-loop, personalized neuromodulation technologies.
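The design and surrogate-modeling steps can be sketched as follows with a Latin hypercube sample over hypothetical VNS parameter ranges and a Gaussian process regression fit; the ranges and the toy response function are assumptions, not the experimental data.

```python
# Minimal sketch of the experimental design and surrogate modeling steps:
# Latin hypercube sampling over VNS parameters, then a Gaussian process regression
# fit to the observed acute response. Ranges and the toy response are illustrative.
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

# Five VNS parameters: current (mA), pulse width (us), number of pulses, frequency (Hz), delay (ms).
lower = np.array([0.0, 100.0, 1.0, 5.0, 0.0])
upper = np.array([2.5, 500.0, 20.0, 50.0, 100.0])

sampler = qmc.LatinHypercube(d=5, seed=0)
X = qmc.scale(sampler.random(n=60), lower, upper)   # 60 uniformly spread VNS sequences

# Stand-in acute chronotropic response with a current x pulse-width interaction term.
rng = np.random.default_rng(0)
y = -0.02 * X[:, 0] * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 1, size=60)

kernel = ConstantKernel() * RBF(length_scale=np.ones(5))
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Predict an unobserved VNS sequence (values within the sampled ranges).
print(gpr.predict([[1.0, 300.0, 10.0, 20.0, 50.0]], return_std=True))
```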
Validation of a finite element method framework for cardiac mechanics applications
David Danan, Virginie Le Rolle, Arnaud Hubert, et al.
Modeling cardiac mechanics is a particularly challenging task, mainly because of the poor understanding of the underlying physiology, the lack of observability, and the complexity of the mechanical properties of myocardial tissues. The choice of cardiac mechanics solvers, in particular, poses several difficulties, notably due to the potential instability arising from the nonlinearities inherent in the large-deformation framework. Furthermore, verification of the obtained simulations is difficult because there are no analytic solutions for these kinds of problems. Hence, the objective of this work is to provide a quantitative verification of a cardiac mechanics implementation based on two published benchmark problems. The first problem consists of deforming a bar, whereas the second concerns the inflation of a truncated ellipsoid-shaped ventricle, both in the steady-state case. Simulations were obtained using the finite element software GETFEM++. Results were compared to the consensus solution published by 11 groups, and the proposed solutions were indistinguishable from it. The validation of the proposed mechanical model implementation is an important step toward proposing a global model of cardiac electro-mechanical activity.
Deep Learning and Deep Architectures
Automatic diabetic retinopathy classification
María A. Bravo, Pablo A. Arbeláez
Diabetic retinopathy (DR) is a disease in which the retina is damaged due to increased blood pressure in its small vessels. DR is the major cause of blindness among diabetics, and it has been shown that early diagnosis can play a major role in preventing visual loss and blindness. This work proposes a computer-based approach for the detection of DR in back-of-the-eye images using convolutional neural networks (CNNs). Our CNN uses deep architectures to classify Back-of-the-eye Retinal Photographs (BRP) into 5 stages of DR. Our method combines several preprocessed versions of the BRP images to obtain an ACA score of 50.5%. Furthermore, we explore subproblems of our main classification task by training a larger CNN.
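A rough sketch of a 5-stage classifier that averages predictions over several preprocessed versions of the same fundus photograph is given below; the backbone, preprocessing variants, and input sizes are illustrative and do not correspond to the authors' network.

```python
# Minimal sketch (not the authors' exact network): a 5-class DR classifier that
# averages predictions over several preprocessed versions of the same fundus image.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

model = models.resnet18(weights=None, num_classes=5)   # 5 DR stages, untrained backbone
model.eval()

# Hypothetical preprocessing variants (e.g. plain resize vs. contrast-adjusted).
preprocess_variants = [
    transforms.Compose([transforms.Resize((224, 224))]),
    transforms.Compose([transforms.Resize((224, 224)), transforms.ColorJitter(contrast=0.5)]),
]

image = torch.rand(3, 512, 512)   # stand-in fundus photograph (RGB tensor)

with torch.no_grad():
    probs = torch.stack([F.softmax(model(p(image).unsqueeze(0)), dim=1)
                         for p in preprocess_variants]).mean(dim=0)
print(probs.argmax(dim=1))        # predicted DR stage (0-4)
```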
Semantic knowledge for histopathological image analysis: from ontologies to processing portals and deep learning
Yannick L. Kergosien, Daniel Racoceanu
This article presents our vision of the next generation of challenges in computational/digital pathology. The key role of the domain ontology, developed in a sustainable manner (i.e. using reference checklists and protocols as living semantic repositories), opens the way to effective, sustainable traceability and relevance feedback for existing machine learning algorithms, which have proven very performant in recent digital pathology challenges (i.e. convolutional neural networks). Being able to work in an accessible web-service environment, with strictly controlled handling of intellectual property (image and data processing/analysis algorithms) and medical data/image confidentiality, is essential for the future. Among the web services involved in the proposed approach, living yellow pages for computational pathology seem particularly important for reaching operational awareness, validation, and feasibility. This is a promising route to the next generation of tools, able to bring more guidance to computer scientists and confidence to pathologists, toward effective and efficient daily use. In addition, consistent feedback and insights are likely to emerge in the near future from these sophisticated machine learning tools back to the pathologists, strengthening the interaction between the different actors of a sustainable biomedical ecosystem (patients, clinicians, biologists, engineers, scientists, etc.). Beyond going digital/computational, with virtual slide technology demanding new workflows, pathology must prepare for another coming revolution: semantic web technologies now enable the knowledge of experts to be stored in databases, shared through the Internet, and accessed by machines. Traceability, disambiguation of reports, quality monitoring, and interoperability between health centers are some of the associated benefits that pathologists have been seeking. However, major changes are also to be expected in the relation of human diagnosis to machine-based procedures. Improving on a former imaging platform, which used a local knowledge base and a reasoning engine to combine image processing modules into higher-level tasks, we propose a framework in which different actors of the histopathology imaging world can cooperate using web services, exchanging knowledge as well as imaging services, and in which the results of such collaborations on diagnosis-related tasks can be evaluated in international challenges such as those recently organized for mitosis detection, nuclear atypia, or tissue architecture in the context of cancer grading. This framework is likely to offer effective context guidance and traceability to deep learning approaches, with a promising perspective given by the multi-task learning (MTL) paradigm, distinguished by its applicability to several different learning algorithms, its non-reliance on specialized architectures, and the promising results demonstrated, in particular for the problem of weak supervision, an issue found when direct links from pathology terms in reports to corresponding regions within images are missing.
Combining morphometric features and convolutional networks fusion for glaucoma diagnosis
Oscar Perdomo, John Arevalo, Fabio A. González
Glaucoma is an eye condition that leads to loss of vision and blindness. The ophthalmoscopy exam evaluates the shape, color, and proportion between the optic disc and the physiologic cup, but the lack of agreement among experts remains the main diagnostic problem. The application of deep convolutional neural networks combined with the automatic extraction of features such as the cup-to-disc distance in the four quadrants, the perimeter, area, eccentricity, major radius, and minor radius of the optic disc and cup, together with all the ratios among these parameters, may help achieve better automatic grading of glaucoma. This paper presents a strategy that merges morphometric features and deep convolutional neural networks as a novel methodology to support glaucoma diagnosis in eye fundus images.
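The fusion strategy can be sketched as a network that concatenates a hand-crafted morphometric feature vector with CNN image features before the final classifier, as below; the layer sizes and the number of morphometric features are illustrative assumptions.

```python
# Minimal sketch of the fusion idea: concatenate hand-crafted morphometric features
# (cup-to-disc distances, areas, ratios, ...) with CNN features extracted from the
# fundus image before a final glaucoma classifier. Dimensions are illustrative.
import torch
import torch.nn as nn

class FusionGlaucomaNet(nn.Module):
    def __init__(self, n_morpho_features=20):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Sequential(
            nn.Linear(32 + n_morpho_features, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # glaucoma vs. healthy
        )

    def forward(self, image, morpho_features):
        fused = torch.cat([self.cnn(image), morpho_features], dim=1)
        return self.classifier(fused)

model = FusionGlaucomaNet()
images = torch.randn(4, 3, 224, 224)     # fundus images
morpho = torch.randn(4, 20)              # morphometric feature vectors
print(model(images, morpho).shape)       # -> torch.Size([4, 2])
```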
Medical Software Development
A robustness test of the braided device foreshortening algorithm
Raquel Kale Moyano, Hector Fernandez, Juan M. Macho, et al.
Different computational methods have recently been proposed to simulate the virtual deployment of a braided stent inside a patient's vasculature. These methods are primarily based on the segmentation of the region of interest to obtain local vessel morphology descriptors. The goal of this work is to evaluate the influence of segmentation quality on the method named "Braided Device Foreshortening" (BDF). METHODS: We used the 3DRA images of 10 patients with aneurysms (cases). The cases were segmented by applying a marching cubes algorithm with a broad range of thresholds in order to generate 10 surface models each. We selected a braided device and applied the BDF algorithm to each surface model. The range of computed flow diverter lengths for each case was obtained to calculate the variability of the method against the segmentation threshold values. RESULTS: The evaluation study over 10 clinical cases indicates that the final length of the deployed flow diverter in each vessel model is stable, yielding a maximum difference of 11.19% in vessel diameter and of 9.14% in the simulated stent length across the threshold values. The average coefficient of variation was 4.08%. CONCLUSION: A study evaluating how the segmentation threshold affects the simulated length of the deployed flow diverter (FD) was presented. The segmentation algorithm used for intracranial aneurysm 3D angiography images introduces only small variation in the resulting stent simulation.
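The variability measure itself is simple to reproduce: given the simulated flow diverter lengths obtained for one case across segmentation thresholds, the coefficient of variation and maximum relative difference can be computed as in the sketch below (the lengths shown are made up for illustration).

```python
# Minimal sketch: variability of the simulated flow diverter length for one case
# across segmentation thresholds, summarized as a coefficient of variation.
# Lengths are illustrative, not the paper's data.
import numpy as np

# Simulated FD lengths (mm) for 10 surface models of the same case,
# each generated with a different marching cubes threshold.
simulated_lengths = np.array([18.2, 18.5, 18.1, 18.9, 18.4, 18.6, 18.3, 18.8, 18.2, 18.7])

cv = simulated_lengths.std(ddof=1) / simulated_lengths.mean() * 100
max_diff = (simulated_lengths.max() - simulated_lengths.min()) / simulated_lengths.min() * 100
print(f"coefficient of variation: {cv:.2f}%  max relative difference: {max_diff:.2f}%")
```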
NecroQuant: quantitative assessment of radiological necrosis
Clinicians can now objectively quantify tumor necrosis from Hounsfield units and enhancement characteristics in multiphase contrast-enhanced CT imaging. NecroQuant has been designed to work as part of a radiomics pipeline. The software is a departure from the conventional qualitative assessment of tumor necrosis, as it provides users (radiologists and researchers) with a simple interface to precisely and interactively define and measure necrosis in contrast-enhanced CT images. Although the software is tested here on renal masses, it can be re-configured to assess tumor necrosis across a variety of tumors from different body sites, providing a generalized, open, portable, and extensible quantitative analysis platform that is widely applicable across cancer types for quantifying tumor necrosis.
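The underlying quantitative idea can be sketched as thresholding enhancement values inside a tumor mask and reporting the necrotic fraction, as below; the threshold and volumes are illustrative and do not reflect NecroQuant's actual definition of necrosis.

```python
# Minimal sketch of the quantitative idea behind such a tool: within a tumor mask on
# contrast-enhanced CT, label voxels with low enhancement as necrotic and report the
# necrotic fraction. The threshold is illustrative, not NecroQuant's actual definition.
import numpy as np

rng = np.random.default_rng(0)
ct_hu = rng.normal(80, 40, size=(32, 64, 64))        # stand-in post-contrast CT (HU)
tumor_mask = np.zeros_like(ct_hu, dtype=bool)
tumor_mask[10:22, 20:44, 20:44] = True               # stand-in tumor segmentation

necrosis_threshold_hu = 30.0                          # hypothetical cutoff
necrotic = tumor_mask & (ct_hu < necrosis_threshold_hu)

necrotic_fraction = necrotic.sum() / tumor_mask.sum()
print(f"necrotic fraction: {necrotic_fraction:.1%}")
```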
Open-source software platform for medical image segmentation applications
Segmenting 2D and 3D images is a crucial and challenging problem in medical image analysis. Although several image segmentation algorithms have been proposed for different applications, no universal method currently exists, and their use is usually limited when complex and multiple adjacent objects of interest must be detected. In addition, the continually increasing volume of medical imaging scans requires more efficient segmentation software design and highly usable applications. In this context, we present an extension of our previous segmentation framework that allows existing explicit deformable models to be combined in an efficient and transparent way, handling different segmentation strategies simultaneously and interacting with a graphical user interface (GUI). We present the object-oriented design and the general architecture, which consists of two layers: the GUI at the top and the processing core filters at the bottom. We apply the framework to different real-case medical image scenarios on publicly available datasets, including bladder and prostate segmentation in 2D MRI and heart segmentation in 3D CT. Our experiments on these concrete problems show that the framework facilitates complex, multi-object segmentation goals while providing a fast-prototyping, open-source segmentation tool.
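A minimal sketch of such a two-layer design is shown below: a processing core of composable segmentation filters sits behind a thin pipeline class that a GUI could drive. The class and method names are illustrative, not the framework's actual API.

```python
# Minimal sketch of a two-layer design of this kind: a processing core of composable
# segmentation "filters" behind a thin interface layer that a GUI could call.
# Class and method names are illustrative, not the framework's actual API.
from abc import ABC, abstractmethod
import numpy as np

class SegmentationFilter(ABC):
    """Bottom layer: one step of a segmentation strategy."""
    @abstractmethod
    def run(self, image: np.ndarray, contour: np.ndarray) -> np.ndarray: ...

class SmoothContour(SegmentationFilter):
    def run(self, image, contour):
        # Simple neighbor-averaging smoothing of an explicit contour.
        return (contour + np.roll(contour, 1, axis=0) + np.roll(contour, -1, axis=0)) / 3

class SegmentationPipeline:
    """Top layer entry point: chains filters; a GUI would hold one of these."""
    def __init__(self, filters):
        self.filters = filters
    def segment(self, image, initial_contour):
        contour = initial_contour
        for f in self.filters:
            contour = f.run(image, contour)
        return contour

image = np.zeros((128, 128))
initial = np.array([[64 + 20 * np.cos(t), 64 + 20 * np.sin(t)]
                    for t in np.linspace(0, 2 * np.pi, 50, endpoint=False)])
pipeline = SegmentationPipeline([SmoothContour()])
print(pipeline.segment(image, initial).shape)   # -> (50, 2)
```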
A quantitative reconstruction software suite for SPECT imaging
Mauro Namías, Robert Jeraj
Quantitative Single Photon Emission Computed Tomography (SPECT) imaging allows the measurement of activity concentrations of a given radiotracer in vivo. Although SPECT has usually been perceived as non-quantitative by the medical community, the introduction of accurate CT-based attenuation and scatter correction from hybrid SPECT/CT scanners has enabled SPECT systems to be as quantitative as Positron Emission Tomography (PET) systems. We implemented a software suite to reconstruct quantitative SPECT images from hybrid systems or from dedicated SPECT systems with a separate CT scanner. Attenuation, scatter, and collimator response corrections were included in an Ordered Subset Expectation Maximization (OSEM) algorithm, and a novel scatter fraction estimation technique was introduced. The SPECT/CT system was calibrated with a cylindrical phantom, and quantitative accuracy was assessed with an anthropomorphic phantom and a NEMA/IEC image quality phantom. Accurate activity measurements were achieved at the organ level. This software suite helps increase the quantitative accuracy of SPECT scanners.
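At the core of OSEM-type reconstruction is the multiplicative EM update; the sketch below shows a plain MLEM loop on a toy system matrix, omitting the attenuation, scatter, and collimator-response corrections (and the subset splitting) that the suite implements.

```python
# Minimal sketch of the MLEM update at the heart of OSEM-type reconstruction
# (attenuation, scatter and collimator-response corrections omitted).
# The toy system matrix and counts are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bins = 16, 32
A = rng.uniform(0, 1, size=(n_bins, n_pixels))        # toy system matrix
true_activity = rng.uniform(0, 10, size=n_pixels)
measured = rng.poisson(A @ true_activity)             # noisy projection data

x = np.ones(n_pixels)                                  # initial activity estimate
sensitivity = A.sum(axis=0)                            # A^T 1
for _ in range(50):                                    # MLEM iterations
    expected = A @ x
    ratio = measured / np.maximum(expected, 1e-12)
    x *= (A.T @ ratio) / sensitivity                   # multiplicative EM update

print(np.round(x, 2))
```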