Biomedical Optics & Medical Imaging

Biases from model assumptions in texture sub-cellular image segmentation

A supervised segmentation technique classifies pixels in sub-cellular microscopy images into biologically meaningful regions and quantifies the biases introduced by its model assumptions.
13 November 2012, SPIE Newsroom. DOI: 10.1117/2.1201211.004236

Actin is the most abundant protein in most multicellular animal cells. It forms a diverse array of structures, particularly filaments, that participate in important processes such as cell motility, division, and contraction. The location and structure of the actin filaments involved in cell motility have been studied at whole-cell spatial resolution. Studies at the sub-cellular level, however, are limited by the optical diffraction limit of light microscopes and by the destructive nature of imaging at resolutions finer than half the wavelength of visible light. Previous sub-cellular studies focused on the interaction of actin with myosin, another protein with which it often works in concert.1 Our work focuses specifically on actin structures and analyzes sub-cellular regions using confocal fluorescence microscopy images at 200 nanometer resolution.

Segmentation techniques partition images (in this case, of sub-cellular regions) into segments, or sets of pixels. The goal is to simplify the image and make it easier to analyze. Researchers have developed several sub-cellular segmentation techniques, many of which are based on the Gaussian mixture model (GMM) Bayes classifier.2–4 The GMM Bayes classifier assumes that cell features follow Gaussian probability density functions and are conditionally independent. Only a few studies have focused on how well cell features actually conform to these assumptions.5 Our objective is to analyze the model biases in these sub-cellular region segmentation techniques and thereby learn how the sub-cellular structures depart from the Gaussian assumptions.

The segmentation technique developed in this work is outlined in Figure 1. We used the gray-level co-occurrence matrix (GLCM) as a texture filter, where texture is the pattern of actin filaments in the sub-cellular regions. We partitioned images into hexagonal regions for spatial isotropy. Each hexagon was characterized by a vector of 15 features describing intensity (mean, mode, standard deviation, third through sixth moments, principal axes ratio, principal axes angle, and entropy), texture (directionality angle, contrast, correlation, and energy, extracted from the GLCM), and geometry (distance from the known cell border). We estimated the probabilities for GMM Bayes classification from the features of the training data, and computed the region-based classification accuracy on the remaining data by fourfold cross-validation.
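To illustrate the per-region feature-extraction step, the sketch below computes a handful of the intensity and GLCM texture features described above for a single image region, using NumPy, SciPy, and scikit-image. It is a minimal approximation under stated assumptions, not the authors' implementation: a square patch stands in for a hexagon, only a subset of the 15 features is shown, and the helper name region_features is illustrative.

# Minimal sketch of per-region feature extraction (illustrative; not the authors' code).
# Assumes NumPy, SciPy, and scikit-image >= 0.19 (graycomatrix/graycoprops).
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def region_features(patch, levels=64):
    """Compute a subset of the intensity and GLCM texture features for one
    image region (here a square patch standing in for a hexagon)."""
    pixels = patch.ravel().astype(float)

    # Intensity statistics: mean, standard deviation, higher central moments,
    # and histogram entropy.
    counts, _ = np.histogram(pixels, bins=levels)
    p = counts / counts.sum()
    p = p[p > 0]
    intensity = {
        "mean": pixels.mean(),
        "std": pixels.std(),
        "third_moment": stats.moment(pixels, moment=3),
        "fourth_moment": stats.moment(pixels, moment=4),
        "entropy": -np.sum(p * np.log2(p)),
    }

    # GLCM texture features: quantize the patch, build the co-occurrence
    # matrix over four directions, then read off contrast, correlation, energy.
    lo, hi = pixels.min(), pixels.max()
    quantized = np.floor((patch - lo) / (hi - lo + 1e-9) * levels).astype(np.uint8)
    glcm = graycomatrix(quantized, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    texture = {prop: graycoprops(glcm, prop).mean()
               for prop in ("contrast", "correlation", "energy")}

    return {**intensity, **texture}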


Figure 1. Overview of the segmentation algorithm for biologically meaningful image representation. GMM: Gaussian Mixture Model.

To explore how well cell features conform to the GMM assumptions, we evaluated feature normality and conditional independence. Our preliminary results showed that the majority of features do not follow normal distributions and that many are conditionally dependent (data not reported). Both results confirm the importance of evaluating GMM outcomes with respect to how well the features conform to the GMM assumptions.
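As a rough illustration of such a normality screen, the fragment below estimates the fraction of class-conditional feature distributions that pass a Kolmogorov-Smirnov test against a fitted Gaussian at the 90% confidence level; the location (t-test) and variation (chi-square) checks reported in Table 1 follow the same counting pattern. The array shapes and the helper name fraction_normal are assumptions for illustration, not the authors' code.

# Sketch of the feature-normality screen (illustrative; not the authors' code).
import numpy as np
from scipy import stats

def fraction_normal(features, labels, alpha=0.10):
    """Fraction of (class, feature) pairs whose values are consistent with a
    fitted Gaussian, by a one-sample Kolmogorov-Smirnov test at 90% confidence.

    features : (n_regions, n_features) array of per-hexagon feature values
    labels   : (n_regions,) array of reference region labels
    """
    passed, total = 0, 0
    for cls in np.unique(labels):
        block = features[labels == cls]
        for j in range(block.shape[1]):
            x = block[:, j]
            mu, sigma = x.mean(), x.std(ddof=1)
            if sigma == 0:
                continue  # degenerate feature; cannot test
            # Strictly, testing against fitted parameters calls for a
            # Lilliefors-type correction; omitted in this sketch.
            _, p_value = stats.kstest(x, "norm", args=(mu, sigma))
            passed += p_value > alpha  # fail to reject normality
            total += 1
    return passed / total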

We also estimated classification accuracy as a function of the number of GMM components on two sub-cellular image sets containing 88 and 70 fibroblast cells, respectively. The cells were placed on either a soft or a stiff extracellular matrix (ECM) and imaged one hour after seeding using confocal fluorescence microscopy. Each cell was stained for actin, myosin, and focal adhesions, and an IR channel was used to separate the cell from the background. Segmentation performance was evaluated against three-region reference segmentations obtained by expert visual inspection. The actin channel and segmentation images are shown in Figure 2. The GMM Bayes classifiers were applied only to the actin images because of the biological focus on the spatial distribution of actin. For both ECM conditions, GMM classification accuracy was quantified with respect to the number of GMM components using hexagons with a 127-pixel area (see Figure 3). The accuracy evaluation included a 10-pixel-wide background band around the cell borders (region 4; see Figure 2).
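A minimal sketch of how accuracy can be charted against the number of GMM components follows, assuming scikit-learn's GaussianMixture for the class-conditional densities and the fourfold cross-validation described above. It captures the approach only in broad strokes: the hexagonal tessellation, the exact feature set, and the helper name gmm_bayes_accuracy are assumptions, and hyperparameters are illustrative.

# Sketch of a GMM Bayes classifier and its accuracy versus the number of
# mixture components (illustrative; not the authors' implementation).
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import StratifiedKFold

def gmm_bayes_accuracy(X, y, n_components, n_splits=4, seed=0):
    """Mean cross-validated accuracy of a Bayes classifier whose
    class-conditional densities are Gaussian mixtures with n_components."""
    classes = np.unique(y)
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    scores = []
    for train, test in cv.split(X, y):
        log_post = np.zeros((len(test), len(classes)))
        for k, cls in enumerate(classes):
            Xc = X[train][y[train] == cls]
            gmm = GaussianMixture(n_components=n_components,
                                  covariance_type="full",
                                  random_state=seed).fit(Xc)
            prior = len(Xc) / len(train)
            # Log posterior up to a constant: log p(x | class) + log p(class).
            log_post[:, k] = gmm.score_samples(X[test]) + np.log(prior)
        predicted = classes[np.argmax(log_post, axis=1)]
        scores.append(np.mean(predicted == y[test]))
    return float(np.mean(scores))

# Example sweep over the number of mixture components:
# accuracies = {m: gmm_bayes_accuracy(X, y, m) for m in range(1, 11)}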


Figure 2. Actin-stained cells and corresponding segmentations for (a) stiff and (b) soft extracellular matrix (ECM).

Figure 3. Classification accuracy versus the number of GMM components. The variation in classification accuracy stays within 0.06 for the stiff ECM and 0.02 for the soft ECM over a large range of GMM component numbers, implying only a small bias with respect to the number of components in this case.
Table 1. Normality tests at the 90% confidence level. The values give the ratio of features used by the classifiers that satisfy each test.

            Location (t-test)   Variation (chi-square test)   Distribution (K-S test)
Stiff ECM   0.013258            0.454545                      0.551705
Soft ECM    0.005714            0.363571                      0.485952

Our segmentation results enabled quantification of some biases introduced in GMM Bayes-based segmentation and provided a better understanding of the role of actin at the sub-cellular level. In this case the variation of classification accuracy with the number of GMM components is reasonably low, probably due to careful feature specification and to image properties. However, we expect that significantly higher biases will be introduced by non-conformity to cellular model assumptions in general. We intend to explore new techniques to quantify how much the accuracy of segmentation techniques suffers due to mismatches between the model assumptions and the real features of cells. This will in turn lead to improved feature design and selection techniques for image segmentation with applications to cell biology.

This work has been supported by the National Institute of Standards and Technology (NIST). We would like to acknowledge the Cell Systems Science Group, Biochemical Science Division, at NIST for providing the data, and the team members of the Computational Science in Biological Metrology project at NIST for providing invaluable inputs to our work.


Antonio Cardone
University of Maryland Institute for Advanced Computer Studies
and NIST
Gaithersburg, MD

Antonio Cardone joined NIST in 2005 and the University of Maryland Institute for Advanced Computer Studies in 2011. His research interests are image segmentation and tracking, computational geometry, and molecular dynamics.

Julien Amelot, Ya-Shian Li-Baboud, Mary Brady, Peter Bajcsy
NIST
Gaithersburg, MD

Julien Amelot joined NIST in 2008, where he focuses on time synchronization for the smart grid, as well as computer science bio-metrology. His areas of interest include robotics, artificial intelligence, computer vision, and neuroscience.

Ya-Shian Li-Baboud has been a computer scientist at NIST since 2001, serving as principal investigator for a variety of data quality projects in the electronics supply chain, semiconductor manufacturing, and smart grid areas. Her current research interests include data quality, computational biology, computer vision, and machine learning.

Mary Brady is the manager of the Information Systems Group in NIST's Information Technology Laboratory. The group is focused on developing measurements, standards, and underlying technologies that foster innovation throughout the information life cycle from collection and analysis to sharing and preservation.

Peter Bajcsy's research encompasses large-scale image-based analyses and syntheses using mathematical, statistical, and computational models, while leveraging computer science fields such as image processing, machine learning, computer vision, and pattern recognition. He has authored more than 21 papers in peer-reviewed journals and co-authored eight books or book chapters, and close to 100 conference papers.


References:
1. J. Martineau, R. Mokashi, D. Chapman, M. A. Grasso, M. Brady, Y. Yesha, Y. Yesha, A. Cardone, A. A. Dima, Subcellular feature detection and automated extraction of collocalized actin and myosin regions, Int'l Health Inf. Symp. 1, p. 399-408, 2012.
2. J. Gu, J. Chen, Q. Zhou, H. Zhang, Gaussian mixture model of texture for extracting residential area from high-resolution remotely sensed imagery, ISPRS Wrkshp. Updating Geo-spatial Databases Imagery/5th ISPRS Wrkshp. DMGISs, p. 157-162, 2004.
3. H. Permuter, J. Francos, I. Jermyn, A study of Gaussian mixture models of color and texture features for image classification and segmentation, Pattern Recognit. 39(4), p. 695-706, 2006.
4. G. Gimel'farb, Supervised texture segmentation by maximising conditional likelihood, Tech. Rep. CITR-TR-99, Computer Science Department, University of Auckland, 2001.
5. X. Shi, R. Manduchi, A study on Bayes feature fusion for image classification, Proc. Comput. Vision Pattern Recognit. Conf. 8, p. 95, 2003.