Prostate cancer is one of the most prevalent cancers among North American men. Magnetic resonance imaging (MRI) has shown higher accuracy than trans-rectal ultrasound (TRUS) in ascertaining the presence of the disease. Researchers are developing automated methods to accurately segment prostate cancer (i.e., outline tumors) for better diagnosis and prognostication. However, obtaining accurate MRI segmentations is challenging for both computer-based automated algorithms and human experts owing to large variations among patients. Moreover, most computer algorithms assume that the texture and intensity values of cancer and normal tissue in MR images are the same across all patients. This is not true in practice, and consequently the algorithms do not perform optimally.
The segmentation error introduced by interpatient variability has not previously been addressed. We decided to tackle this problem using a new iterative normalization algorithm.1 The algorithm incorporates a measurement called relative intensity. It mimics manual segmentation performed by human readers, who essentially compare the contrast between cancer and normal regions rather than considering actual intensity values.
We first apply min-max (MM) normalization to map the original intensity values from different MRI features into a comparable range. Then, we define the relative intensity value for each pixel by calculating the difference between the pixel's intensity and the mean intensity of the other class, and normalizing that difference by the mean intensity of the class to which the pixel belongs. For example, if pixel i belongs to cancer, the relative intensity is r_i = (x_i − μ_normal)/μ_cancer, where x_i is the MM-normalized intensity value, μ_normal represents the mean intensity of normal tissue, and μ_cancer the mean intensity of cancer tissue. Figure 1 shows the distribution of prostate MRI features for 30 subjects (data points are the mean tumor and normal pixel values for each subject). The intensity values clearly vary considerably among subjects. In the original MRI feature domain the normal and tumor pixels overlap—see Figure 1(a)—whereas in the normalized feature domain, the normal and tumor pixels are well separated: see Figure 1(b).
Figure 1. Distribution of prostate MRI features from 30 subjects. (a) Normal and tumor pixels overlap in the original image feature domain. (b) Normal and tumor pixels are well separated in the normalized feature domain. ADC: Apparent diffusion coefficient. T2: Image contrast weighting type.
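The two normalization steps can be sketched as follows. This is a minimal illustration with hypothetical intensity values; the helper names are ours, and the article states the relative-intensity formula only for cancer pixels, so the mirrored expression for normal pixels is our assumption.

```python
import numpy as np

def min_max(x):
    """Min-max (MM) normalization: map raw intensities into [0, 1]."""
    return (x - x.min()) / (x.max() - x.min())

def relative_intensity(x, labels):
    """Per-pixel relative intensity: the difference from the mean of the
    other class, divided by the mean of the pixel's own class."""
    mu_normal = x[labels == 0].mean()
    mu_cancer = x[labels == 1].mean()
    return np.where(labels == 1,
                    (x - mu_normal) / mu_cancer,   # cancer pixels
                    (x - mu_cancer) / mu_normal)   # normal pixels

# Hypothetical raw intensities for one subject (0 = normal, 1 = cancer).
labels = np.array([0, 0, 0, 1, 1, 1])
raw = np.array([10.0, 20.0, 30.0, 70.0, 80.0, 90.0])
r = relative_intensity(min_max(raw), labels)
# Tumor pixels map to positive values and normal pixels to negative
# ones, giving the kind of class separation illustrated in Figure 1(b).
```

Because each pixel is expressed relative to the subject's own class means, the transform discounts per-subject intensity offsets rather than relying on absolute values.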
Because the relative intensity of a pixel in one class (e.g., cancer tissue) depends on the mean intensity of the other class (e.g., normal tissue), computing it requires the very class memberships we are trying to estimate. Consequently, we have developed an iterative way of estimating relative intensity based on the current estimate of class membership (i.e., whether each pixel belongs to cancer or normal tissue). In other words, we estimate relative intensity and class membership in an alternating fashion.
The new iterative algorithm can be summarized as a series of six steps: (1) use MM normalization to map each MRI feature into a comparable range; (2) calculate the relative intensity values for the training data set, and train the classifier; (3) initialize the estimate of relative intensity for the test subject by assuming that all the pixels are normal; (4) use the classifier obtained from step (2) to classify the test data into two groups, normal and cancer, based on the current estimate of relative intensities; (5) update the estimate of relative intensities based on the current estimate of class membership; and (6) repeat steps (4) and (5) until convergence.
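The six steps above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' implementation: a midpoint threshold on relative intensity stands in for the article's SVM classifier, and the fallback class means used at the all-normal initialization (where the cancer class is still empty) are our placeholder, not specified in the article.

```python
import numpy as np

def relative_intensity(x, labels):
    """r_i = (x_i - mu_other) / mu_own per pixel. When the current label
    estimate leaves a class empty (e.g., the all-normal initialization),
    fall back to the extreme intensity as a placeholder mean."""
    mu_n = x[labels == 0].mean() if (labels == 0).any() else x.min()
    mu_c = x[labels == 1].mean() if (labels == 1).any() else x.max()
    return np.where(labels == 1, (x - mu_n) / mu_c, (x - mu_c) / mu_n)

def iterative_localization(train_x, train_y, test_x, max_iter=50):
    # Steps 1-2: features are assumed MM-normalized already; "train" a
    # classifier on the training relative intensities (a midpoint
    # threshold between class means, standing in for the SVM).
    r_train = relative_intensity(train_x, train_y)
    thresh = (r_train[train_y == 0].mean() + r_train[train_y == 1].mean()) / 2
    # Step 3: initialize by assuming every test pixel is normal.
    labels = np.zeros(test_x.shape, dtype=int)
    for _ in range(max_iter):
        r = relative_intensity(test_x, labels)   # step 5 (step 3 on pass 1)
        new_labels = (r > thresh).astype(int)    # step 4: classify pixels
        if np.array_equal(new_labels, labels):   # step 6: convergence check
            break
        labels = new_labels
    return labels

# Hypothetical, MM-normalized one-feature data for demonstration.
train_x = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
train_y = np.array([0, 0, 0, 1, 1, 1])
test_x = np.array([0.15, 0.25, 0.75, 0.85])
result = iterative_localization(train_x, train_y, test_x)
```

On this toy data the loop converges in two passes, refining the class means and relative intensities together; the article's version iterates an SVM over multispectral features in the same alternating fashion.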
We have successfully applied our method to prostate cancer localization using multispectral MRI. The Dice coefficient, which measures the similarity between the ground-truth and segmented tumor regions, improved by 17%. In addition, accuracy improved from 69% to 79% compared with results from the support vector machine (SVM), a widely used machine-learning algorithm for analyzing data and recognizing patterns.2 The improvement in accuracy is statistically significant (p≤0.05). Figure 2 shows a prostate tumor outlined by an experienced radiologist (a), segmented by SVM (b), and segmented by SVM with iterative normalization (c).
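For reference, the Dice coefficient reported above can be computed as follows (a generic implementation of the standard definition, not the authors' code; the masks are hypothetical):

```python
import numpy as np

def dice(seg, truth):
    """Dice coefficient: 2|A intersect B| / (|A| + |B|) between a binary
    segmentation mask and the ground-truth tumor mask."""
    seg, truth = seg.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(seg, truth).sum() / (seg.sum() + truth.sum())

truth = np.array([0, 0, 1, 1, 1, 0])   # radiologist's outline
seg   = np.array([0, 1, 1, 1, 0, 0])   # algorithm's output
score = dice(seg, truth)               # 2*2 / (3+3) = 0.666...
```

A score of 1 indicates perfect overlap with the radiologist's outline, and 0 indicates no overlap at all.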
Figure 2. Segmentation results. SVM: Support vector machine.
In summary, our goal is to incorporate subject-specific information to improve supervised machine-learning algorithms for automated prostate cancer localization using multiparametric MRI. Supervised machine-learning algorithms segment tumor regions based on a classifier obtained from the training data set and classify pixels of the test subject as tumor and normal. An implicit assumption underlying these methods is that all the ‘patients’—both training data and test subjects—have similar properties, e.g., intensity distributions of prostate MRI images, for tumor and normal regions. Unfortunately, this assumption does not apply in the real world and produces erroneous segmentation results. To address this issue, we propose a new iterative normalization approach that takes into account the inherent differences between patients. The prostate cancer segmentation performance of our method is superior to that of three other state-of-the-art segmentation algorithms: SVM, local active contour,3 and fuzzy Markov random field.4 Applying the new method to other medical image data and applying the proposed relative intensity measurements with alternative supervised segmentation approaches are topics we will consider in the future.
We would like to thank Masoom Haider of Mount Sinai Hospital in Canada for sharing the multispectral MRI data set.
University of California, San Francisco
San Francisco, CA
Xin Liu received her PhD from the Illinois Institute of Technology (2011). She is currently a post-doctoral fellow in the Department of Radiology at the University of California, San Francisco. Her research interests include medical image analysis, computer vision, and machine learning.
Imam Samil Yetik
Electrical and Computer Engineering
Illinois Institute of Technology
Imam Samil Yetik received his BS and MS from Bogazici University and Bilkent University, respectively (both in Turkey), and his PhD from the University of Illinois at Chicago. Following postdoctoral positions at the University of Illinois at Chicago and the University of California at Davis, he joined the Medical Imaging Research Center at the Illinois Institute of Technology as an assistant professor. His research interests include signal- and image-processing techniques with application to biomedicine. He is a senior member of IEEE and the author or coauthor of over 50 publications in his research area.
1. X. Liu, I. S. Yetik, Iterative normalization method for improved prostate cancer localization with multispectral magnetic resonance imaging, J. Electron. Imag. 21, p. 023008, 2012. doi:10.1117/1.JEI.21.2.023008
2. S. Ozer, Supervised and unsupervised methods for prostate cancer segmentation with multispectral MRI, Med. Phys. 37, pp. 1873-1883, 2010. doi:10.1118/1.3359459
4. X. Liu, Prostate cancer segmentation with simultaneous estimation of Markov random field parameters and class, IEEE Trans. Med. Imag. 28(6), pp. 906-915, 2009. doi:10.1109/TMI.2009.2012888