
Biomedical Optics & Medical Imaging

A computer aid for assessing breast masses on ultrasound images

Preliminary results from an automated system for analyzing breast lesions suggest that it performs comparably to a breast radiologist.
5 March 2008, SPIE Newsroom. DOI: 10.1117/2.1200802.1037

Studies have shown that computer-aided diagnosis (CAD) could assist breast radiologists by providing an objective and reproducible second opinion.1 Ultrasonography has been shown to be an effective modality for breast cancer diagnosis.2 Accordingly, CAD systems have been developed to characterize breast masses on ultrasound images as malignant or benign.3,4 In most CAD systems, lesion segmentation (separating the suspicious area from the surrounding tissue) is the first step in the assessment process. But in ultrasound images of breast masses, speckle, posterior acoustic shadowing, heterogeneous image intensity inside the mass, indistinct mass boundaries, and line structures caused by reverberation make segmentation difficult. We previously reported that an active contour (AC) model can be trained to delineate the boundary, but the success of the AC model depends strongly on a properly estimated initial contour.4

In our current study, we propose an AC method that automatically calculates an initial contour for breast masses on ultrasound images. We compare the new results with manual delineation by two expert radiologists in terms of the agreement between the segmented areas and accuracy of classification using features extracted from the segmented masses.

Figure 1(a–j) shows the segmentation process. Given a region of interest (ROI) image that contains the mass approximately at the center (b), we multiply the ROI by an inverse Gaussian constraint function and preprocess the result using a mean shift filter (c). Pixel-by-pixel clustering followed by morphological operations and object selection is used to separate a preliminary mass region from the background (d). If the result of this preliminary mass segmentation indicates that posterior shadowing exists in the ROI, we repeat the clustering after processing the ROI image with a second mask to improve the separation between the mass region and the shadowing (e). The preliminary mass region is refined with a region-growing algorithm (f)5 whose output is used as the initialization for the AC segmentation algorithm. The AC result (g) is examined using a criterion based on the image intensity gradient along the contour. The vertices of the segments that do not meet the criterion are adjusted by minimizing an energy function. The AC segmentation algorithm is applied a second time using the adjusted contour as initialization (h). The average intensity gradients along the mass boundaries segmented by the first and second AC runs are compared, and the one with the higher average is chosen as the final result (i).
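The early steps of this pipeline, constraining the ROI, smoothing, clustering, and selecting the central object, can be sketched as follows. This is a hypothetical re-implementation: the exact form of the inverse Gaussian constraint, the mean shift parameters, and the clustering method are not given in this article, so plausible stand-ins are used and labeled as such.

```python
import numpy as np
from scipy import ndimage

def preliminary_mass_mask(roi, sigma_frac=0.5):
    """Sketch of steps (c)-(d): constrain, smooth, cluster, select center object.

    All parameter choices here are assumptions, not the published values.
    """
    h, w = roi.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r2 = ((yy - cy) / (sigma_frac * h)) ** 2 + ((xx - cx) / (sigma_frac * w)) ** 2
    # Assumed form of the inverse Gaussian constraint: brighten the periphery
    # so the hypoechoic (dark) mass near the ROI center stands out.
    constrained = roi * (2.0 - np.exp(-r2 / 2.0))
    # Stand-in for the mean shift filter: plain Gaussian smoothing.
    smoothed = ndimage.gaussian_filter(constrained, sigma=2.0)
    # Two-class clustering by a global threshold; the dark class is the mass.
    mask = smoothed < smoothed.mean()
    # Morphological cleanup, then object selection: keep the connected
    # component containing the ROI center, where the mass is assumed to sit.
    mask = ndimage.binary_opening(mask, iterations=2)
    labels, n = ndimage.label(mask)
    center_label = labels[int(cy), int(cx)]
    if center_label == 0 and n > 0:
        # Fall back to the largest object if the center pixel is background.
        sizes = ndimage.sum(mask, labels, range(1, n + 1))
        center_label = int(np.argmax(sizes)) + 1
    return labels == center_label
```

The returned binary mask corresponds to the preliminary region in (d), which the pipeline then refines by region growing and the two AC passes.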

Figure 1. An example illustrates the segmentation process. (a) Original image, (b) ROI image, (c) multiplied with inverse Gaussian constraint function and preprocessed by mean shift filter, (d) result of clustering followed by morphological operations and object selection, (e) automated checking indicating no posterior shadowing in the ROI (i.e., the preliminary segmented boundary remains unchanged), (f) result of growing the region, (g) result of first AC segmentation, (h) result of second AC segmentation, (i) final segmented boundary (second AC is selected), (j) segmentation by radiologist R1. AC: active contour. ROI: region of interest.

To investigate the effect of segmentation on mass characterization, we extracted a first feature space consisting of texture, width-to-height, and posterior shadowing features based on computer segmentation, and a second feature space based on radiologist segmentation. A linear discriminant analysis classifier using stepwise feature selection was trained and tested using a leave-one-case-out method.
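The classification step can be illustrated with a minimal Fisher linear discriminant and a leave-one-out loop. This sketch leaves out one sample at a time and omits the stepwise feature selection; the study actually left out all images of one case together, so treat this as a simplified illustration rather than the published procedure.

```python
import numpy as np

def lda_weights(X, y, reg=1e-6):
    """Fisher linear discriminant: w = S_w^{-1} (mu1 - mu0).

    A small ridge term keeps the within-class scatter matrix invertible.
    """
    X0, X1 = X[y == 0], X[y == 1]
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    Sw += reg * np.eye(X.shape[1])
    return np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

def leave_one_out_scores(X, y):
    """Train on all samples but one, score the held-out sample.

    Simplified stand-in for the leave-one-case-out scheme in the article,
    which holds out every image of a mass at once.
    """
    scores = np.empty(len(y), dtype=float)
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        w = lda_weights(X[keep], y[keep])
        scores[i] = X[i] @ w
    return scores
```

The held-out scores are what feed the ROC analysis: sweeping a threshold over them traces out the ROC curve whose area Az is reported below.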

Our data set consisted of 509 ultrasound images from 258 masses (107 malignant and 151 benign). Two radiologists (R1 and R2) independently marked the ROI and manually segmented the mass. The ROI provided by R1 was used for computer segmentation, and the segmentation result of R1 was used as the reference. To evaluate performance, we used the area overlap measure (AOM), defined as the ratio of the intersection to the union of the areas of the evaluated segmented boundary and its reference counterpart. At 0.77±0.10 (mean±s.d.), the AOM for the computer segmentation was significantly higher (p = 0.002) than that for R2 (0.75±0.18). For case-based classification, the areas (Az) under the test receiver operating characteristic (ROC) curves were 0.89±0.02 and 0.88±0.02 for the feature spaces based on computer and R2 segmentation, respectively. The difference between the test Az values did not reach statistical significance (p = 0.68). The Az values for R1's and R2's visual assessment of the masses were 0.89±0.02 and 0.86±0.02.
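The AOM as defined above is the Jaccard index of two binary masks, and computing it is a one-liner:

```python
import numpy as np

def area_overlap_measure(a, b):
    """AOM = |A ∩ B| / |A ∪ B| for two boolean masks (the Jaccard index)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0
```

For example, two 4×2 rectangles that share one column of a 4×4 grid overlap in 4 pixels out of a 12-pixel union, giving an AOM of 1/3; identical masks give 1.0.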

Our preliminary results indicate that the CAD system can achieve a performance similar to that of an experienced breast radiologist and thus may be useful as a second opinion to radiologists in evaluating breast masses on ultrasound images. The agreement of the delineated areas between the computerized method and R1 was higher than that between R1 and R2. One reason may be that the ROI extracted by R1 was used for computer segmentation. Work is under way to develop a method that does not rely on a predefined ROI for that part of the assessment.

This work is supported by the US Public Health Service grant CA 118305.

Jing Cui, Berkman Sahiner, Heang-Ping Chan, Alexis Nees, Chintana Paramagul, Lubomir Hadjiiski, Chuan Zhou, Jiazheng Shi 
Department of Radiology
University of Michigan
Ann Arbor, MI

Jing Cui received her PhD degree in electrical engineering from the University of Virginia in May 2006. She is currently a research postdoctoral fellow in the Department of Radiology at the University of Michigan. Her research interests include computer-aided diagnosis, pattern recognition, image processing, and video tracking.