Proceedings Volume 1905

Biomedical Image Processing and Biomedical Visualization

Raj S. Acharya, Dmitry B. Goldgof
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 29 July 1993
Contents: 25 Sessions, 102 Papers, 0 Presentations
Conference: IS&T/SPIE's Symposium on Electronic Imaging: Science and Technology 1993
Volume Number: 1905

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Cardiac and Pulmonary Imaging: An Overview
  • Cardiac Image Segmentation and Registration I
  • Cardiac Image Segmentation and Registration II
  • Image Processing in Microscopy
  • Morphological Image Processing
  • Cardiac Motion Analysis I
  • Cardiac Motion Analysis II
  • Structural and Functional Imaging: Cardiac
  • Structural and Functional Imaging: Pulmonary
  • Biomedical Visualization I
  • Biomedical Visualization II
  • Use of Multiple Images in Detection of Asymmetry and Developing Densities
  • Digitization and Interpretation
  • Use of Wavelets for Mammogram Image Processing
  • Knowledge-Based Methods I
  • Knowledge-Based Methods II
  • Image Analysis I
  • Morphology and Intensity Map Techniques for Detecting Calcifications
  • Image Processing Techniques
  • Pattern Recognition and Classification I
  • Pattern Recognition and Classification II
  • Image Analysis II
  • Image Reconstruction I
  • Image Reconstruction II
  • Panel Discussion: Design of a Common Database for Research in Mammogram Image Analysis
Cardiac and Pulmonary Imaging: An Overview
Integration of multimodality images: success and future directions
Chin-Tu Chen
The concept of multi-modality image integration, in which images obtained from different sensors are co-registered spatially and various aspects of object characteristics revealed by individual imaging techniques are synergistically fused in order to yield new information, has received considerable attention in recent years. Initial success came in visualizing integrated brain images that show the overlay of physiological information from PET or SPECT with anatomical information from CT or MRI, providing new knowledge of correlates of brain function and brain structure that was previously difficult to access. Extension of this concept to cardiac and pulmonary imaging is still in its infancy. One additional difficulty in dealing with cardiac/pulmonary data sets is the issue of motion. However, some features in periodic motion may offer additional information for the purpose of spatial co-registration. In addition to visualization of the fused image data in 2-D and 3-D, future directions in the arena of image integration from multiple modalities include multi-modal image reconstruction, multi-modal image segmentation and feature extraction, and other image analysis tasks that incorporate information available from multiple sources.
Cardiac Image Segmentation and Registration I
Cardiac MR image segmentation using deformable models
Ajit Singh, Lorenz von Kurowski, Ming-Yee Chiu
We describe a deformable-model-based technique for cardiac MRI segmentation. The technique assumes that the data are available in the form of 2-D slices of the heart. An initial approximation of the boundary of the object of interest, say, the left ventricle, is specified in one of the slices via a user interface. The initial contour deforms to a contour with minimum energy, which is defined to be the correct ventricular boundary. This contour is then propagated to other slices, both in space and in time, to get the segmented volume at various instants in the cardiac cycle. This work is part of our ongoing effort on cardiac MR analysis. The segmentation algorithm discussed here is intended to be a preprocessing stage for our work in volume computation and cardiac wall motion analysis. We have tested the segmentation algorithm extensively on over 500 images, and our clinical collaborators have found the results to be acceptable, both qualitatively and quantitatively. Our system is being installed for use in routine clinical practice.
Model-based localization of the left ventricle from cardiac MR scans
Leiguang Gong, Ting Cui, Casimir A. Kulikowski, et al.
A new approach for the extraction of the myocardium from MR cardiac scans is presented. Segmentation and recognition of the left ventricle by a sequence of generic image processing operations is carried out in an order determined by a model of domain-specific relational, spatial, and morphological knowledge of the cardiac images. In particular, a new technique for constrained surface deformation by variable morphological dilation is introduced. These methods, incorporated in a prototype system called CARDIAN, have produced encouraging results in initial experiments with MR scans from phantom, dog, and human studies.
Left-ventricular boundary detection from spatiotemporal volumetric CT images
Hsiao-Kun Tu, Art Matheny, Dmitry B. Goldgof
This paper presents a new technique for LV boundary detection from 3-D volumetric cardiac images. The proposed method consists of boundary detection and boundary refinement stages. In the boundary detection stage, a spatio-temporal (4-D) gradient operator is used to capture the temporal gradients of dynamic LV boundaries and to smooth time-uncorrelated noise. Spatio-temporal edge detection is performed outward from an approximate center of the left ventricle. In the boundary refinement stage, a spherical harmonic model is fitted to the detected boundaries. Based on this model, false boundaries are removed and LV boundaries are recovered. A left ventricle is a bright, smooth region, varying in size over the heart cycle. This a priori knowledge is incorporated in detection and refinement of LV boundaries to reduce the effect of noise. The intensity of the inner (close to the center) neighbors of the LV boundary is brighter than that of the outer. The size of the left ventricle is used in boundary refinement to select proper boundaries to be fitted by the spherical harmonic model. We demonstrate the advantages of 4-D edge detection over 3-D and the use of spherical harmonics to refine LV boundaries. Our experimental data were supplied by Dr. Eric Hoffman at the University of Pennsylvania medical school and consist of 16 volumetric (128 by 128 by 118) CT images taken through a heart cycle.
Interactive relaxation labeling for 3D cardiac image analysis
William E. Higgins, M. W. Hansen, Werner L. Sharp
Image segmentation remains one of the major challenges in 3D medical image analysis. We describe a generally applicable 3D image-segmentation technique that combines operator interaction and automatic processing, with a particular focus on 3D cardiac image analysis. For a given 3D image, the method works as follows. First, the operator interactively defines region cues that either give region 'tissue samples' or that impose spatial constraints on where regions can and cannot lie. Next, a three-step relaxation-labeling algorithm is applied. For the first step, each image voxel gets an initial probability vector assigned to it. This vector, computed using the previously defined region cues, contains the initial probabilities that a voxel belongs to various regions of interest. Next, a true 3D relaxation-labeling process is performed to update the probability vectors. Relaxation labeling concludes by assigning region labels to image voxels. Results for 3D cardiac image segmentation demonstrate the method's efficacy. A major advantage of the method is that the operator, who understands what he sees but has less understanding of the `numbers' defining the image, can apply the technique without having to set parameters.
Cardiac Image Segmentation and Registration II
Regional shape-based feature space for segmenting biomedical images using neural networks
Gopal Sundaramoorthy, John D. Hoford, Eric A. Hoffman
In biomedical images, structures of interest, particularly soft tissue structures such as the heart, airways, and bronchial and arterial trees, often have grey-scale and textural characteristics similar to other structures in the image, making it difficult to segment them using only grey-scale and texture information. However, these objects can be visually recognized by their unique shapes and sizes. In this paper we discuss what we believe to be a novel, simple scheme for extracting features based on regional shapes. To test the effectiveness of these features for image segmentation (classification), we use an artificial neural network and a statistical cluster analysis technique. The proposed shape-based feature extraction algorithm computes regional shape vectors (RSVs) for all pixels that meet a certain threshold criterion. The distance from each such pixel to a boundary is computed in 8 directions (or in 26 directions for a 3-D image). Together, these 8 (or 26) values represent the pixel's (or voxel's) RSV. All RSVs from an image are used to train a multi-layered perceptron neural network which uses these features to 'learn' a suitable classification strategy. To clearly distinguish the desired object from other objects within an image, several examples from inside and outside the desired object are used for training. Several examples are presented to illustrate the strengths and weaknesses of our algorithm. Both synthetic and actual biomedical images are considered. Future extensions to this algorithm are also discussed.
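A minimal Python sketch of the eight-direction RSV computation described above; the binary mask, the rectangular test object, and all names are illustrative assumptions, not the authors' implementation:

    import numpy as np

    # 8 compass directions for the 2-D case (26 would be used in 3-D).
    DIRECTIONS = [(-1, -1), (-1, 0), (-1, 1),
                  (0, -1),           (0, 1),
                  (1, -1),  (1, 0),  (1, 1)]

    def regional_shape_vector(mask, row, col):
        """Distance from (row, col) to the region boundary in 8 directions."""
        h, w = mask.shape
        rsv = np.zeros(len(DIRECTIONS))
        for k, (dr, dc) in enumerate(DIRECTIONS):
            r, c, steps = row, col, 0
            # Walk while the neighbor is still inside the thresholded region.
            while 0 <= r + dr < h and 0 <= c + dc < w and mask[r + dr, c + dc]:
                r, c = r + dr, c + dc
                steps += 1
            rsv[k] = steps
        return rsv

    # Example: RSVs for every in-region pixel of a toy rectangular object.
    mask = np.zeros((32, 32), dtype=bool)
    mask[8:24, 10:20] = True
    features = [regional_shape_vector(mask, r, c)
                for r, c in zip(*np.nonzero(mask))]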
Cardiac image registration via global and local warping
Steven D. Kugelmass, C. H. Labovitz, Eric A. Hoffman
Images produced with magnetic resonance imaging (MRI), X-ray computed tomography (CT), and positron emission tomography (PET) provide complementary anatomic and physiologic information. The synthesis of a single image, combining the structural information from MRI with the corresponding physiologic and metabolic information from PET or CT, would provide physicians and researchers with a means to correlate function with structure. We adapted a method originating with Goshtasby and later described by Wolberg which decomposes the anatomic image into regions which are then individually mapped to corresponding regions in the functional image. The mapping function is found by a least-squares solution to a set of polynomial basis functions. The selection of the set of basis functions varies the 'order' of the mapping function from affine to the inclusion of shear and higher-order deformations. Local deformations in the contour of the heart wall are adjusted by our method. Global high-order deformations can also be effected. We tested our approach by developing a software module for use with our laboratory's image visualization and analysis system (VIDA™), and mapping MRI images of the canine heart (with extrinsic markers) to hand-traced contours from pathology slides of the heart from the same animal. We will present illustrations of these image mappings and show that this method successfully accounts for and corrects local deformations in the object boundaries as well as global scale and orientation mismatches, which are not addressed by previously described techniques.
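The least-squares polynomial mapping can be sketched compactly; the code below fits a 2-D polynomial from landmark pairs and is a toy stand-in (function names and the order-1/order-2 convention are assumptions) for the region-wise fitting the abstract describes:

    import numpy as np

    def fit_polynomial_warp(src, dst, order=1):
        """Least-squares fit of a 2-D polynomial map src -> dst.
        order=1 gives an affine map; order=2 adds shear and quadratic
        terms, mirroring the 'order' selection in the abstract."""
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([src[:, 0]**i * src[:, 1]**j for i, j in terms])
        cx, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
        cy, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)

        def warp(points):
            p = np.asarray(points, float)
            B = np.column_stack([p[:, 0]**i * p[:, 1]**j for i, j in terms])
            return np.column_stack([B @ cx, B @ cy])

        return warp

    # Example: recover a pure translation from four landmark pairs.
    warp = fit_polynomial_warp([(0, 0), (0, 10), (10, 0), (10, 10)],
                               [(2, 3), (2, 13), (12, 3), (12, 13)])
    print(warp([(5, 5)]))  # -> approximately [[7. 8.]]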
Robust detection of lumen centerlines in complex coronary angiograms
Milan Sonka, Steve M. Collins
We have developed a method for lumen centerline detection based on simultaneous detection of approximate left and right coronary borders. This approach is motivated by the observation that a clinician visually identifies the lumen centerline as midway between the simultaneously determined left and right borders of the vessel segment of interest. Our lumen centerline detection algorithm and an algorithm based on a conventional method for individually identifying left and right coronary borders were tested using 89 complex coronary images. Selected manually-traced centerlines defined in a previous angioplasty study were used as an independent standard. Computer-detected and observer-defined centerlines were compared using five parameters (maximum and rms distances, maximum and average orientation differences, and orientation similarity index). The quality of centerlines determined using the new simultaneous centerline detection method, a modification incorporating an initial maximum brightness search, and a conventional centerline detection method was also assessed. Our new centerline detection method yielded accurate centerlines in the 89 complex images. Moreover, our method outperformed the conventional method as judged by all five calculated parameters (p < 0.001 for each parameter). Automated detection of lumen centerlines based on simultaneous detection of both coronary borders provides improved accuracy in coronary arteriograms with poor contrast, nearby or overlapping structures, or branching vessels.
Three-dimensional ventricle reconstruction from serial cross sections
Shiuh-Yung James Chen, John D. Carroll M.D., Chin-Tu Chen, et al.
A technique for constructing a 3D human heart model is proposed for study of the anatomy of the heart as well as the dynamic changes in the shape of the left and right ventricles throughout the cardiac cycle. The model consists of 3D surfaces of the epi- and endocardium of the heart which are reconstructed from serial ultrafast CT cross-sectional images. For each cross section, the regions of epi- and endocardium are first identified by using an adaptive segmentation algorithm. The boundaries of these regions are then extracted. With these boundaries, additional cross sections are generated by means of an elastic interpolation algorithm. The set of interpolated cross sections is then employed to form a cardiac surface which is subsequently smoothed using parametric surface schemes.
Supervised interpretation of echocardiograms with a psychological model of expert supervision
Shriram V. Revankar, David B. Sher, Valerie L. Shalin, et al.
We have developed a collaborative scheme that facilitates active human supervision of the binary segmentation of an echocardiogram. The scheme complements the reliability of a human expert with the precision of segmentation algorithms. In the developed system, an expert user compares the computer-generated segmentation with the original image in a user-friendly graphics environment, and interactively indicates the incorrectly classified regions either by pointing or by circling. The precise boundaries of the indicated regions are computed by studying original image properties in that region together with a human visual-attention distribution map obtained from published psychological and psychophysical research. We use the developed system to extract contours of heart chambers from a sequence of two-dimensional echocardiograms. We are currently extending this method to incorporate a richer set of inputs from the human supervisor, to facilitate multi-classification of image regions depending on their functionality. We are integrating into our system the knowledge-related constraints that cardiologists use, to improve the capabilities of our existing system. This extension involves developing a psychological model of expert reasoning, functional and relational models of typical views in echocardiograms, and corresponding interface modifications to map the suggested actions to image processing algorithms.
Image Processing in Microscopy
Method for semiautomated serial section reconstruction and visualization of neural tissue from TEM images
Kevin N. Montgomery, Muriel D. Ross
A simple method to reconstruct details of neural tissue architectures from transmission electron microscope (TEM) images will help us to increase our knowledge of the functional organization of neural systems in general. To be useful, the reconstruction method should provide high resolution, quantitative measurement, and quick turnaround. In pursuit of these goals, we developed a modern, semiautomated system for reconstruction of neural tissue from TEM serial sections. Images are acquired by a video camera mounted on a TEM (Zeiss 902) equipped with automated stage control. The images are reassembled automatically as a mosaicked section using a cross-correlation algorithm on a Connection Machine-2 (CM-2) parallel supercomputer. An object detection algorithm on a Silicon Graphics workstation is employed to aid contour extraction. An estimated registration between sections is computed and verified by the user. The contours are then tessellated into a triangle-based mesh. At this point the data can be visualized as a wireframe or solid object, volume rendered, or used as a basis for simulations of functional activity.
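The tile-alignment step can be illustrated with FFT-based cross-correlation; this serial sketch (the paper ran the computation on a CM-2, and these names are illustrative) recovers the integer offset between two overlapping image tiles:

    import numpy as np

    def correlation_offset(reference, moved):
        """Return the shift s such that moved ~= np.roll(reference, s),
        found at the peak of the circular cross-correlation."""
        r = reference - reference.mean()
        m = moved - moved.mean()
        corr = np.fft.ifft2(np.fft.fft2(m) * np.conj(np.fft.fft2(r))).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Map wrap-around indices to signed shifts.
        return tuple(p if p <= s // 2 else p - s
                     for p, s in zip(peak, corr.shape))

    # Example: a known (5, -3) shift is recovered exactly.
    tile = np.random.default_rng(0).random((64, 64))
    print(correlation_offset(tile, np.roll(tile, (5, -3), axis=(0, 1))))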
Characterization of a 3D microscope imaging system
Steven S. S. Poon, Stephen J. Lockett, Rabab K. Ward
To reconstruct the object from its observed images, the characteristics of the imaging system must first be obtained. In a microscope imaging system, the characteristics vary not only in the imaging plane but also as a function of the focus at which the image is taken. Thus, a three-dimensional system response or point spread function (PSF) needs to be determined. One way of determining the PSF is to use a theoretical approach to analyze the aberration-free microscope imaging system. However, the assumptions and properties of lenses in the system are often not ideal. Thus an experimental approach for determining the PSF is sometimes used. We report on the results of our experiments, in which some of the problems associated with the determination of the experimental PSF are overcome. Point-source objects are hard to find in nature. In our analysis, we use test objects which simulate point sources (such as small fluorescent beads) and objects which can be described as a convolution of the PSF with their shape (such as a step edge). To increase the spatial resolution, the precise location of the object is also estimated to a fraction of a pixel. The results are then compared with those of the theoretical approach.
Image reconstruction for 3D light microscopy with a regularized linear method incorporating a smoothness prior
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10-micrometer fluorescent bead and a four-cell Volvox embryo are shown.
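The contrast between the two regularizers can be seen in a 1-D toy problem (the setup below, including the Gaussian PSF and parameter values, is an assumed illustration, not the authors' simulation):

    import numpy as np

    n = 64
    x = np.zeros(n); x[20:28] = 1.0                       # toy "object"
    psf = np.exp(-0.5 * ((np.arange(n) - n // 2) / 2.0) ** 2)
    psf /= psf.sum()
    H = np.array([np.roll(psf, i - n // 2) for i in range(n)])  # blur operator
    g = H @ x + 1e-3 * np.random.default_rng(1).standard_normal(n)

    # RLLS: invert only the k largest singular components of H.
    U, s, Vt = np.linalg.svd(H)
    k = 40
    x_rlls = Vt[:k].T @ ((U[:, :k].T @ g) / s[:k])

    # Linear MAP with a discrete-Laplacian smoothness prior: components
    # tied to small eigenvalues are damped smoothly rather than cut off.
    L = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    lam = 1e-3
    x_lmap = np.linalg.solve(H.T @ H + lam * (L.T @ L), H.T @ g)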
Thermal damage quantification from tissue birefringence image analysis
Tom J. McMurray, Andre Han, John Anthony Pearce
Decreased collagen or cardiac muscle birefringence in transmission polarizing microscopy is an observable measure of damaged tissue concentration. Accordingly, monochrome images of thermally damaged tissue exhibiting decreased birefringence provide important information about the tissue thermal history, which is often extremely difficult to measure globally during an experiment. Thus, a damage quantification algorithm was developed based on monochrome tissue images, in which decreased birefringence values correspond to estimated temperature distributions. The algorithm consists of initially time-averaging several video frames of the microscopic tissue image to reduce additive noise components, with an additional multiplicative correction for optical nonuniformities. Subsequently, morphological close-opening and mean filtering of the tissue image are performed using a unit-gain, arbitrarily sized square template, followed by background subtraction and scaling, producing the components required for the damage computation according to the volume fraction kinetic damage model. The algorithm has been applied to tissue images derived from an experimental protocol generating approximately linear thermal gradients along the axis perpendicular to the tissue surface plane and constant temperatures in the plane parallel to the tissue surface. The resulting thermally exposed tissue specimens exhibit decreased birefringence in damaged regions, which is quantified and delineated automatically by this algorithm. Given the damage value at a specified tissue position, the temperature was also estimated. These temperature estimates approximate finite difference method numerical models of the experiment.
Three-dimensional reconstruction of a skeletal muscle cell from optical sections
T. R. Gowrishankar, Raphael C. Lee M.D., Chin-Tu Chen
Fluorescence microscopy assay of cell membranes is a practical way of studying the primary mechanisms of muscle cell damage in electrical injury. The three-dimensional distribution of pores in an electric field can be measured by 3-D imaging of the distribution of a fluorescent potentiometric dye. A confocal microscope with variable-width slit detectors, a high-frequency non-mechanical scanning system, and multiple-line laser illumination is highly suitable for the optical measurement of membrane potential. In this paper, the characteristics of such a confocal microscope, in terms of its 3-D optical transfer function (OTF) measured using fluorescent beads, are presented. 3-D reconstruction of a skeletal muscle cell stained with the potentiometric dye di-8-ANEPPS from optical sections is also demonstrated.
Recording of dual-labeled specimens using frequency-multiplexed confocal imaging and intensity-modulated two-wavelength excitation
Nils R.D. Aslund, Kjell Carlsson
We demonstrate the possibility of using intensity-modulated excitation and frequency multiplexing in combination with lock-in detection to make multi-parameter measurements with a confocal scanning laser microscope. This approach can be used, for example, when studying dual-labelled fluorescent specimens. Frequency-multiplexed confocal imaging has the potential to reduce a main problem in connection with detection of multiple-labelled specimens, namely cross-talk between the recorded signals from the two fluorophores. In addition, compared with the traditional method, it has the advantage that the fluorescence spectra are utilized much more efficiently, thereby greatly improving signal quality.
Morphological Image Processing
Digital processing of histopathological aspects in renal transplantation
Arnaldo de Albuquerque Araujo, Marcos Carneiro de Andrade, Eduardo Alves Bambirra, et al.
We describe here our initial experience with the digital image processing of histopathological aspects from multiple renal biopsies of a transplanted kidney in a patient treated with Cyclosporine (CsA), a powerful immunosuppressant drug whose use has improved the chances of a successful vascularized organ transplantation (Tx). Unfortunately, CsA promotes morphological alterations to the glomerular structure of the kidneys. To characterize this process, the distributions of glomerular, tuft, and lumen areas are measured. The results are presented in the form of graphs.
Advanced image processing and modeling system for the analysis of cell micrographs in morphology
Qing Wei, Ch. Reme, Peter Stucki
Quantitative analysis of cell-level structures is attracting substantial attention in biomedical studies. This paper presents an advanced digital image processing and modelling system for the automatic analysis of cell micrographs in morphology. For reference, photoreceptors of the rat retina are used. The system implements a new index-based quantitative method developed for the evaluation of light-induced lesions in retinal Rod Outer Segments (ROS). The automatic determination of indexes greatly simplifies the description of such damage and permits the statistical analysis of morphological data. A three-dimensional synthetic model of retinal ROS was built to interactively simulate the damage mechanisms. The methods reported in this paper are implemented on a graphics super-workstation hardware platform that allows the interactive development of algorithms and procedures through quasi-instant visual feedback.
Numerical scales for classification designs
Manhot Lau, Takashi Okagaki
We present a model which is applied to frequencies in cells of an m X m contingency table (confusion matrix) of medical Pap test categories (results) compared to the final histological diagnosis--a typical pattern recognition process involving classification of objects in medicine. This model defines numerical scales, which represent morphological dissimilarities among different Pap test categories or classes. Using this model, we fit the m X m two-way discrete classification table of the Pap test by maximizing the likelihood and computing the corresponding morphological scales of classes. The model predicts the probability of errors (or 'confusion') in the Pap test data numerically. It estimates the scales of the discrete categories or classes in Pap test results, and uses these scales to represent relative distances between classes. By relative distances, one can identify a frequently confused pair of classes in Pap tests. Similar application of the model to ovarian tumor diagnosis also identifies 'confused' pairs in tumor types of diagnosis. In our experience, the model we developed provides a quantitative means of assessing the appropriateness of classifications in pattern recognition. It identifies the causes of misclassification of objects and characterizes the closeness of morphologies of different classes by numerical relative distances. These relative distances help us determine when two classes should be divided or combined and thus effectively identify objects in a pattern recognition process.
Cardiac Motion Analysis I
Automatic tracking of SPAMM grid and the estimation of deformation parameters from cardiac MR images
In this paper, we present a new approach for the automatic tracking of a SPAMM (Spatial Modulation of Magnetization) grid in cardiac MR images and consequent estimation of deformation parameters. The tracking is utilized to extract grid points from MR images and to establish correspondence between grid points in images taken at consecutive frames. These correspondences are used with a thin-plate spline model to establish a mapping from one image to the next. This mapping is then used for motion and deformation estimation. Spatio-temporal tracking of the SPAMM grid is achieved by using snakes--active contour models with an associated energy functional. We present a minimization strategy which is suitable for tracking the SPAMM grid. By continuously minimizing their energy functionals, the snakes lock on to and follow the in-slice motion and deformation of the SPAMM grid.
Dynamic estimation of left-ventricular ejection fraction
Seema Jaggi, William Clement Karl, Alan S. Willsky
We investigate a method to obtain a dynamic estimate of left-ventricular ejection fraction from a gated set of planar myocardial perfusion images. Ejection fraction, the fraction of the fully expanded left-ventricular volume that is expelled on contraction, is known as an effective gauge of cardiac function. This method is proposed as a safer and more cost-effective alternative to currently used radionuclide ventriculographic techniques. To formulate this estimate of ejection fraction, we employ geometric reconstruction and recursive estimation techniques. The left ventricle is modelled as a dynamically evolving three-dimensional ellipsoid. The left-ventricular outlines observed in the myocardial perfusion images are then modelled as two-dimensional ellipses, obtained as projections of the three-dimensional ellipsoid. The ellipsoid that approximates the left ventricle is reconstructed using Rauch-Tung-Striebel smoothing, which combines the observed temporal set of projection images with an evolution model to produce the best estimate of the ellipsoid at any point in time given all the data. This investigation includes estimation of ejection fraction from both simulated and real data.
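A scalar Kalman/Rauch-Tung-Striebel pass is easy to sketch; here the 'state' is a single volume-like quantity rather than the paper's ellipsoid parameters, and all model values below are assumptions:

    import numpy as np

    def rts_smooth(y, F, Q, H, R, x0, P0):
        """Scalar Kalman filter followed by a Rauch-Tung-Striebel backward
        pass, so each estimate uses ALL the gated data, as in the abstract."""
        xf, Pf, xp, Pp = [], [], [], []
        x, P = x0, P0
        for yt in y:                                   # forward pass
            x_pred, P_pred = F * x, F * P * F + Q
            K = P_pred * H / (H * P_pred * H + R)
            x = x_pred + K * (yt - H * x_pred)
            P = (1 - K * H) * P_pred
            xp.append(x_pred); Pp.append(P_pred); xf.append(x); Pf.append(P)
        xs = xf[:]
        for t in range(len(y) - 2, -1, -1):            # backward RTS pass
            C = Pf[t] * F / Pp[t + 1]
            xs[t] = xf[t] + C * (xs[t + 1] - xp[t + 1])
        return np.array(xs)

    # Example: smooth a noisy periodic "volume" and read off an EF-like ratio.
    t = np.linspace(0, 2 * np.pi, 50)
    y = 100 + 30 * np.cos(t) + np.random.default_rng(2).normal(0, 5, t.size)
    vol = rts_smooth(y, F=1.0, Q=4.0, H=1.0, R=25.0, x0=y[0], P0=25.0)
    print((vol.max() - vol.min()) / vol.max())         # ejection-fraction-like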
On the integration of image segmentation and shape analysis with its application to left-ventricle motion analysis
This paper describes an integrated approach to image segmentation and shape analysis and its application to left-ventricle motion and deformation analysis based on CT volumetric data. The proposed approach differs from the traditional image analysis scenario, in which image segmentation and shape analysis are usually considered separately. The advantage of integrating image segmentation with shape analysis lies in the fact that the shape characteristics of the object can be used as effective constraints in the process of segmentation, while the original image data can be made useful along with the segmentation results in the process of shape analysis. In the case of left-ventricle motion estimation through shape analysis based on CT volumetric data, such an integration can be applied to obtain estimation results that are consistent with both the given image data and a priori shape knowledge. The initial segmentation of the images is obtained through adaptive K-means classification, and the boundary of the given objects is computed based on this segmentation. The shape analysis is accomplished through fitting the boundary points to surface modeling primitives. These two processes are integrated through feedforward and feedback channels so that the surface fitting is weighted by the confidence measures of the boundary points and segmentation refinement is controlled by the result of surface modeling. Its application to left-ventricle motion analysis is implemented through identifying the correspondences between the parameters of surface modeling primitives and the parameters of motion and deformation modeling. The preliminary results of the application show the promising improvement of this integrated approach over traditional approaches.
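The initial intensity classification can be sketched with plain K-means (the adaptive variant in the paper adds adaptivity beyond this; everything below is an assumed toy):

    import numpy as np

    def kmeans_segment(image, k, iters=20, seed=0):
        """Label each voxel by the nearest of k intensity cluster centers."""
        flat = image.reshape(-1, 1).astype(float)
        rng = np.random.default_rng(seed)
        centers = rng.choice(flat[:, 0], size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(flat - centers), axis=1)
            centers = np.array([flat[labels == j, 0].mean()
                                if np.any(labels == j) else centers[j]
                                for j in range(k)])
        return labels.reshape(image.shape)

    # Example: separate a brighter structure from background by intensity.
    rng = np.random.default_rng(3)
    vol = rng.normal(40, 5, (16, 64, 64))
    vol[:, 20:44, 20:44] += 60
    labels = kmeans_segment(vol, k=2)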
Type 1 and 2 generalized morphological operators
Raj S. Acharya, Y. M. Ma
We present Type 1 and 2 Generalized Morphological operators. We also present and prove the properties of these operators. Generalized operators are useful when multiple characteristics of the geometric patterns are used for analysis. These multiple patterns are encoded via Partitioned Structuring Elements.
Cardiac Motion Analysis II
Finite-element-based deformable model for 3D biomedical image segmentation
Tim J. McInerney, Demetri Terzopoulos
This paper presents a physics-based approach to 3D image segmentation using a 3D elastically deformable surface model. This deformable 'balloon' is a dynamic model and its deformation is governed by the laws of nonrigid motion. The formulation of the motion equations includes a strain energy, simulated forces, and other physical quantities. The strain energy stems from a thin-plate under tension spline and the deformation results from the action of internal forces (which describe continuity constraints) and external forces (which describe data compatibility constraints). We employ the finite element method to discretize the deformable balloon model into a set of connected element domains. The finite element method provides an analytic surface representation. Furthermore, we use a finite element with nodal variables which reflect the derivative terms found in the thin-plate under tension energy expression. That is, the nodal variables include not only the nodal positions, but all of the first and second order partial derivatives of the surface as well. This information can be used to compute the volume, shape, and motion properties of the reconstructed biological structures. To demonstrate the usefulness of our 3D segmentation technique and demonstrate the dynamic properties of our model, we apply it to dynamic 3D CT images of a canine heart to reconstruct the left ventricle and track its motion over time.
Digital densitometric determination of relative coronary flow distributions
Albert F. Lubbers, Cornelis H. Slump, Corstiaan J. Storm
In cardiology, coronary stenoses are in most cases diagnosed by subjective visual interpretation of coronary artery structures, in which contingent stenoses are assessed in terms of percentage luminal area reduction. This results in large intra- and interobserver variability in readings. Moreover, the correlation between the anatomical severity of coronary stenoses and their physiological significance is rather poor. A far better indication of the functional severity of coronary stenoses is coronary flow reserve (CFR). Although good results with densitometric CFR methods have been reported, in clinical practice the current techniques are time consuming and difficult in procedure. This paper presents a less demanding approach to determine densitometrically the relative flow distribution between the two main branches of the left coronary artery. The hypothesis is that comparison of the flow distributions under basal and hyperemic conditions of the heart muscle will provide useful clinical information concerning the physiological relevance of coronary stenoses. The hypothesis is tested by means of in vitro flow experiments with a glass flow phantom representing the proximal part of the left coronary artery. From properly positioned regions of interest (ROIs) within a sequence of temporal digital images, time-density curves have been extracted. It is investigated whether the center of gravity of the density curves is a useful parameter to calculate relative flow rate differences. The flow study results together with a discussion are presented in this paper.
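The center-of-gravity parameter itself is a one-line temporal moment; a small sketch (the curves are synthetic and the background handling is an assumption):

    import numpy as np

    def center_of_gravity(times, density):
        """First temporal moment of a background-subtracted time-density
        curve; differences between ROIs reflect relative transit timing."""
        d = np.clip(density - density.min(), 0, None)
        return np.sum(times * d) / np.sum(d)

    # Example: ROI B's bolus arrives about 0.5 s later than ROI A's.
    t = np.linspace(0, 8, 81)
    roi_a = np.exp(-0.5 * ((t - 3.0) / 0.8) ** 2)
    roi_b = np.exp(-0.5 * ((t - 3.5) / 0.8) ** 2)
    print(center_of_gravity(t, roi_b) - center_of_gravity(t, roi_a))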
Fluorescent image-tracking velocimetry algorithms for quantitative flow analysis in artificial organ devices
Ramanand Singh, Franklin D. Shaffer, Harvey Borovetz
A Fluorescent Image Tracking Velocimetry (FITV) system has been developed to produce two-dimensional velocity maps of flow fields. This system is capable of measurements at flow boundaries, such as the blood-biomaterial interfaces in artificial cardiac organs (in vitro only). Three pulse-coding schemes--a single-pulse code, a dash-dot pulse code, and a constant-frequency pulse code--and associated image analysis algorithms have been developed and tested. These algorithms were applied to analyze flow in three types of artificial cardiac organs: the Novacor Left Ventricular Assist System, the Nimbus AxiPump, and the Hattler Intravenous Membrane Oxygenator. Results are presented and discussed in terms of image recognition. Despite the drawback of time-direction ambiguity, a constant-frequency pulse with a hybrid of constant-frequency and single-pulse analyses was found to provide optimum results for these applications.
Structural and Functional Imaging: Cardiac
Three-dimensional dynamic functional mapping of cardiac mechanics
Alexander M. Taratorin, Samuel Sideman, R. Beyar
The heart is an organ which functions by periodic change of three-dimensional (3D), spatially distributed parameters; malfunctions of the heart's operating systems are manifested by changes of the spatio-temporal heart shape dynamics. This paper attempts to present a set of image analysis tools aimed at a thorough study of the left ventricular (LV) shape-function relationship based on Cine-CT data. Data processing methodologies aimed at analysis and interpretation of the dynamic 3D LV shape, thickening, and motion are described. These include the computerized detection of the LV boundaries, dynamic reconstruction of 3D LV shape, the LV shape parameters, and their spatio-temporal evolution. The procedures are demonstrated using Cine-CT images of the human LV in normal and pathological cases.
Three-dimensional reconstruction of the coronary arterial tree geometry--rationale and recent progress
We are developing a method for deconvolving overlapped (by two-thirds) sets of multiple, thick (nominally 8 mm) slices scanned by an Imatron scanner. From these data we generate a 3D image with roughly equivalent resolution in three orthogonal directions. Because this is a tomographic method, the superposition of contrast in the cardiac chambers and pulmonary veins is not a problem, and the density resolution is sufficient to permit imaging of arteries opacified with dilute (i.e., 5%) contrast agent, equivalent to that achievable with an intravenous bolus injection of contrast medium. To date we have demonstrated feasibility with postmortem hearts with dilute barium sulfate injected into the coronary arteries. Application in living patients will require modification of the Imatron scanner's table advance and ECG-gated scanning software.
Structural and Functional Imaging: Pulmonary
Quantitative 3D reconstruction of airway and pulmonary vascular trees using HRCT
Susan A. Wood, John D. Hoford, Eric A. Hoffman, et al.
Accurate quantitative measurements of airway and vascular dimensions are essential to evaluate function in the normal and diseased lung. In this report, a novel method is described for three-dimensional extraction and analysis of pulmonary tree structures using data from High Resolution Computed Tomography (HRCT). Serially scanned two-dimensional slices of the lower left lobe of isolated dog lungs were stacked to create a volume of data. Airway and vascular trees were three-dimensionally extracted using a three-dimensional seeded region growing algorithm based on the difference in CT number between wall and lumen. To obtain quantitative data, we reduced each tree to its central axis. From the central axis, branch length is measured as the distance between two successive branch points, branch angle is measured as the angle produced by two daughter branches, and cross-sectional area is measured from a plane perpendicular to the central axis point. Data derived from these methods can be used to localize and quantify structural differences both during changing physiologic conditions and in pathologic lungs.
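The extraction step can be illustrated with a 6-connected seeded region grower keyed to CT numbers (a minimal sketch; the seed point, thresholds, and toy volume are assumptions):

    import numpy as np
    from collections import deque

    def region_grow_3d(volume, seed, lo, hi):
        """Grow from seed, accepting 6-connected voxels whose CT number
        lies in [lo, hi] - the wall/lumen difference stops the growth."""
        grown = np.zeros(volume.shape, dtype=bool)
        grown[seed] = True
        queue = deque([seed])
        steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in steps:
                p = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(p, volume.shape)) \
                        and not grown[p] and lo <= volume[p] <= hi:
                    grown[p] = True
                    queue.append(p)
        return grown

    # Example: extract an air-filled "lumen" tube from brighter surroundings.
    vol = np.full((32, 64, 64), 60.0)       # wall/parenchyma CT numbers
    vol[:, 30:34, 30:34] = -900.0           # air in the lumen
    airway = region_grow_3d(vol, seed=(16, 31, 31), lo=-1100, hi=-500)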
Intensity correlation of ventilation-perfusion lung images
Antonio A. Costa, Carlos Vaz de Carvalho, M. Seixas, et al.
The purpose of this study is to develop a method to create new images, based on lung ventilation and perfusion raw nuclear medicine images obtained from a gamma camera, that may help the correlation of their intrinsic information. Another major topic of this study is the assessment of the usefulness of this method in the detection of lung malfunction.
Composite pseudocolor images: a technique to enhance the visual correlation between ventilation-perfusion lung images
Carlos Vaz de Carvalho, Antonio A. Costa, M. Seixas, et al.
Lung ventilation and perfusion raw nuclear medicine images obtained from a gamma camera can be difficult to analyze on an individual basis. A method to optimize the visual correlation between these images was established through the use of new combination images: Composite Pseudo-Color (CPC) images. The major topic of this study is the assessment of the usefulness of this method in the detection of lung malfunction.
Automated method for relating regional pulmonary structure and function: integration of dynamic multislice CT and thin-slice high-resolution CT
Jehangir K. Tajik, Steven D. Kugelmass, Eric A. Hoffman
We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-to-function relationships. A thick-slice, high temporal resolution mode is used to follow a bolus contrast agent for blood flow evaluation and is fused with a high spatial resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high resolution volume image. A text file containing these values along with a voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery, from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiologically based research findings to demonstrate the strengths of combining dynamic CT and HRCT relative to other scanning modalities to uniquely characterize normal pulmonary physiology and pathophysiology.
Biomedical Visualization I
Using a prototype voxel for visualizing volumetric data
William Chris Buckalew
We present a method for visualizing volumetric data, such as NMR or CAT-scan data, that makes use of a data structure called the prototype voxel to create images very quickly on common workstation screens. The algorithm speeds up the standard process of casting rays through the volume data by precomputing a great deal of direction and interpolation information, assuming that all voxels are the same size and shape (which is normally the case for medical data sets). As rays are cast, this information, stored in the prototype voxel, is merely looked up when needed rather than being recomputed repeatedly. The prototype voxel must be computed only once for each data configuration; subsequent data sets which use the same size and shape of voxel can use the same prototype voxel information to speed rendering. This algorithm trades memory for speed: it uses 20 to 50 megabytes of memory (already becoming commonly available in modern workstations) for its speed improvements.
Variations on projection pursuit for multiparameter image visualization
G. Harikumar, Yoram Bresler
This paper addresses the effective display of multi-parameter medical diagnostic data, such as arises in MRI or in multimodality image-fusion. In such data, also known as a vector field, a vector value, rather than a scalar, is associated with each pixel. Each component of the vector corresponds to a different imaging modality, or a different combination of imaging parameters, and may provide different levels of contrast sensitivity between different tissues. While each of the different images may be misleading (as illustrated later by an example), in combination they may contain the correct information. Unfortunately, a human observer is not likely to be able to extract this information when presented with a parallel display of the distinct images. The development of a display technology that overcomes this difficulty by synthesizing a display method matched to the capabilities of the human observer is the subject of this paper.
Biomedical Visualization II
Design and implementation of a stand-alone workstation for stereotaxic neurosurgery
Udita Taneja, Cedric F. Walker
We have developed a Macintosh II based workstation for stereotaxic neurosurgery. Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) scans are displayed in the NIH Image environment, customized to include stereotaxic and image manipulation utilities. To obtain the soft tissue detail of MRI images along with the bone definition and coordinate system of CT images, scans from the same patient are registered globally. A transformation is calculated to align the planes of the MRI images along CT planes, and this is then used to extract an MRI slice corresponding to a CT slice containing the target site. The extracted slice is warped, to correct for local misregistration, using corresponding landmarks in this and the CT image. The composite image, generated by overlaying the two, is used to calculate the Brown-Roberts-Wells stereotactic frame angle settings needed to access the target. Path planning using reformatted CT slices is available as another option. Experimental validation using patient data has been done. This workstation has all the capabilities of a conventional CT workstation in addition to these stereotactic routines.
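The global registration step can be sketched as a least-squares rigid fit to paired landmarks (a Kabsch/Procrustes solution; the landmark data and names here are illustrative, and the abstract's local warping would follow this step):

    import numpy as np

    def rigid_landmark_fit(mri_pts, ct_pts):
        """Least-squares rotation R and translation t with
        x_ct ~= R @ x_mri + t, from paired 3-D landmarks."""
        P, Q = np.asarray(mri_pts, float), np.asarray(ct_pts, float)
        pc, qc = P.mean(0), Q.mean(0)
        U, _, Vt = np.linalg.svd((P - pc).T @ (Q - qc))
        D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        return R, qc - R @ pc

    # Example: recover a known rotation about z plus a translation.
    rng = np.random.default_rng(4)
    mri = rng.random((6, 3)) * 100
    th = np.deg2rad(10)
    R_true = np.array([[np.cos(th), -np.sin(th), 0],
                       [np.sin(th),  np.cos(th), 0],
                       [0, 0, 1]])
    ct = mri @ R_true.T + np.array([5.0, -2.0, 1.0])
    R, t = rigid_landmark_fit(mri, ct)   # R ~= R_true, t ~= (5, -2, 1)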
Three-dimensional image processing on distributed-memory parallel computers
Henri-Pierre Charles, Jian-Jin Li, Serge Miguet
Three-dimensional image processing algorithms are highly time-consuming. To study different algorithms on different parallel computers, we need a common, portable programming model. This article describes our common programming model and the results we obtain on two volumetric image processing algorithms.
Virtual instrument: remote control and monitoring of an artificial heart driver
An H. Nguyen, David Farrar
The development of a virtual instrument for an artificial heart driver, based on the top-down model approach, is presented. Driver parameters and status are dynamically updated on the virtual system at the remote station. The virtual system allows the remote operator to interact with the physical heart driver as if he/she were at the local station. Besides its use as an effective training tool, the system permits an expert operator to monitor and also control the Thoratec heart driver from a distant location. We believe that the virtual instrument for biomedical devices in general, and for the Thoratec heart driver in particular, not only improves system reliability but also opens up a real possibility of reducing medical costs. Utilizing the top-down scheme developed recently for telerobotics, real-time operation in both instrument display and remote communication was possible via a low-bandwidth telephone medium.
New medical workstation for multimodality communication systems
Stavros A. Kotsopoulos, Dimitris C. Lymberopoulos
The introduction of special teleworking and advanced remote expert consultation procedures into modern multimodality medical communication systems has effectively changed the way physicians confront synchronous and asynchronous patient cases. The common denominator in developing the above procedures is the use of specially designated Medical Workstations (MWS). The present paper deals with the implementation of an MWS which enables physicians to handle multimedia data efficiently in an ISDN communication environment.
Volume-rendering techniques in the assessment of cerebral activation
Joseph D. Biegel, Clinton S. Potter, Thomas C. Hill
Radionuclide imaging of the brain is used to study the effect of activation paradigms on cerebral function. In this study we investigate the neuro-activation due to a flickering visual stimulus as compared to a dark-adapted baseline state. Neuroactivation is measured by SPECT brain imaging using the Tc99m brain perfusion imaging agent Tc99m Bicisate. (Neurolite™, a kit for the preparation of Tc99m Bicisate, is currently being distributed as an investigational new drug.) SPECT data generally consist of a series of 2D slices collected through the brain volume. Most analysis and interpretation schemes compare the results of imaging a subject injected without the stimulus with an image acquisition performed subsequent to injection in the presence of the activating stimulus. Common image analysis and interpretation schemes are performed using 2D slice data, often comparing data from only a single slice. We present results using a depth-cueing volume rendering method for the display and comparison of full visual field activation and baseline (dark-adapted) SPECT images. By rotating the rendered views of the volume, the 3D spatial structure of the data can be assessed.
Use of Multiple Images in Detection of Asymmetry and Developing Densities
Computer-aided detection and diagnosis of masses and clustered microcalcifications from digital mammograms
We are developing an 'intelligent' workstation to assist radiologists in diagnosing breast cancer from mammograms. The hardware for the workstation will consist of a film digitizer, a high speed computer, a large volume storage device, a film printer, and 4 high resolution CRT monitors. The software for the workstation is a comprehensive package of automated detection and classification schemes. Two rule-based detection schemes have been developed, one for breast masses and the other for clustered microcalcifications. The sensitivity of both schemes is 85% with a false-positive rate of approximately 3.0 and 1.5 false detections per image, for the mass and cluster detection schemes, respectively. Computerized classification is performed by an artificial neural network (ANN). The ANN has a sensitivity of 100% with a specificity of 60%. Currently, the ANN, which is a three-layer, feed-forward network, requires as input ratings of 14 different radiographic features of the mammogram that were determined subjectively by a radiologist. We are in the process of developing automated techniques to objectively determine these 14 features. The workstation will be placed in the clinical reading area of the radiology department in the near future, where controlled clinical tests will be performed to measure its efficacy.
Detection of breast asymmetry using anatomical features
Peter Miller, Susan M. Astley
We present a new approach to the detection of breast asymmetry, an important radiological sign of cancer. The conventional approach to this problem is to search for brightness or texture differences between corresponding locations on left and right breast images. Due to the difficulty in accurately identifying corresponding locations, asymmetry cues generated in this way are insufficiently specific to be used as prompts for small and subtle abnormalities in a computer-aided diagnosis system. We have undertaken studies to discover more about the visual cues utilized by radiologists. We propose a new automatic method for detecting asymmetry based on the comparison of corresponding anatomical structures, which are identified by an automatic segmentation of breast tissue types. We describe a number of methods for comparing the shape and grey-level distribution of these regions, and we have achieved promising results by combining evidence for asymmetry.
Digital differential radiography (DDR): a new diagnostic procedure for locating neoplasms, such as breast cancers, in soft, deformable tissues
Andrzej K. Mazur, E. J. Mazur, Richard Gordon
We introduce a new method, the Image Correlation Technique (ICT), that automatically estimates the transformation of deformations (including large ones) between an image and a distorted version of that image. The outcome of the method is a displacement field. The geometric distortion that occurs between an undeformed (reference) and a deformed picture is, in general, unknown. Using new algorithms and simulated annealing, a well-established global optimization technique, by rearranging pixels from a picture frame taken prior to the deformation (the reference picture), we arrive at the pixel arrangement represented by the picture frame taken after the deformation. The method works equally well for linear and non-linear cases. We present examples of deformation estimation for pairs of two-dimensional images. However, the method can be readily applied to three-dimensional objects such as those imaged by CT. By using ICT, we propose a new diagnostic procedure, Digital Differential Radiography (DDR), to find neoplasms, physiological liquid drainage, swelling or tissue necrosis, etc. We present examples of the deformation estimation for a pair of two-dimensional images of breast tissue and the result of the divergence calculation to pinpoint simulated tissue growth abnormalities. This new procedure for automatic detection of growing masses may be applicable to all imaging modalities, especially Computed Tomography and Magnetic Resonance Imaging.
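For flavor, here is a toy simulated-annealing estimator of a coarse block-wise displacement field; it is not the paper's pixel-rearrangement formulation (block costs here are independent, so annealing merely illustrates the accept/reject schedule), and every parameter is an arbitrary assumption:

    import numpy as np

    def sa_block_displacements(ref, deformed, block=16, iters=3000, seed=5):
        """Anneal one integer (dy, dx) per block so the shifted reference
        best matches the deformed image under a sum-of-squares cost."""
        rng = np.random.default_rng(seed)
        by, bx = ref.shape[0] // block, ref.shape[1] // block
        disp = np.zeros((by, bx, 2), int)

        def cost(iy, ix, d):
            ys, xs = iy * block, ix * block
            y0, x0 = ys + d[0], xs + d[1]
            if not (0 <= y0 and y0 + block <= ref.shape[0]
                    and 0 <= x0 and x0 + block <= ref.shape[1]):
                return np.inf
            return np.sum((ref[y0:y0 + block, x0:x0 + block]
                           - deformed[ys:ys + block, xs:xs + block]) ** 2)

        T = 50.0
        for _ in range(iters):
            iy, ix = rng.integers(by), rng.integers(bx)
            d_new = disp[iy, ix] + rng.integers(-2, 3, size=2)
            delta = cost(iy, ix, d_new) - cost(iy, ix, disp[iy, ix])
            if delta < 0 or rng.random() < np.exp(-delta / T):
                disp[iy, ix] = d_new
            T *= 0.995                       # geometric cooling schedule
        return disp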
Digitization and Interpretation
Mammographic screening: radiological performance as a precursor to image processing
Alastair G. Gale, A. R. M. Wilson, E. J. Roebuck
A key issue in the introduction and development of appropriate image processing techniques in mammography is the establishment of the current performance of radiologists in the area. This is necessary because the utility of machine vision approaches is largely validated by comparison with known radiological performance measures (which may well be variable) on the same set of cases. Furthermore, the determination of weaknesses in existing human mammographic interpretative ability will demonstrate where machine vision approaches are currently most needed and thus likely to be of maximum benefit in a breast screening program. Following the introduction of breast screening in the U.K., a national self-assessment program has been implemented for all radiologists involved in this specialty. One of the outcomes of this program is the determination of radiological performance variations on this standard task. It is argued that these demonstrate the need for any machine vision approach to take such individuality into account before it can be implemented usefully.
Glandular tissue contrast in CCD-digitized mammograms
Carolyn Kimme-Smith, Siamik Dardashti, Lawrence W. Bassett M.D.
Contrast resolution in CCD-digitized mammograms is difficult to achieve because of storage requirements for 12-bit gray level resolution and because of the logarithmic amplification needed for optical densities above 1.6. By assigning gray level ranges to specific optical density ranges, based on a region-of-interest selection during a pre-digitization scan, contrast for lower optical densities can be preserved. The equivalence of film and digital images was confirmed by 6 radiologists reading 46 biopsy-proven films (12 cases). Diagnostic performance was also tested and showed similar error rates (17%) for both modalities.
Comparison of mammogram images using different quantization methods
E. T. Y. Chen, James Lee, Alan C. Nelson
Special devices with higher quantization resolution are needed to display or process most medical images. In this paper, we compare three different quantization approaches for mammogram images in order to process them in 8 bits/pixel resolution. Since microcalcification is one of the most important indications of risk of breast cancer, a simple shift operation (uniform quantization) cannot retain this vital information. Quantization based on the local histogram will give better results but at the price of more computation.
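The difference between a plain bit shift and a histogram-driven requantization can be sketched directly (a global version of the idea; the paper's local-histogram method and the toy image below are assumptions):

    import numpy as np

    def uniform_quantize(img12):
        """Uniform quantization: drop the 4 low-order bits (12 -> 8 bits).
        Subtle microcalcification contrast can vanish in the dropped bits."""
        return (img12 >> 4).astype(np.uint8)

    def histogram_quantize(img12):
        """Spend the 256 output levels where the data actually lie, by
        making each output bin hold roughly the same number of pixels."""
        edges = np.quantile(img12, np.linspace(0, 1, 257))
        idx = np.searchsorted(edges, img12, side='right') - 1
        return np.clip(idx, 0, 255).astype(np.uint8)

    # Example: a dim, narrow-range 12-bit image with a faint bright detail.
    rng = np.random.default_rng(6)
    img = rng.integers(200, 260, (128, 128))
    img[60:62, 60:62] += 8
    u, h = uniform_quantize(img), histogram_quantize(img)
    print(len(np.unique(u)), len(np.unique(h)))  # the shift collapses levels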
Classifying mammograms by density: rationale and preliminary results
Saki Hajnal, Paul Taylor, Marie-Helene Dilhuydy, et al.
We are doing research on computerized techniques for classifying mammograms as dense or fatty. The hypothesis is that areas of dense tissues are the major factor making certain mammograms harder for both radiologists and computers to interpret. Automatic identification of dense mammograms might therefore permit better use of the time and skills of expert radiologists. Concentrating on the fatty mammograms could also improve the scope for computer-aided detection of abnormalities. Mammograms were independently classified by two radiologists, with a high level of inter-observer agreement. A number of local statistical and texture measures were compared, initially on manually-placed patches of the digitized images. Two strategies for automating the procedure were then compared. The most successful measure (based on grey-level skewness in small tiles) and strategy (automatic patch placement) yield an almost automatic procedure which produces a promising separation between the classes. Evaluation of a fully-automated procedure is in progress.
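The most successful measure is simple enough to sketch; the tile size and the sign convention for 'dense' are assumptions here, not the paper's tuned values:

    import numpy as np

    def tile_skewness(image, tile=24):
        """Grey-level skewness computed in non-overlapping square tiles."""
        rows, cols = image.shape[0] // tile, image.shape[1] // tile
        out = np.zeros((rows, cols))
        for r in range(rows):
            for c in range(cols):
                patch = image[r*tile:(r+1)*tile, c*tile:(c+1)*tile].ravel()
                m, s = patch.mean(), patch.std()
                out[r, c] = 0.0 if s == 0 else np.mean((patch - m)**3) / s**3
        return out

    # Example: score an image by the fraction of tiles with negative skew
    # (one plausible way to turn tile skewness into a dense/fatty score).
    img = np.random.default_rng(7).gamma(2.0, 20.0, (240, 240))
    density_score = np.mean(tile_skewness(img) < 0)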
Use of Wavelets for Mammogram Image Processing
Hierarchical feature extraction for computer-aided analysis of mammograms
Hakan Barman, Goesta H. Granlund
A framework for computer-aided analysis of mammograms is described. General computer vision algorithms are combined with application specific procedures in a hierarchical fashion. The system is under development and is currently limited to detection of a few types of suspicious areas. The image features are extracted by using feature extraction methods where wavelet techniques are utilized. A low-pass pyramid representation of the image is convolved with a number of quadrature filters. The filter outputs are combined according to simple local Fourier domain models into parameters describing the local neighborhood with respect to the model. This produces estimates for each pixel describing local size, orientation, Fourier phase, and shape with confidence measures associated to each parameter. Tentative object descriptions are then extracted from the pixel-based features by application specific procedures with knowledge of relevant structures in mammograms. The orientation, relative brightness and shape of the object are obtained by selection of the pixel feature estimates which best describe the object. The list of object descriptions is examined by procedures, where each procedure corresponds to a specific type of suspicious area, e.g., clusters of microcalcifications.
Wavelet packets applied to mammograms
Walter B. Richardson Jr.
A multiresolution analysis based on wavelets is particularly effective for signals with marked discontinuities, such as those produced by microcalcifications in mammograms. The projection methods that generate the approximations (resumes) and details can yield an entire family of orthonormal bases, or wavelet packets, which can be superior to wavelets for certain classes of signals. Results of applying wavelet packets to snippets from several mammograms are presented as a method of data compression, allowing reduction of the digitized data before pattern recognition techniques are applied.
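A sketch of wavelet-packet compression of a mammogram snippet using the PyWavelets package; the wavelet, decomposition depth, and retained fraction are illustrative choices, not the authors'.

    import numpy as np
    import pywt

    # img: a 2-D snippet from a digitized mammogram (synthetic here)
    img = np.random.default_rng(1).normal(size=(128, 128))

    wp = pywt.WaveletPacket2D(data=img, wavelet='db4',
                              mode='symmetric', maxlevel=2)

    # Keep only the largest coefficients in each level-2 node (compression)
    kept = 0.1                              # fraction retained, illustrative
    for node in wp.get_level(2):
        c = node.data
        thresh = np.quantile(np.abs(c), 1.0 - kept)
        node.data = np.where(np.abs(c) >= thresh, c, 0.0)

    img_compressed = wp.reconstruct(update=False)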
Tree-structured nonlinear filter and wavelet transform for microcalcification segmentation in mammography
Wei Qian, Laurence P. Clarke, Maria Kallergi, et al.
The development of an extensive array of algorithms for both image enhancement and feature extraction for microcalcification cluster (MCC) detection is reported. Specific emphasis is placed on image detail preservation and on automatic, operator-independent methods that enhance the sensitivity and specificity of detection and should allow standardization of breast screening procedures. Image enhancement methods include both novel tree-structured non-linear filters with fixed parameters and adaptive order-statistic filters designed to further improve detail preservation. Novel feature extraction methods include both a two-channel tree-structured wavelet transform and three-channel quadrature mirror filter banks with multiresolution decomposition and reconstruction specifically tailored to extract MCCs. These methods were evaluated using fifteen representative digitized mammograms, where similar sensitivity (true positive (TP) detection rate of 100%) and specificity (0.1 - 0.2 average false positive (FP) MCCs/image) were observed, but with varying degrees of the detail preservation important for characterizing MCCs. The image enhancement step proved critical in minimizing image noise and the associated FP detection rates for MCCs or individual microcalcifications.
Adaptive multiscale processing for contrast enhancement
Andrew F. Laine, Shuwu Song, Jian Fan, et al.
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms within a continuum of scale space and used to enhance features of importance to mammography. Choosing analyzing functions that are well localized in both space and frequency results in a powerful methodology for image analysis. We describe methods of contrast enhancement based on two overcomplete (redundant) multiscale representations: (1) the dyadic wavelet transform and (2) the φ-transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by non-linear, logarithmic, and constant scale-space weight functions. Multiscale edges identified within distinct levels of transform space provide a local support for enhancement throughout each decomposition. We demonstrate that features extracted from wavelet spaces can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
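A simplified sketch of the coefficient-modification idea, assuming PyWavelets and a decimated dyadic wavelet transform rather than the overcomplete representations used in the paper; the saturating gain stands in for the paper's non-linear weight functions.

    import numpy as np
    import pywt

    def enhance(img, wavelet='db2', levels=3, gain=2.0):
        """Amplify detail coefficients at every scale with a saturating
        (tanh) nonlinearity, leave the approximation untouched, then
        reconstruct. Saturation limits noise amplification."""
        coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=levels)
        enhanced = [coeffs[0]]                       # approximation unchanged
        for (cH, cV, cD) in coeffs[1:]:
            bands = []
            for c in (cH, cV, cD):
                s = np.std(c) + 1e-12
                bands.append(gain * np.tanh(c / s) * s)
            enhanced.append(tuple(bands))
        return pywt.waverec2(enhanced, wavelet)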
Knowledge-Based Methods I
Knowledge-based classification and tissue labeling of magnetic resonance images of human brain
ChunLin Li, Lawrence O. Hall, Dmitry B. Goldgof
This paper presents a knowledge-based approach to automatic classification and tissue labeling of 2D magnetic resonance (MR) images of the human brain. The system consists of two components: an unsupervised clustering algorithm and an expert system. MR brain data is initially segmented by the unsupervised algorithm; the expert system then locates a focus-of-attention tissue or cluster and analyzes it by matching it with a model or searching it for an expected feature. The focus-of-attention tissue location and analysis are repeated until a tumor is found or all tissues are labeled. Abnormal slices are labeled by reclustering regions of interest with knowledge accumulated from previous analysis. The domain knowledge contains tissue distributions in feature space, acquired with a clustering algorithm, and tissue models. Default reasoning is used to match a qualitative model with its instances. The system has been tested on fifty-three slices acquired at different times by two different scanners.
Atlas-guided segmentation of brain images via optimizing neural networks
Gene R. Gindi, Anand Rangarajan, I. G. Zubal
Automated segmentation of magnetic resonance (MR) brain imagery into anatomical regions is a complex task that appears to need contextual guidance in order to overcome problems associated with noise, missing data, and the overlap of features associated with different anatomical regions. In this work, the contextual information is provided in the form of an anatomical brain atlas. The atlas provides defaults that supplement the low-level MR image data and guide its segmentation. The matching of atlas to image data is represented by a set of deformable contours that seek compromise fits between expected model information and image data. The dynamics that deform the contours solves both a correspondence problem (which element of the deformable contour corresponds to which elements of the atlas and image data?) and a fitting problem (what is the optimal contour that corresponds to a compromise of atlas and image data while maintaining smoothness?). Some initial results on simple 2D contours are shown.
Three-dimensional image segmentation using neural networks
Jin-Shin Chou, Chin-Tu Chen, Wei-Chung Lin
We have integrated a neural network model, Kohonen's self-organizing feature maps, with the idea of fuzzy sets, and applied this model to the problem of 3-D image segmentation. In the proposed method, a Kohonen network provides the basic structure and update rule, whereas fuzzy membership values control the learning rate. The calculation of the learning rate is based on a fuzzy clustering algorithm. The experimental results show that convergence is fast. The major strength of the proposed approach is its unsupervised nature. Moreover, the computer memory requirement is smaller and the computation time less than that of a conventional 3-D region-based method.
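A sketch of one update step under the stated idea, with FCM-style membership grades standing in for the usual neighborhood function; the membership formula and rates here are assumptions, not the authors' exact rule.

    import numpy as np

    def fuzzy_som_step(prototypes, x, base_rate=0.1, m=2.0):
        """One Kohonen-style update in which fuzzy memberships (computed
        as in fuzzy c-means) control each unit's learning rate."""
        d = np.linalg.norm(prototypes - x, axis=1) + 1e-12
        ratio = (d[:, None] / d[None, :]) ** (2.0 / (m - 1.0))
        u = 1.0 / ratio.sum(axis=1)          # memberships, sum to 1
        lr = base_rate * u ** m              # membership controls the rate
        return prototypes + lr[:, None] * (x - prototypes)

Iterating this step over all voxel feature vectors would train the map without any labeled data, matching the unsupervised character claimed above.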
Staining independent Bayes classifier for automated cell pattern recognition
Xinhua Zhuang, James Lee, Yan Huang, et al.
Designing the optimal Bayes classifier for automated cell pattern recognition faces two major difficulties: (1) modeling and learning the conditional probabilities P(cell features | cell type); (2) developing staining-independent strategies to handle staining-dependent cell features while learning those conditional probabilities. In this paper, we show such modeling and learning techniques as well as staining-independent strategies. The results of the strategies, tested on an automated system designed for cervical smear screening, are also reported.
Knowledge-Based Methods II
Knowledge-based segmentation and feature analysis of hand and wrist radiographs
Nicholas David Efford
The segmentation of hand and wrist radiographs for applications such as skeletal maturity assessment is best achieved by model-driven approaches incorporating anatomical knowledge. The reasons for this are discussed, and a particular frame-based or 'blackboard' strategy for the simultaneous segmentation of the hand and estimation of bone age via the TW2 method is described. The new approach is structured for optimum robustness and computational efficiency: features of interest are detected and analyzed in order of their size and prominence in the image, the largest and most distinctive being dealt with first, and the evidence generated by feature analysis is used to update a model of hand anatomy and hence guide later stages of the segmentation. Closed bone boundaries are formed by a hybrid technique combining knowledge-based, one-dimensional edge detection with model-assisted heuristic tree searching.
Enhanced tree-classifier performance by inversion with application to pap smear screening data
E. T. Y. Chen, James Lee, Alan C. Nelson
In this paper, we present an inversion method to enhance a binary decision tree classifier using boundary search of training samples. We want to enhance the training at those points which are close to the boundaries. Selection of these points is based on the Euclidean distance from those centroids close to classification boundaries. The enhanced training using these selected data was compared with training using randomly selected samples. We also applied this method to improve the classification of pap smear screening data.
Bayesian belief networks for medical image recognition
Chien-Shung Hwang, Wei-Chung Lin, Chin-Tu Chen, et al.
In this paper, we propose interval-based Bayesian belief networks and use them as the inference scheme in a medical image recognition system. To integrate knowledge from various sources, the blackboard architecture is used as the framework. The proposed system consists of three phases. In phase one, three correlated images, acquired from x-ray CT, proton-density MRI, and T2-weighted MRI of a human brain, are presented to the system. A signal-based segmentation algorithm is then employed to divide each image into regions of homogeneous attributes. In phase two, the system tries to identify the major anatomical structures and to locate the slice in the model that is most similar to the image set under study. To accomplish this, one Bayesian belief network is constructed to integrate evidence from the various sensor slices and the feature spaces for each anatomical structure, and another belief network is designed for opportunistic control in the blackboard system. In phase three, the selected model slice is used to guide the process of refining the recognized anatomical structures.
Unsupervised fuzzy segmentation of 3D magnetic resonance brain images
Robert Paul Velthuizen, Lawrence O. Hall, Laurence P. Clarke, et al.
Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
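A minimal fuzzy c-means sketch in NumPy, of the kind compared above; X would hold the per-voxel feature vectors (e.g., multi-spectral MR intensities), and the traditional random initialization is shown.

    import numpy as np

    def fcm(X, c=3, m=2.0, iters=100, seed=0):
        """Plain fuzzy c-means on X of shape (n_samples, n_features).
        Returns cluster centers and the membership matrix U."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :],
                               axis=2) + 1e-12
            ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))
            U = 1.0 / ratio.sum(axis=2)
        return centers, U

    # Hard segmentation, if needed: labels = U.argmax(axis=1)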
Image Analysis I
Robust approach to ocular fundus image analysis
Guido Tascini, Giorgio Passerini, Paolo Puliti, et al.
The analysis of morphological and structural modifications of retinal blood vessels plays an important role both in establishing the presence of systemic diseases such as hypertension and diabetes and in studying their course. The paper describes a robust set of techniques developed to quantitatively evaluate morphometric aspects of the ocular fundus vascular and microvascular network. The following are defined: (1) the concept of the 'local direction' (LD) of a vessel; (2) a special form of edge detection, named signed edge detection (SED), which uses the LD to choose the convolution kernel in the edge detection process and is able to distinguish between the left and right vessel edges; (3) an iterative tracking (IT) method. The developed techniques use both LD and SED intensively in: (a) the automatic detection of the number, position, and size of blood vessels departing from the optical papilla; (b) the tracking of the body and edges of the vessels; (c) the recognition of vessel branches and crossings; (d) the extraction of a set of features such as blood vessel length and average diameter, artery and arteriole tortuosity, and the crossing position and angle between two vessels. The algorithms, implemented in the C language, have an execution time that depends on the complexity of the vascular network being processed.
Building structural descriptions from coronal magnetic resonance images
Subha V. Raman, Kim L. Boyer
This paper presents several components of a system designed for the automated segmentation of coronal magnetic resonance images. Structural descriptions of several anatomical features are built in a hierarchical fashion. We begin with low-level edge detection and a constant-curvature decomposition. This is followed by a graph-theoretic approach to generate structure hypotheses. After defining an object-centered coordinate system, we develop unary attributes and binary relations to make hypothesis evaluations and classifications. We handle the problem of describing three-dimensional structures from two-dimensional information using a novel slice-to-slice matching approach. These 3-D descriptions can later be used to build a topologically structured model base, which has broad applications. We also define a general structure-matching framework which greatly simplifies the problem of incorporating new information into the system.
Geometrical models for the analysis of 3D anatomic shapes: application to bone structures
Christian Roux, V. Burdin, Christian Lefevre
This paper presents a framework for the description and the analysis of three-dimensional (3D) elongated shapes based on appropriate geometrical models. Its application to the analysis of long bone structures like ulna and radius is described. Elongated shapes can be decomposed into a space curve (the medial axis) and a space surface (the straight surface). The medial axis is described by means of curvature and torsion. A novel torsion image is presented which avoids computing any third derivative. The description of the straight surface is based on an ordered set of series of Fourier descriptors, each series representing a 2D contour of the structure. These descriptors are endowed with completeness, continuity and stability properties, and with some geometrical invariances. Various applications are derived from this model: compression and reconstruction of the shape, comparison of 3D shapes, and segmentation into 3D primitives.
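A sketch of per-contour Fourier descriptors with the usual invariance normalizations; the paper's ordered set of series would stack one such vector per 2D contour along the bone. The contour format and number of retained descriptors are illustrative.

    import numpy as np

    def fourier_descriptors(contour, n_keep=16):
        """Contour points (N, 2) as a complex signal x + iy; the FFT
        coefficients, suitably normalized, describe the 2D shape."""
        z = contour[:, 0] + 1j * contour[:, 1]
        F = np.fft.fft(z)
        F[0] = 0.0                          # drop DC: translation invariance
        F = F / (np.abs(F[1]) + 1e-12)      # scale invariance
        return np.abs(F)[1:n_keep + 1]      # magnitudes: rotation/start-point
                                            # invariance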
Statistical approach for detecting cancer lesions from prostate ultrasound images
A. Glen Houston, Saganti B. Premkumar, Richard J. Babaian, et al.
Sequential digitized cross-sectional ultrasound image planes of several prostates have been studied at the pixel level during the past year. The statistical distribution of gray-scale values, in terms of simple statistics (sample means and sample standard deviations), has been considered for estimating the differences between cross-sectional image planes of the gland due to the presence of cancer lesions. Based on a variability measure, identification of cancer lesions in the peripheral zone of the gland was 64% accurate over 25 blind test cases. This accuracy is higher than that obtained by visual photo-interpretation of the image data, though not as high as our earlier results had indicated. Axial-view ultrasound image planes of prostate glands were obtained from the apex to the base of the gland at 2 mm intervals. Results for the 25 different prostate glands, which include pathologically confirmed benign and cancer cases, are presented.
Morphology and Intensity Map Techniques for Detecting Calcifications
Approach to automated screening of mammograms
Dragana P. Brzakovic, P. Brzakovic, Milorad Neskovic
This paper describes an adaptive image segmentation method that detects cancerous changes in mammograms. A mammogram containing abnormal changes is segmented into 'suspicious regions' and normal tissue. The method employs hierarchical region growing over a pyramidal multiresolution image representation. The relationships between pixels at different resolution levels are established using a fuzzy membership function, enabling detection of very small and/or low-contrast details against a highly textured background. The paper discusses two versions of the method: the first is aimed at detection of microcalcifications, and the second at detection of benign and malignant nodules. Both versions are fully automated and differ in the selection of the parameters of the fuzzy membership function. The algorithm was evaluated using synthetically generated objects superimposed on normal mammograms, as well as real mammograms. Based on this evaluation, the method has the potential to be used as an aid to medical experts in establishing the correct diagnosis.
Rule-based morphological feature extraction of microcalcifications in mammograms
Dongming Zhao
In this paper, we discuss a method combining morphological filtering and rule-based feature extraction for detecting microcalcifications in mammograms. After preprocessing of the digitized mammographic images, in which salt-and-pepper noise is removed, morphological operations are applied to extract the local background of the images. The extracted background is then used for adaptive thresholding of the images. The objective of the adaptive thresholding is to separate local gray-scale variations from the image, since microcalcification features are most likely embedded in local variations. A threshold is set to binarize the local variations. After thresholding, morphological size filters are applied to extract the features related to microcalcifications. To facilitate the feature extraction process, a rule-based selection procedure is developed based on the local density distribution of the binarized feature image, the local variation deviation, and the connectivity of suspicious spots. The limited experimental results show that the approach, when combined with other image analysis and pattern classification techniques, can provide a useful tool for assisting the mammographic diagnosis process.
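A sketch of the background-extraction, adaptive-thresholding, and size-filtering steps, assuming SciPy's morphology routines; the window sizes and threshold offset are illustrative, and the paper's rule-based selection stage is omitted.

    import numpy as np
    from scipy.ndimage import grey_opening, binary_opening

    def candidate_calcifications(img, bg_size=15, offset=20, min_size=2):
        """Estimate local background by a grey-scale opening, subtract it,
        binarize the residual local variations, then apply a small
        morphological size filter to the binary result."""
        background = grey_opening(img, size=(bg_size, bg_size))
        residual = img.astype(np.int32) - background.astype(np.int32)
        binary = residual > offset                  # adaptive threshold
        return binary_opening(binary,
                              structure=np.ones((min_size, min_size)))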
Automation in mammography: computer vision and human perception
Susan M. Astley, I. Hutt, S. Adamson, et al.
Mammographic screening programs generate large numbers of highly variable, complex images, most of which are unequivocally normal. When present, abnormalities may be small or subtle. In this paper we focus on the detection and analysis of mammographic microcalcifications. We present results of experiments to determine factors affecting radiologists' perception of microcalcifications, and to investigate the effects of attention-cueing on detection performance. Our results show that radiologists' performance can be significantly improved with the use of prompts generated from automatically-detected microcalcification clusters. We also describe a new method for the identification and delineation of mammographic abnormalities for training and test purposes, based on the analysis of multiple high quality X-ray projections of excised lesions. Biopsy specimens are secured inside a rigid tetrahedron, the edges of which provide a reference frame to which the localizations of features can be related. Once a three-dimensional representation of an abnormality has been formed, it can be rotated to resemble the appearance in the original mammogram.
Automated recognition of microcalcification clusters in mammograms
The widespread and increasing use of mammographic screening for early breast cancer detection is placing a significant strain on clinical radiologists. Large numbers of radiographic films have to be visually interpreted in fine detail to determine the subtle hallmarks of cancer that may be present. We developed an algorithm for detecting microcalcification clusters, the most common and useful signs of early, potentially curable breast cancer. We describe this algorithm, which utilizes contour map representations of digitized mammographic films, and discuss its benefits in overcoming difficulties often encountered in algorithmic approaches to radiographic image processing. We present experimental analyses of mammographic films employing this contour-based algorithm and discuss practical issues relevant to its use in an automated film interpretation instrument.
Image Processing Techniques
Restoration of mammographic images in the presence of signal-dependent noise
Farzin Aghdasi, Rabab K. Ward, Branko Palcic
We developed and implemented two locally adaptive image smoothing filters to improve the signal-to-noise ratio of digitized mammogram images. The application of these smoothing filters in conjunction with deconvolution of the images results in better visualization of image detail. Previous efforts in the restoration of digitized mammograms have assumed a stationary image with uncorrelated white Gaussian noise. In this work we considered the more realistic case of a non-stationary image model and signal-dependent noise of photonic and film-grain origins. Both the camera blur and the MTF of the screen-film combination were considered. The camera noise may be minimized through averaging and background subtraction. The signal-dependent nature of the radiographic noise was modelled by a linear shift-invariant system, and the relative strengths of the various noise sources were compared. The deconvolution filter was designed to respond to the particular form of the noise in the system based on the minimum mean squared error (MMSE) criterion. Of the two smoothing filters, the Bayesian estimator was found to outperform the adaptive Wiener filter. The filters were implemented in a real-time processing environment using our mammographic image acquisition and analysis system.
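A sketch of a generic locally adaptive (Lee/Wiener-type) smoother of the kind compared above; it uses a simple additive-noise model rather than the authors' signal-dependent model, and the window size and noise estimate are assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def adaptive_wiener(img, win=5, noise_var=None):
        """Shrink each pixel toward its local mean in proportion to the
        local signal variance: flat regions are smoothed, edges kept."""
        img = img.astype(np.float64)
        mean = uniform_filter(img, win)
        sq_mean = uniform_filter(img * img, win)
        var = np.maximum(sq_mean - mean ** 2, 0.0)
        if noise_var is None:
            noise_var = np.mean(var)        # crude global noise estimate
        gain = var / (var + noise_var + 1e-12)
        return mean + gain * (img - mean)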
Regional contrast enhancement and data compression for digital mammographic images
Ji Chen, Michael J. Flynn, Murray Rebner
The wide dynamic range of mammograms poses problems for displaying images on an electronic monitor and printing images through a laser printer. In addition, digital mammograms require a large amount of storage and network transmission bandwidth. We applied contrast enhancement and data compression to segmented images to address these problems. Using both image intensity and Gaussian-filtered images, we separated the original image into three regions: the interior region, the skinline transition region, and the exterior region. In the transition region, an unsharp masking process was applied and an adaptive density shift was used to simulate the effect of highlighting with a spot light. The exterior region was set to a high density to reduce glare. The interior and skinline regions are the diagnostically informative areas that need to be preserved. Visually lossless coding of the interior was performed by wavelet or subband transform coding, chosen because it produces no block artifacts and the transform yields a lowpass-filtered image. The exterior region can be represented by a bit-plane image containing only the labeling information, or by the lower-resolution transform coefficients. Therefore, by applying filters of different scales, we can accomplish both region segmentation and data compression.
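A sketch of the region-wise enhancement step, assuming precomputed boolean masks for the interior and skinline-transition regions; the particular density-shift rule is a guess at the described 'spot light' effect, not the authors' formula.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def enhance_regions(img, interior, transition, amount=1.5,
                        sigma=8.0, exterior_level=0.0):
        """Unsharp masking plus an adaptive density shift in the skinline
        transition band; the exterior is set to a flat level."""
        img = img.astype(np.float64)
        blurred = gaussian_filter(img, sigma)
        sharp = img + amount * (img - blurred)       # unsharp masking
        shift = np.mean(img[interior]) - np.mean(img[transition])
        out = img.copy()
        out[transition] = sharp[transition] + shift  # simulated spot light
        out[~(interior | transition)] = exterior_level
        return out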
Adaptive coding algorithm for a mammogram image database
Isabelle E. Magnin, Olivier Baudin, Atilla M. Baskurt, et al.
The image database SENOBASE, comprising 400 documented digital images, is proposed and described. A set of 100 images extracted from SENOBASE is then coded using an adaptive DCT coding technique. This method takes into account the main characteristics of mammograms, chiefly their low contrast and complex structure. A systematic evaluation study is performed by an expert in order to assess the image content after the coding and decoding steps. The results are analyzed in terms of diagnostic performance.
Radiographic systems evaluation: obtaining the MTF by simulation
Homero Schiabel, Annie France Frere
Although the transfer function method has been considered the most accurate and complete way of evaluating the performance of radiological imaging systems, its use is limited to a few well-equipped laboratories or radiological centers because of the sophisticated experimental apparatus required. This paper therefore proposes a new, simpler way of performing this quality evaluation using computer simulation. The method provides the MTF from knowledge of the focal spot size and shape, which can be determined from a pinhole image. Simulation tests were made for an actual mammographic system and for the focal spot sizes presented in a previous work by Doi, Fromes & Rossmann. The simulation results were in agreement with those obtained by the conventional method.
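A sketch of the core computation, assuming a sampled 1-D focal-spot profile taken from a pinhole image; scaling of the frequency axis for the imaging geometry (magnification) is deliberately omitted here for simplicity.

    import numpy as np

    def mtf_from_focal_spot(spot_profile, pixel_mm):
        """MTF of focal-spot unsharpness: modulus of the Fourier transform
        of the spot intensity profile, normalized to 1 at zero frequency."""
        spot = np.asarray(spot_profile, dtype=np.float64)
        spot /= spot.sum()
        F = np.abs(np.fft.rfft(spot))
        freqs = np.fft.rfftfreq(len(spot), d=pixel_mm)   # cycles/mm
        return freqs, F / F[0]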
Pattern Recognition and Classification I
Recognition of clustered microcalcifications using a random field model
A parallel algorithm has been developed to detect clustered microcalcifications in digital mammography. Labeling of the image is performed by a deterministic relaxation scheme in which both image data and prior beliefs are weighted simultaneously using a Bayesian scheme. The image data are represented by parameter images encoding local contrast and shape. A random field models contextual relations between pixel labels, which makes it possible to bring in prior knowledge about the spatial properties of the structures to be detected. By defining long-range interactions between background and calcification labels, the detector can be tuned to be more sensitive inside clusters than outside, ensuring that isolated spots are interpreted as calcifications only if they are in the neighborhood of others. In this paper, attention is focused on the random field model. Different choices of the energy function defining the interaction model are investigated experimentally using a set of 40 mammograms digitized at 2K × 2K resolution.
Evaluation of stellate lesion detection in a standard mammogram data set
W. Philip Kegelmeyer Jr.
We have previously reported on a method for the automatic detection of stellate lesions in digitized mammograms, and on our tests of that method on image data with known diagnoses. This earlier investigation was based on a limited set of 10 test images, each with a stellate lesion. As our approach is one of supervised training, half of the data was used as a training set, and so the performance results were necessarily coarse. Accordingly, there is value in testing these algorithms on a larger data set that provides not only more lesions but also truly undiseased tissue. A new mammogram data set addresses both of these concerns, as it contains examples of twelve stellate lesions, as well as fifty examples of entirely normal mammograms. Further, as this data set has been made widely available to all interested researchers, performance results for specific algorithms on this data set are of particular value, as they can be directly compared to the performance of other algorithms similarly applied. Thus the main contribution of the current paper is to exhaustively evaluate the performance of this stellate lesion detection algorithm on the new mammogram data set. A secondary aim is to present a revision of the spatial integration step which generates the final report of a lesion's existence, one that facilitates the extraction of ROC performance statistics.
Automatic detection and classification system for calcifications in mammograms
Liang Shen, Rangaraj M. Rangayyan, J. E. Leo Desautels
In this paper, we propose an automatic calcification detection and classification system. First, a new multi-tolerance region growing method is proposed for the detection of potential calcification regions and extraction of their contours. The method employs a distance metric computed on feature sets including measures of shape, center of gravity, and size obtained for various growth tolerance values in order to determine the most suitable parameters. Then, shape features from moments, Fourier descriptors, and compactness are computed based upon the contours of the regions. Finally, a two-layer perceptron is utilized for the purpose of calcification classification with the shape features. In our preliminary study, detection rates were 87% and 85%, and correct classification rates were 94% and 87% for 54 benign calcifications and 241 malignant calcifications, respectively. The proposed system should provide considerable help to radiologists in the diagnosis of breast cancer.
Clinical workstation for digital mammography
Anthony Giles, Arnold R. Cowen, Geoff J. S. Parkin
Breast cancer is the commonest form of cancer amongst women in the UK. Each year there are approximately 24,000 new cases and 15,000 deaths from the disease. Following the submission of the Forrest Report in 1986 a national breast cancer screening program has recently been established in the UK. FAXIL has been asked to investigate areas where the implementation of digital imaging technology may facilitate and/or enhance future cycles of the program. One element of the research program, the development of a clinical workstation for digital mammography, is described here.
Pattern Recognition and Classification II
Artificial-neural-network-based classification of mammographic microcalcifications using image structure features
Atam P. Dhawan, Yateen S. Chitre, Myron Moskowitz
Mammography associated with clinical breast examination and self-breast examination is the only effective and viable method for mass breast screening. It is, however, difficult to distinguish between benign and malignant microcalcifications associated with breast cancer. Most techniques used in the computerized analysis of mammographic microcalcifications segment the digitized gray-level image into regions representing microcalcifications. We present a feature extraction approach based on the second-order gray-level histogram. These features, called image structure features, are computed from second-order gray-level histogram statistics and do not require segmentation of the original image into binary regions. Several image structure features were computed for 100 'difficult-to-diagnose' microcalcification cases with known biopsy results. These features were analyzed in a correlation study, which yielded a set of the five best image structure features. A feedforward backpropagation neural network was used to classify mammographic microcalcifications using the image structure features. The network was trained on 10 cases of mammographic microcalcifications and tested on an additional 85 'difficult-to-diagnose' cases using the selected features. The trained network yielded good results for classifying 'difficult-to-diagnose' microcalcifications into benign and malignant categories.
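Second-order gray-level histogram statistics are what scikit-image exposes as gray-level co-occurrence (GLCM) features; a sketch follows. The distances, angles, and the four property names are the library's standard options, not necessarily the paper's five selected features.

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def structure_features(patch):
        """Co-occurrence statistics of an 8-bit grey-level region,
        computed without any segmentation of the patch."""
        glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        return {p: graycoprops(glcm, p).mean()
                for p in ('contrast', 'correlation', 'energy', 'homogeneity')}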
Classification of ductal carcinoma in-situ by image analysis of calcifications from mammograms
Jon Parker, David R. Dance, David H. Davies, et al.
Image analysis methods have been developed to characterize calcifications associated with ductal carcinoma in situ (DCIS) and to differentiate between those having comedo and non-comedo histology. Cases were selected from the U.K. breast screening program, and in each case the histology and a magnified mammographic view were obtained. The films were digitized at a 25-micron sampling size and 8-bit grey-level resolution. Calcifications were manually segmented from the normal breast background, and a radiologist experienced in breast screening checked the labelling of the calcifications. An algorithm was developed to classify, firstly, the individual objects within a film and, secondly, the film itself. The algorithm automatically selected the combination of features giving the least estimated Bayes error from a set of object-oriented features evaluated for each calcification. The k-nearest-neighbors statistical approach was then used to classify individual objects, giving a ratio of comedo to non-comedo objects for a set of training films. Films were classified by applying a threshold to this ratio. In separating typical comedo from typical non-comedo cases, the success rate of the algorithm was 100% for a training set of 4 cases and a test set of 16 cases.
Comparative evaluation of pattern recognition techniques for detection of microcalcifications
Kevin S. Woods, Jeffrey L. Solka, Carey E. Priebe, et al.
Computer detection of microcalcifications in mammographic images will likely require a multi-stage algorithm that includes segmentation of possible microcalcifications, pattern recognition techniques to classify the segmented objects, a method to determine if a cluster of calcifications exists, and possibly a method to determine the probability of malignancy. This paper will focus on the classification of segmented objects as being either (1) microcalcifications or (2) non-microcalcifications. Six classifiers (2 Bayesian, 2 dynamic neural networks, a standard backpropagation network, and a K-nearest neighbor) are compared. Methods of segmentation and feature selection are described, although they are not the primary concern of this paper. A database of digitized film mammograms is used for training and testing. Detection accuracy is compared across the six methods.
Nonlinear indicators of malignancy
Christine J. Burdett, Harold G. Longbotham, Mita D. Desai, et al.
This paper investigates the use of fractal dimension analysis and nonlinear filters for quantifying the degree of lesion diffusion in mammograms. The fractal method involves computing the fractal dimension over the entire lesion. Based on the observation that malignant lesions usually exhibit rougher intensity profiles and often have more tortuous boundaries than benign lesions, the fractal dimension, a popular means of quantifying the degree of image/surface roughness, is proposed as a natural tool to assist in the diagnosis of malignancy. In this work, the fractal dimension of the image intensity surface is estimated using the fractional Brownian motion model. The nonlinear analysis was performed on horizontal and vertical lines (one-dimensional data) through the area of interest. These scan lines were also processed by a nonlinear (maximum) transformation as a means of reducing the dimensionality of the data, to help clarify the degree of diffusion present. For benign lesions little diffusion is present, whereas malignant lesions generally display a higher degree of diffusion. Results of applying these techniques to several malignant and benign lesions are presented, using mammogram x-rays digitized to a 512 × 512 pixel resolution with 8 bits of gray-scale resolution.
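A sketch of the fractional-Brownian-motion estimate: for fBm the expected absolute intensity difference grows as lag^H, so H is the slope of a log-log fit, and the surface fractal dimension is D = 3 − H. The maximum lag is an illustrative choice.

    import numpy as np

    def fractal_dimension(img, max_lag=8):
        """Estimate the Hurst exponent H from mean absolute intensity
        differences at increasing pixel lags, then return D = 3 - H."""
        img = img.astype(np.float64)
        lags, diffs = [], []
        for k in range(1, max_lag + 1):
            dh = np.abs(img[:, k:] - img[:, :-k]).mean()   # horizontal lag
            dv = np.abs(img[k:, :] - img[:-k, :]).mean()   # vertical lag
            lags.append(k)
            diffs.append(0.5 * (dh + dv))
        H = np.polyfit(np.log(lags), np.log(diffs), 1)[0]
        return 3.0 - H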
Nuclear feature extraction for breast tumor diagnosis
W. Nick Street, W. H. Wolberg, O. L. Mangasarian
Interactive image processing techniques, along with a linear-programming-based inductive classifier, have been used to create a highly accurate system for diagnosis of breast tumors. A small fraction of a fine needle aspirate slide is selected and digitized. With an interactive interface, the user initializes active contour models, known as snakes, near the boundaries of a set of cell nuclei. The customized snakes are deformed to the exact shape of the nuclei. This allows for precise, automated analysis of nuclear size, shape and texture. Ten such features are computed for each nucleus, and the mean value, largest (or 'worst') value and standard error of each feature are found over the range of isolated cells. After 569 images were analyzed in this fashion, different combinations of features were tested to find those which best separate benign from malignant samples. Ten-fold cross-validation accuracy of 97% was achieved using a single separating plane on three of the thirty features: mean texture, worst area and worst smoothness. This represents an improvement over the best diagnostic results in the medical literature. The system is currently in use at the University of Wisconsin Hospitals. The same feature set has also been utilized in the much more difficult task of predicting distant recurrence of malignancy in patients, resulting in an accuracy of 86%.
Image Analysis II
Postprocessing cylindrical surface data of a subject's head
Joseph H. Nurre
Full-field surface data of cylindrically shaped objects, such as a human head, can be acquired quickly by rotating a laser scanner and imaging system about the subject. The measurements are, however, subject to random noise. The noise can be due to stray external light sources or unexpected surface reflection and refraction characteristics, and it presents itself as high-frequency surface roughness and surface spikes. To smooth the data, a regularization technique is employed. First, spikes in the data were identified with a median filter. The cylindrical format of the data required special consideration when implementing the median filter: surface patches further away from the cylindrical axis had fewer sampling points for determining a median. With the spikes identified, regularization could be used to reduce the discontinuities of the surface data by interpolating these points to an appropriate location.
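A sketch of the despiking step, assuming SciPy's median filter with wraparound so the filter window closes seamlessly around the cylinder; the window size and spike threshold are illustrative, and the paper's regularization would relocate the flagged points more smoothly than this direct replacement.

    import numpy as np
    from scipy.ndimage import median_filter

    # surface: range measurements on a cylindrical grid,
    # rows = position along the axis, columns = angle (wraps around)
    def despike_cylindrical(surface, size=(3, 5), spike_sigmas=3.0):
        """Flag points far from the local (wraparound) median as spikes
        and replace them with the median value."""
        med = median_filter(surface, size=size, mode='wrap')
        resid = surface - med
        spikes = np.abs(resid) > spike_sigmas * np.std(resid)
        cleaned = surface.copy()
        cleaned[spikes] = med[spikes]
        return cleaned, spikes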
Verification techniques for x-ray and mammography applications
Stavros A. Kotsopoulos, Dimitris C. Lymberopoulos
The integration of the medical information environment demands the study and development of high-speed data communication systems with specially designed 'endsystems' (MWS, etc.) for flexible and reliable data transmission/reception, handling, and manipulation. An important parameter that affects overall system performance is the 'quality factor' of the communicated medical data produced by a wide range of modern modalities. The present paper describes a set of tests, performed on a medical communication network based on a teleworking platform, designed to optimize the sensitivity parameters of the modalities by remote fine re-adjustments guided by experts.
Hierarchical top-down shape classification based on multiresolution skeletons
Su-Lin Yang, Peter D. Scott, Cesar Bandera
This paper describes an algorithm for hierarchical shape classification based on multiresolution skeletons, and its application to the detection and identification of objects in noisy, cluttered imagery. A pyramid of stored two dimensional templates is employed to identify the object class, its location and spatial orientation. The skeleton of the object is selected for shape representation in this paper since it is a good 2D shape descriptor and relatively robust. The morphologically computed skeleton is an implementation of the medial axis transform. A real-time recognition scheme based on Borgefors' chamfer matching technique is presented which employs multiresolution top-down matching of object medial axis skeletons in a 4:1 pyramid. The proliferation of candidate points at higher resolution is controlled with a clustering scheme. In order to permit small and simply shaped objects to be discriminated from large, complex objects whose skeletons are supersets, we introduce negative match weight scores on the subset of the polygon discriminating the two templates. Results with training sets of noisy and cluttered images are presented. This scheme is shown to be capable of real-time detection and characterization of targets with good reliability in a test scenario.
Image Analysis I
Processing of medical images in compressed format
Robert S. Ledley
Hundreds of thousands of medical images are produced daily in the United States and must be stored; in addition, many of them must be processed. Given this volume, such processing can strain even current computer capabilities. In the near future, however, these images will be stored in compressed format to conserve storage space; compression ratios of 50:1 to 100:1 are not uncommon. Because medical images will be available in compressed format, and to help accelerate their processing, we propose that image processing be carried out on the greatly restricted number of compression parameters rather than on the pixels themselves, as is usual. Processing the substantially smaller number of compressed parameters for an image should in many cases be faster than processing the image pixel by pixel. The feasibility of this approach is discussed based upon components of the JPEG image compression standard.
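A toy illustration of operating on block-DCT coefficients (the transform underlying JPEG) instead of pixels; a real JPEG stream adds quantization and entropy coding, which this sketch ignores, and the operations shown are simple examples rather than the paper's.

    import numpy as np
    from scipy.fft import dctn, idctn

    def blocks(img, n=8):
        h, w = img.shape
        return img.reshape(h // n, n, w // n, n).swapaxes(1, 2)

    def unblocks(b):
        hb, wb, n, _ = b.shape
        return b.swapaxes(1, 2).reshape(hb * n, wb * n)

    # 8x8 block DCT, as in JPEG
    img = np.random.default_rng(2).normal(size=(64, 64))
    coef = dctn(blocks(img), axes=(2, 3), norm='ortho')

    # Operate on the coefficients instead of the pixels:
    coef *= 1.2                   # global contrast scaling
    coef[:, :, 0, 0] += 10.0      # brightness shift via the DC terms only

    out = unblocks(idctn(coef, axes=(2, 3), norm='ortho'))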
Image Analysis II
Boundary detection and color segmentation in skin tumor images
Fikret Ercal, Madhav Moganti, W. V. Stoecker, et al.
Boundary detection has been recognized as one of the difficult problems in image processing and pattern analysis, particularly in medical imaging applications; no unified approach exists, and solutions have proved to be application dependent. In this paper, we present a simple yet effective method for finding the borders of tumors as an initial step towards the diagnosis of skin tumors from their color images. The method makes use of an adaptive color metric derived from the red, green, and blue (RGB) planes that contains the information needed to discriminate the tumor from the background. Using this coordinate transformation, the image is segmented. The tumor portion is then extracted from the segmented image and its borders are drawn. Experimental results which verify the effectiveness of our approach are given.
Estimation of sheep and pig body composition by x-ray CT, MRI, and ultrasound imaging
Chris A. Glasbey
Non-invasive imaging techniques have revolutionized diagnostic medicine, and promise to do likewise in animal experimentation and breeding. In this paper, three applications are described in which the objective is to predict body composition.
Application of image restoration and three-dimensional visualization techniques to frog microvessels in-situ loaded with fluorescent indicators
Stamatis N. Pagakis, Fitz-Roy E. Curry, Joyce F. Lenz
In situ experiments on microvessels require image sensors of wide dynamic range, because of the large intensity variations in the scene, and 3D visualization, because of the thickness of the preparation. The images require restoration because of inherent tissue movement, out-of-focus light contamination, and blur. To resolve these problems, we developed a system for quantitative imaging based on a 12-bits/pixel cooled CCD camera and a PC-based digital imaging system. We applied the optical sectioning technique with image restoration, using a modified nearest-neighbor algorithm and iterative constrained deconvolution on each of the 2D optical sections. For the 3D visualization of the data, volume rendering software was used. The data provided 3D images of the distribution of fluorescent indicators in intact microvessels. Optical cross-sections were also compared with cross-sections of the same microvessels examined in the electron microscope after their luminal surfaces were labeled with a tracer that was both electron-dense and fluorescent. This procedure enabled precise identification of the endothelial cells in the microvessel wall as the principal site of accumulation of the fluorescent calcium indicator, fura-2, during microperfusion experiments.
Image Reconstruction I
Scan-rate reduction for tomographic imaging of time-varying distributions
Yoram Bresler, Nathaniel Parker Willis
We formulate the problem of data acquisition as a time-sequential (TS) sampling problem of temporally bandlimited signals, where only one view can be taken at a time, but the time interval between successive views is independent of their angular separation.
Computed alignment of serial sections for 3D reconstructions
Lyndon S. Hibbard, Robert A. Grothe Jr., Tamara L. Arnicar-Sulze
Image alignment is an absolute requirement for creating three-dimensional reconstructions from serial sections. The rotational and translational components of misalignment can be corrected by an iterative correlation procedure, but for images having significant differences, alignment can fail with a likelihood proportional to the extent of the differences. We found that translational correction was much more reliably determined when lowpass filters were applied to the product transforms from which the correlations were calculated. Also, rotational corrections based on polar analyses of the images' autocorrelations instead of the images directly contributed to more accurate alignments. These methods were combined to generate 3-D reconstructions of brain capillaries imaged by transmission electron microscopy.
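A sketch of the translational step described above: cross-correlation computed through the Fourier product, with a Gaussian lowpass applied to the product transform before inversion; the filter width is an illustrative choice.

    import numpy as np

    def translation_offset(fixed, moving, sigma=0.1):
        """Correlation-based translation estimate; low-pass filtering the
        product transform makes the correlation peak more reliable when
        the two sections differ significantly."""
        P = np.fft.fft2(fixed) * np.conj(np.fft.fft2(moving))
        fy = np.fft.fftfreq(fixed.shape[0])[:, None]
        fx = np.fft.fftfreq(fixed.shape[1])[None, :]
        P *= np.exp(-(fx ** 2 + fy ** 2) / (2 * sigma ** 2))  # lowpass
        corr = np.fft.ifft2(P).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        return [p if p <= s // 2 else p - s         # unwrap to signed shift
                for p, s in zip(peak, corr.shape)]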
Mean-field and information-theoretic algorithms for direct segmentation of tomographic images
Ian B. Kerfoot, Yoram Bresler, Andrew S. Belmont
We apply the weak membrane model with optimization by mean field annealing to the direct segmentation of tomographic images. We also introduce models based on the minimum description length principle that include penalties for measurement error, boundary length, regions, and means. Outliers are prevented by upper and lower bound constraints on pixel values. Several models are generalized to three-dimensional images. The superiority of our models to convolution back projection is demonstrated experimentally.
Three-dimensional reconstruction of complex shapes based on the Delaunay triangulation
Jean-Daniel Boissonnat, Bernhard Geiger
We propose a solution to the problem of 3D reconstruction from cross-sections, based on the Delaunay triangulation of object contours. Its properties--especially the close relationship to the medial axis--enable us to do a compact tetrahedrization resulting in a nearest-neighbor connection. The reconstruction of complex shapes is improved by adding vertices on and inside contours.
Image Reconstruction II
Parallel image reconstruction for 3D positron emission tomography from incomplete 2D projection data
Thomas Michael Guerrero, Anthony R. Ricci, Magnus Dahlbom, et al.
The problem of excessive computational time in 3D positron emission tomography (3D PET) reconstruction is defined, and we present an approach to solving it through the construction of an inexpensive parallel processing system and the adoption of the FAVOR algorithm. Currently, the 3D reconstruction of the 610 images of a total-body procedure would require 80 hours, and the 3D reconstruction of the 620 images of a dynamic study would require 110 hours. An inexpensive parallel processing system for 3D PET reconstruction is constructed by integrating board-level products from multiple vendors. The system achieves its computational performance through the use of 6U VME boards carrying four i860 processors each; the processor boards of five manufacturers are discussed from our perspective. The new 3D PET reconstruction algorithm FAVOR (FAst VOlume Reconstructor), which promises a substantial speed improvement, is adopted. Preliminary results from parallelizing FAVOR are used to formulate architectural improvements for this problem. In summary, we address the problem of excessive computational time in 3D PET image reconstruction through the construction of an inexpensive parallel processing system and the parallelization of a 3D reconstruction algorithm that uses the incomplete data set produced by current PET systems.
Genetic connectionism for computed tomographic reconstructions
Salah Darenfed
The reconstruction of a transparent medium from projections acquired over a limited angle of view is formulated as a combinatorial optimization problem. Given a description of a phantom object, the necessary optical pathlength data are computed by numerical quadrature for a small angle of view. The objective is to distribute the object units among all object cells such that the resulting distribution is the most probable one consistent with the projection data. The reconstruction is based on a network comprising 3 layers. A genetic algorithm (GA) finds the adjustable network coefficients, which represent the 2D Fourier components. The object units are coded as a population of strings. New strings are generated through the use of genetic operators: probabilistic selection (reproduction), crossover (recombination of parental solutions), and mutation (exploration in the neighborhood of the current solution). Unlike iterative improvement, which proceeds from a single point, GAs search population by population, resulting in a global search procedure. A hypothetical object field in the form of a mathematical phantom is reconstructed.
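A generic GA skeleton with the three operators named above; the fitness function (here left abstract) would score a candidate bit string's agreement with the measured pathlength data and must return non-negative values for roulette selection. All rates and sizes are illustrative.

    import numpy as np
    rng = np.random.default_rng(3)

    def ga(fitness, n_bits, pop=50, gens=200, p_cross=0.8, p_mut=0.01):
        """Minimal genetic algorithm: roulette selection (reproduction),
        one-point crossover (recombination), bit-flip mutation."""
        P = rng.integers(0, 2, size=(pop, n_bits))
        for _ in range(gens):
            f = np.array([fitness(ind) for ind in P])
            probs = f / f.sum() if f.sum() > 0 else np.full(pop, 1 / pop)
            P = P[rng.choice(pop, size=pop, p=probs)]     # selection
            for i in range(0, pop - 1, 2):                # crossover
                if rng.random() < p_cross:
                    cut = rng.integers(1, n_bits)
                    P[i, cut:], P[i + 1, cut:] = (P[i + 1, cut:].copy(),
                                                  P[i, cut:].copy())
            flips = rng.random(P.shape) < p_mut           # mutation
            P[flips] ^= 1
        return P[np.argmax([fitness(ind) for ind in P])]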
Reconstruction from nonuniform data using the energy reduction, the steepest descent, and the contraction mapping method
Yongwan Park
We introduce three iterative methods for reconstructing a band-limited function from its unevenly spaced sampled data. Each method introduces its own varying coefficient, which is adaptively determined at each iteration. The varying coefficient for the first algorithm is determined adaptively from the error energy reduction of the signal. The second method, the method of steepest descent, determines the varying coefficient from the error energy reduction of the unevenly spaced sampled data. Third, the contraction mapping method determines the varying coefficient from the distance reduction of the estimated signals. The signal reconstructed by these algorithms converges to the desired signal if the basis functions {exp(jωt_n)}, where {t_n} are the unevenly spaced sampling points, form a complete set in the signal subspace of the original signal.
ART with optimal 2D sampling function
A new model is proposed to reconstruct an image from its ray sums using Algebraic Reconstruction Techniques (ART). Assuming that the original image is band-limited, an iterative algorithm is developed that evaluates the updated image and reduces the sampling error at each iteration. In each iteration, the updated image estimate is formed by distributing a correction to each sample of the ray sum, weighted by a factor equal to the fractional area intercepted by the ray sum and the sampling function. To model a 2D image, an optimal sampling function is used in which the sampling function is a cylindrical pulse instead of the customary flat-top version of a 2D square pulse. Given the energy concentration of the pulse, a class of such pulses is generated, and the pulse with maximum energy concentration is used for sampling the 2D image. Such a pulse is generated by determining the eigenfunctions of a homogeneous Fredholm equation of the second kind with a symmetric kernel. Moreover, it is shown that the eigenfunctions of this integral equation are exactly such classes of pulses, where the corresponding eigenvalue measures the energy concentration of an eigenfunction. The desired pulse is the eigenfunction with the maximum eigenvalue.
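The correction rule described above is essentially a relaxed Kaczmarz/ART sweep; a minimal sketch follows, in which each row of the system matrix would hold the fractional-area weights for one ray. The relaxation parameter and iteration count are illustrative.

    import numpy as np

    def art(A, b, n_iter=10, relax=1.0, x0=None):
        """Kaczmarz-style ART: for each ray i, project the residual
        b[i] - A[i].x back along the ray, weighted by A[i]."""
        m, n = A.shape
        x = np.zeros(n) if x0 is None else x0.astype(np.float64)
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_iter):
            for i in range(m):
                if row_norms[i] == 0.0:
                    continue
                resid = b[i] - A[i] @ x
                x += relax * (resid / row_norms[i]) * A[i]
        return x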
Panel Discussion: Design of a Common Database for Research in Mammogram Image Analysis
Creating a database of digital mammograms
The introduction of many new programmes of mass screening for breast cancer has once again drawn attention to the problems associated with mammographic interpretation. Radiologists are required to read large numbers of films, most of which are normal, searching for abnormalities that may be small or subtle. High performance standards are essential if such screening programmes are to be effective. There is increasing acceptance of digital imaging amongst the radiological community, with routine use of inherently digital modalities such as CT and MRI, and with the emerging PACS culture which has highlighted the potential benefits of digital archiving and transmission of radiological images. In the screening context, advantages of digital image technology such as relatively compact data storage, flexible image management and manipulation, automatic detection and classification of significant structures, and reproducibility of analyses, may be of particular importance. The advent of digital technology may be rather slower in mammography, where the extremely high standards of image quality and resolution achieved in conventional film imaging cannot readily be matched by current technology. However, it is conceivable that the benefits gained by employing digital image processing and analysis methods might eventually compensate for any loss of image quality and enable more effective detection and analysis of early signs of breast disease.
Design of a common database for research in mammogram image analysis
David R. Dance
The advantages of a common database for mammographic image analysis are indisputable. I believe that we have reached the stage where the need for such a database is urgent. However, the problem of designing a common database is not trivial, and there are many questions which must be answered before major effort is invested in its production.
Computerized mammographic image analysis for reducing false positive rate for biopsy recommendation
Atam P. Dhawan
Two major issues will be raised during the panel discussion: 1. There is a need to develop a computerized image analysis system to help physicians make decisions about biopsy recommendation based on certain mammographic signs. The computerized analysis must be compared with the ROC analysis of the best experts and trained to perform as well as human experts, or better if possible. Various types of presentation of computer-generated analyses/images from original digital mammograms should be tested in the search for the form of presentation that best helps the physician make correct decisions about biopsy recommendation. 2. For the computer system to be versatile and perform at the best level, it is essential that a large database of critical as well as nominal mammographic cases be developed and presented to a group of the best experts for ROC analysis. Such an analysis will serve as a gold standard for the performance evaluation and comparison of individual computer systems.
Importance of shared language for performance metrics
W. Philip Kegelmeyer
An important component of the mammogram database prepared by the University of South Florida is the detailed "truth" information which was included. Every pixel in each of the images containing some abnormality was matched by a "truth image" pixel whose value indicated the nature of the underlying tissue, whether entirely normal or any combination of a set of possible abnormalities. It is these truth images which make possible the objective determination of comparative algorithm performance. Inspired by the possibility of such comparative analysis, in these remarks I suggest an enhancement of the database that will permit the sort of evaluation favored by radiologists, and propose some steps that we in the computer mammographic analysis community can take to most effectively compare and communicate our work. In both cases the primary concern is the clear and most effective use of metrics.
Clinical considerations for a mammography database
Carolyn Kimme-Smith
In addition to considerations concerning resolution and contrast representation in a mammography database, there are three other areas that may be overlooked when organizing a representative database. These areas are (1) pathological consensus on the diagnosis, (2) adequate quality of the mammogram, and (3) the proportions of normal/diseased and diseased/malignant cases in the database.
Design of a common database for research in mammogram image analysis
A common database is crucial for the rapid development of mammographic image analysis. It would enable valid evaluation of different techniques to be made in a meaningful manner. The current situation, in which each investigator uses his or her own database, makes it impossible to determine which techniques are most useful, or even the validity of a given technique.
Database for mammographic image research
Rangaraj M. Rangayyan, Raman B. Paranjape, Liang Shen, et al.
The design of a useful and practical database may be a key step in the evolution of computer-aided enhancement and analysis of mammograms. Thus, careful consideration of issues involved in designing this database is warranted. We address the issues raised in the invitation to the panelists point by point.