Proceedings Volume 2167

Medical Imaging 1994: Image Processing


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 11 May 1994
Contents: 15 Sessions, 88 Papers, 0 Presentations
Conference: Medical Imaging 1994
Volume Number: 2167

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
Sessions
  • Segmentation I
  • Segmentation II
  • Scale-Space and Medial Axes
  • Registration
  • Reconstruction
  • Methods for 3D
  • Interpolation, Restoration, and Visualization
  • Tracking, Measurement, and Classification
  • Pattern Recognition Applications
  • Pattern Recognition Methodology
  • Enhancement and Artifact Reduction
  • Enhancement and Artificial Neural Networks
  • Artificial Neural Networks and Applications
  • Poster Session
  • Workshop on Applications of Object-Oriented Modeling
Segmentation I
Segmentation of the brain from 3D MRI using a hierarchical active surface template
John W. Snell, Michael B. Merickel, James M. Ortega, et al.
The accurate segmentation of the brain from three-dimensional medical imagery is important as the basis for visualization, morphometry, surgical planning and intraoperative navigation. The complex and variable nature of brain anatomy makes recognition of the brain boundaries a difficult problem and frustrates segmentation schemes based solely on local image features. We have developed a deformable surface model of the brain as a mechanism for utilizing a priori anatomical knowledge in the segmentation process. The active surface template uses an energy minimization scheme to find a globally consistent surface configuration given a set of potentially ambiguous image features. Solution of the entire 3D problem at once produces superior results to those achieved using a slice by slice approach. We have achieved good results with MR image volumes of both normal and abnormal subjects. Evaluation of the segmentation results has been performed using cadaver studies.
Interacting with image hierarchies for fast and accurate object segmentation
David Volk Beard, David H. Eberly, Bradley M. Hemminger, et al.
Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization including both surgery planning and radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we describe the Magic Crayon tool, which allows a user to define various anatomical objects rapidly and accurately by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.
Image segmentation by stochastically relaxing contour fitting
Contour fitting in image segmentation guarantees the closedness of the segment boundary at any stage of the approximation, thus preserving an important global property of the segment. A contour fitting scheme consists of strategies to modify the contour and to optimize the current approximation. For the purpose of contour modification a two-dimensional adaptation of a geometrically deformable model (GDM) is employed. A GDM is a polygon that is placed into a structure to be segmented and deformed until it adequately matches the segment's boundary. Deformation occurs by vertex translation and by introducing new vertices. Sufficient boundary resemblance is achieved by choosing vertex locations in such a way that a function is optimized whose different terms describe features attributed to the segment or its boundary. In order to find ideal vertex locations, a stochastic optimisation method is applied which is able to avoid termination of the deformation process in a local optimum (caused, e.g., by noise or artefacts). The deformation terminates after segment boundary and GDM are sufficiently similar. Missing boundary parts between vertices are detected by a path-searching technique in a graph whose nodes represent pixel locations. The segmentation algorithm was found to be versatile and robust in the presence of noise, and able to segment artificial as well as real image data.
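A minimal sketch of the stochastic vertex relaxation idea, in Python with NumPy. The energy functional `energy(vertices)` is assumed to be supplied by the caller (combining, for instance, boundary and region terms as the abstract describes); the Gaussian single-vertex proposal and the linear cooling schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def anneal_contour(vertices, energy, n_iter=5000, T0=1.0, step=2.0, seed=0):
    """Stochastically relax polygon vertices: propose a random displacement of
    one vertex and accept it with the Metropolis rule, so that local optima
    (caused, e.g., by noise) can be escaped.  `energy(vertices)` is any
    user-supplied functional over the polygon (assumed, not from the paper)."""
    rng = np.random.default_rng(seed)
    v = vertices.astype(float).copy()
    e = energy(v)
    for t in range(n_iter):
        T = T0 * (1 - t / n_iter) + 1e-6                  # linear cooling schedule
        cand = v.copy()
        i = rng.integers(len(v))
        cand[i] += rng.normal(scale=step, size=2)         # move one vertex
        ec = energy(cand)
        if ec < e or rng.random() < np.exp((e - ec) / T):  # Metropolis acceptance
            v, e = cand, ec
    return v
```

Because worse moves are occasionally accepted at nonzero temperature, the polygon is not trapped by the first local optimum it encounters.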
Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images
Brian Johnston, M. Stella Atkins, Kellogg S. Booth
An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied, where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of the application of the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.
Fully automatic ventricle detection from cardiac MR images using machine learning
John J. Weng, Ajit Singh, Ming-Yee Chiu
The objective of this work is to develop a technique that is reliable, adaptive, and versatile enough to solve the problem of region detection for a relatively wide class of medical images. Learning is essential in approaching this objective. In order to fully use the properties of the medical images and obtain high efficiency, we compute a binary visual attention map which contains the region of interest as well as other things. The learning takes place in two stages: (1) learning for automatic selection of threshold values; (2) learning for automatic selection of the region of interest from candidate regions in the attention map. The result from the second stage is evaluated based on a learned cost measure and the outcome is fed back to the first stage when necessary. This feedback enhances the reliability of the entire system. Experiments have been conducted to approximately locate the endocardium boundaries of the left and right ventricles from gradient-echo MR images.
Segmentation II
Ridge flow models for image segmentation
David H. Eberly, Stephen M. Pizer
In this paper we introduce a new algorithm for segmentation of medical images of any dimension. The segmentation is based on geometric methods and multiscale analysis. A sequence of increasingly blurred images is created by Gaussian blurring. Each blurred image is segmented by locating its ridges, decomposing the ridges into curvilinear segments and assigning a unique label to each, and constructing a region for each ridge segment based on a flow model which uses vector fields naturally associated with the ridge finding. The regions from the initial image are leaf nodes in a tree. The regions from the blurred images are interior nodes of the tree. Arcs of the tree are constructed based on how regions at one scale merge via blurring into regions at the next scale. Objects in the image are represented by unions and differences of subtrees of the full tree. The tree is used as input to a visualization program which allows the user to interactively explore the hierarchy and define objects. Some results are provided for a 3D magnetic resonance image of a head.
Automatic segmentation of MR brain images
Nigel John, Xiaohong Li, Akmal Younis, et al.
An automatic image segmentation method for MR brain images, based on the gray-level characteristics of the images, is developed. The method analyses a sequence of MR brain images to provide region information as well as boundary data for classification and eventual creation of 3D models. The system incorporates global information from the image set through an analysis of the statistics of the cooccurrence matrices. Local consistency is then applied with the use of a relaxation algorithm on individual images. The cooccurrence matrices provide conditional probabilities for the classification of pixels into specific regions or boundaries based on the matrix distribution. A constrained stochastic relaxation is then used to refine the probabilistic labels using local image information. Results of the technique are presented for MR brain images.
Three-dimensional deformable model for segmentation and tracking of anisotropic cine cardiac MR images
Alok Gupta, Tom O'Donnell, Ajit Singh
MR imaging is increasingly being used as a method for analyzing and diagnosing cardiac function. Segmentation of heart chambers facilitates volume computation, as well as ventricular motion analysis. Successful techniques have been developed for segmentation of individual 2D slices. However, 2D models limit the description of a 3D phenomenon to two dimensions and use only 2D constraints. The resulting model lacks interslice coherency, making interslice interpolation necessary. In addition, the model is more susceptible to corruption due to noise local to one or more slices. We present work towards an approach to segmenting cine MR images using a 3D deformable model with rigid and nonrigid components. Past approaches have used models without rigid components or used isotropic CT data. Our model adaptively subdivides the mesh in response to the forces extracted from image data. Additionally, the local mesh of the model encodes surface orientation to align the model with the desired edge directions, a crucial constraint for distinguishing close anatomical structures. The modified subdivision algorithm preserves orientation of the elements by vertex ordering. We present results of segmenting two multi-slice cardiac MR image series with interslice resolutions of 8 and 4 mm/slice, and an intraslice resolution of 1 mm/pixel. We also include work in progress on tracking multislice, multiphase cine cardiac MR sequences with 4 mm interslice and 1 mm intraslice resolution.
Image segmentation applied to CT examination of lymphangioleiomyomatosis
Jason J. Everhart, T. Michael Cannon, John D. Newell Jr., et al.
The purpose of this study is to use modern image segmentation techniques to quantitate cyst area and number within a complete CT examination of the lungs. Lymphangioleiomyomatosis (LAM) was chosen because this disease produces many well-defined thin-walled cysts of varying sizes throughout the lungs that provide a good test for 2D image segmentation techniques, which are used to separate LAM cysts from the normal lung tissue. Quantitative measures of the lung, such as cyst area versus frequency, are then automatically extracted. Three women with LAM were examined using CT slices obtained at 20 mm intervals, with 1 to 1.5 mm collimation, and a pixel size of 0.4 - 0.5 mm. Our segmentation algorithm operates in several stages. First, masks for each lung are automatically generated, thus allowing only lung pixels to be considered for the cyst segmentation. Next, we threshold the data under the masks at a level of -900 Hounsfield units. The threshold segments LAM cysts from normal lung tissue and other structures, such as pulmonary veins and arteries. In order to determine the size of individual cysts, we grow all regions having brightness values lower than the threshold within the masked regions. These regions, which correspond to cysts, are then sorted by size, and a cyst histogram for each patient is computed.
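The thresholding and region-counting stage described above can be sketched with SciPy's connected-component tools. In the sketch below, `ct_slice`, `lung_mask`, the -900 HU threshold, and the pixel area are assumed inputs, and connected-component labelling stands in for the paper's region growing.

```python
import numpy as np
from scipy import ndimage

def cyst_size_histogram(ct_slice, lung_mask, threshold_hu=-900, pixel_area_mm2=0.25):
    """Label candidate cyst regions below an HU threshold inside a lung mask.

    ct_slice  : 2D array of CT values in Hounsfield units (assumed input)
    lung_mask : boolean array, True inside the lung field (assumed input)
    Returns region areas in mm^2, sorted, for building a cyst-size histogram."""
    candidate = (ct_slice < threshold_hu) & lung_mask            # threshold under the mask
    labels, n = ndimage.label(candidate)                         # connected low-density regions
    sizes_px = ndimage.sum(candidate, labels, index=range(1, n + 1))
    return np.sort(sizes_px) * pixel_area_mm2

# Usage with synthetic data standing in for a real CT slice:
# ct = -800 * np.ones((256, 256)); ct[100:110, 100:110] = -950
# print(cyst_size_histogram(ct, np.ones_like(ct, dtype=bool)))
```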
Pinta: a system for visualizing anatomical structures of the brain from MR imaging
Bahram Parvin, William E. Johnston
Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from Magnetic Resonance Imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. Next, the slice level attributed graphs are coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, surfaces of the 3D brain structures can be constructed and visualized. We present results obtained from real data and examine the performance of our system.
Scale-Space and Medial Axes
Robust object representation through object-relevant use of scale
Bryan S. Morse, Stephen M. Pizer, Daniel S. Fritsch
In previously published papers we have presented an object representation known as a core that represents an object at measurement scales (tolerances) relative to the local size of the object. Such object-relevant scale allows one to be more sensitive to such detail (and, of course, the effects of noise, blurring, and other image degradation) for smaller objects while being less sensitive to such detail (and image degradation) for larger objects. This produces a more robust mechanism that is able to trade off between sensitivity to noise and loss of detail by considering the properties of the object involved. This paper, after briefly reviewing the definition and computation of cores, studies this relationship between noise and object size and shows that the algorithms for computing cores do indeed produce more stable results for larger objects by automatically selecting correspondingly larger, less noise-sensitive scales.
Scale-space and boundary detection in ultrasonic imaging using nonlinear signal-adaptive anisotropic diffusion
Erik N. Steen, Bjoern Olstad
In this paper we develop a strategy for scale-space filtering and boundary detection in medical ultrasonic imaging. The strategy integrates a signal model for displayed ultrasonic images with the nonlinear anisotropic diffusion. The usefulness of the strategy is demonstrated for applications in volume rendering and automatic contour detection. The discrete implementation of anisotropic diffusion is based on a minimal nonlinear basis filter which is iterated on the input image. The filtering scheme involves selection of a threshold parameter which defines the overall noise level and the magnitude of gradients to be preserved. In displayed ultrasonic images the speckle noise is assumed to be signal dependent, and we have therefore developed a scheme which adaptively adjusts the threshold parameter as a function of the local signal level. The anisotropic diffusion process tends to produce artificially sharp edges and artificial boundary corners. Another modification has therefore been made to avoid edge-enhancement by leaving significant monotone sections unaltered. The proposed filtering strategy is evaluated both for synthetic images and real ultrasonic images.
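To give the flavour of the filtering stage, the sketch below is a minimal Perona-Malik-style diffusion whose threshold is adapted to the local signal level. Scaling the threshold by the local mean is an assumed stand-in for the authors' speckle model, and the modification that leaves monotone sections unaltered is not reproduced.

```python
import numpy as np
from scipy import ndimage

def adaptive_anisotropic_diffusion(img, n_iter=20, dt=0.15, k_rel=0.2, win=7):
    """Perona-Malik-style diffusion whose edge threshold k scales with the
    local mean intensity (a simple stand-in for a signal-dependent speckle
    model).  img is a 2D float array; boundaries wrap via np.roll for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        local_mean = ndimage.uniform_filter(u, size=win)
        k = k_rel * np.maximum(local_mean, 1e-6)     # signal-adaptive threshold
        dN = np.roll(u, -1, axis=0) - u              # differences in 4 directions
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        cN = np.exp(-(dN / k) ** 2)                  # conduction: small across edges
        cS = np.exp(-(dS / k) ** 2)
        cE = np.exp(-(dE / k) ** 2)
        cW = np.exp(-(dW / k) ** 2)
        u += dt * (cN * dN + cS * dS + cE * dE + cW * dW)
    return u
```

Iterating the same small basis filter, as in the paper, builds up the scale space one diffusion step at a time.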
Cores for image registration
Daniel S. Fritsch, Stephen M. Pizer, Edward L. Chaney, et al.
Cores provide a means for describing fundamental properties of objects in gray-scale images including object position and width, and object-subfigure relationships. In this paper, we demonstrate several methods for registering 2D and 3D gray-scale medical images using object information summarized by the core.
Object-based interpolation via cores
Derek T. Puff, David H. Eberly, Stephen M. Pizer
We propose an object-based interpolation that utilizes the core, a multiscale representation of object shape, as the basis for determining an interpolated object's position and intensities. The core calculations are made directly from image intensities, with no intermediate location of object boundaries. The core of the interpolated object is first determined by an interpolation of the cores of the corresponding objects. The intensity at each of the positions in the object specified by the interpolated core is then determined by interpolating intensities from the equivalent positions in the corresponding objects; positions with equivalent distances along and from the core are chosen via the algorithm described in the paper. This object-based geography has produced promising results for simple, single-core interobject interpolations, and research continues in determining an interpolation method for multifigure objects as well as the background positions in realistic medical images.
Core-based boundary claiming
Stephen M. Pizer, Shobha Murthy, David Chen
The core (defined in the accompanying paper by Morse) provides a means for characterizing the middle/width behavior of a figure, that is, an object or component thereof, directly from the image intensities and in a way insensitive to detail. The figures in question are either complete objects, contained subobjects, object protrusions, or object intrusions. The core provides the ability to claim regions of the image as including either the boundary information of an object or its protrusion or intrusion cores. The angulation in scale space, spatial position, and scale of a figure's core allows one to move from the core to a boundary at the scale of the figure. The core of protrusion and intrusion subfigures of the figure in question will intersect this boundary at the scale of the core. Moreover, if each point on the boundary at the scale of the core is blurred in proportion to the scale of the corresponding point on the core, a collar is formed within which the boundary of the figure can be found. We show how to find the collar and how stably to find the boundary, given the collar, even in noisy or blurred objects. This ability leads to an accurate, robust, automatic method of object area computation, and the generalization of this approach to 3D also provides the basis for efficient surface rendering and volume rendering.
Registration
Automatic registration of 3D images of the brain based on fuzzy objects
Andre M. F. Collignon, Dirk Vandermeulen, Paul Suetens, et al.
Multimodal fuzzy voxel labeling is presented as the basis for a new image registration criterion. The corresponding registration system's architecture performs an iterative calculation of the labeling and the registration process simultaneously, while most other registration systems perform segmentation and iterative estimation of registration parameters sequentially. It will be argued that its application leads to more automated and more accurate registration solutions than does, e.g., the use of typical surface-based registration systems. In order to support the arguments raised we have performed a case study using both a 2D MR software phantom image and 2D and 3D MR/CT image data. In this case study we looked at the behaviour of maximum likelihood voxel labeling as the simplest instantiation of a fuzzy voxel labeling algorithm. However, the architecture is open to integration of more general multimodal fuzzy voxel labeling algorithms.
Approaches to registration using 3D surfaces
Torre D. Zuk, M. Stella Atkins, Kellogg S. Booth
This paper describes current iterative surface matching methods for registration, and our new extensions. Surface matching methods use two segmented surfaces as features (one dynamic and one static) and iteratively search parameter space for an optimal correlation. To compare the surfaces we use an anisotropic Euclidean chamfer distance transform, based on the static surface. This type of DT was analyzed to quantify the errors associated with it. Hierarchical levels are attained by sampling the dynamic surface at various rates. By using the reduced amount of data provided by the surface segmentation, each hierarchical level is formed quickly and easily, and only a single distance transform is needed, thus increasing efficiency. Our registrations were performed in a data-flow environment created for multipurpose image processing. The new modifications were tested on a large number of simulations, over a wide range of rigid body transformations and distortions. Multimodality and multipatient registration tests were also completed. A thorough examination of these modifications in conjunction with various minimization methods was then performed. Our new approaches provide accuracy and robustness, while requiring less time and effort than conventional methods.
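The core loop (compute a distance transform of the static surface once, then search for transform parameters that minimize the distance values sampled at the transformed dynamic surface points) can be sketched as follows. The sketch assumes a rigid 2D transform about the image origin, SciPy's isotropic Euclidean distance transform, and Powell minimization; the anisotropic transform, hierarchical sampling, and distortion handling described in the paper are omitted.

```python
import numpy as np
from scipy import ndimage, optimize

def chamfer_cost(params, dt, dynamic_pts):
    """Mean distance-transform value at the transformed dynamic surface points.
    params = (angle, tx, ty); dynamic_pts is an (N, 2) array of (row, col)."""
    a, tx, ty = params
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    pts = dynamic_pts @ R.T + np.array([tx, ty])
    rows = np.clip(np.round(pts[:, 0]).astype(int), 0, dt.shape[0] - 1)
    cols = np.clip(np.round(pts[:, 1]).astype(int), 0, dt.shape[1] - 1)
    return dt[rows, cols].mean()

def register_surfaces(static_mask, dynamic_pts):
    """static_mask: boolean image, True on the static surface."""
    dt = ndimage.distance_transform_edt(~static_mask)   # computed only once
    res = optimize.minimize(chamfer_cost, x0=np.zeros(3),
                            args=(dt, dynamic_pts), method="Powell")
    return res.x    # (rotation, tx, ty) minimizing the mean chamfer distance
```

Coarse-to-fine behaviour is obtained simply by subsampling `dynamic_pts`, which is why only one distance transform is ever needed.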
Automatic registration of temporal image pairs for digital subtraction angiography
Greg S. Cox, Gerhard de Jager
Temporal Digital Subtraction Angiography (DSA) is used to visualize blood vessels in x-ray images. A DSA image pair consists of the mask image, which is a digitized x-ray taken before a contrast medium is injected into the bloodstream, and the live image, which is taken once the contrast medium has traversed the circulatory system and reached the blood vessels of interest. The mask image is then subtracted from the live image and ideally only the contrast enhanced blood vessels should remain. DSA has two main limitations. Firstly, gross patient motion and physiological events occur in the time that elapses between x-rays. Secondly, there are local and global differences in the mean gray-level at corresponding points in the live and mask images, excluding the variations introduced by the contrast media. To solve the motion problem, we take the approach of matching regions around control points in the live image in a search area around the approximately corresponding points in the mask image. In this way a motion vector field that describes the spatial offset to the best match position in the mask image (with subpixel accuracy) is constructed. The problem of mean gray-level disparity between the live and mask images is to a large extent overcome by the use of a match measure that is invariant to overall additive gray-level differences. Incorrect mismatches caused by the contrast media are avoided by using multiple subtemplates in the matching process. The subtemplate method also allows the estimation of mean gray-level disparity between the mask and live images. The smoothed motion vector field and mean gray-level disparity estimates are used to perform an improved subtraction of the mask from the live image with a reduction in the artifacts that are a result of normal subtraction. Efficient best match search techniques are used to reduce the computational cost of the algorithm, at the expense of some difference image quality. Results are provided for simulated and actual DSA image pairs.
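One way to obtain a single sample of such a motion vector field with a measure that ignores additive gray-level differences is zero-mean normalized correlation over a search window, sketched below. `live`, `mask`, the template half-width, and the search range are assumed parameters; subpixel refinement and the multiple-subtemplate scheme of the paper are omitted, and the control point is assumed to lie away from the image borders.

```python
import numpy as np

def match_offset(live, mask, point, tpl_half=8, search=10):
    """Return the (dy, dx) shift in the mask image that best matches the live
    template around one control point, using a zero-mean normalized
    correlation score (invariant to additive gray-level offsets)."""
    r, c = point
    tpl = live[r - tpl_half:r + tpl_half + 1,
               c - tpl_half:c + tpl_half + 1].astype(float)
    tpl -= tpl.mean()
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            win = mask[r + dy - tpl_half:r + dy + tpl_half + 1,
                       c + dx - tpl_half:c + dx + tpl_half + 1].astype(float)
            win -= win.mean()
            score = np.sum(tpl * win) / (np.linalg.norm(tpl) * np.linalg.norm(win) + 1e-9)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift    # one sample of the motion vector field
```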
Effect of geometrical distortion correction in MR on image registration accuracy
Calvin R. Maurer Jr., Georges B. Aboutanos, Benoit M. Dawant, et al.
In this paper we investigate the effect of geometrical distortion correction in magnetic resonance (MR) images on the accuracy of the registration of x-ray computed tomography (CT) and MR head images for a fiducial marker (extrinsic point) method and a surface matching technique. We used CT and T2-weighted MR volume images acquired from seven patients who underwent craniotomies in a stereotactic neurosurgical clinical trial. Each patient had four external markers attached to transcutaneous posts screwed into the outer table of the skull. We define registration error as the distance between corresponding marker positions after registration and transformation. The accuracy of the fiducial marker method was determined by using each combination of three markers to estimate the transformation and the remaining marker to calculate registration error. Surface-based registration was accomplished by fitting MR contours corresponding to the CSF-dura interface to CT contours derived from the inner surface of the skull. Correction of geometrical distortion in MR images significantly reduced the registration error of both point-based and surface-based registration.
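The fiducial-marker registration and its leave-one-out error estimate can be illustrated with the standard SVD solution for a rigid transform between corresponding point sets. This is the generic Arun/Horn construction, not necessarily the authors' implementation; `mr_pts` and `ct_pts` are assumed (N, 3) arrays of corresponding marker centroids in millimetres.

```python
import numpy as np

def rigid_fit(P, Q):
    """Least-squares rotation R and translation t with R @ P[i] + t ~= Q[i],
    via the standard SVD method.  P, Q are (N, 3) arrays of matched points."""
    Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
    U, _, Vt = np.linalg.svd(Pc.T @ Qc)
    D = np.diag([1, 1, np.sign(np.linalg.det(Vt.T @ U.T))])   # avoid reflections
    R = Vt.T @ D @ U.T
    t = Q.mean(axis=0) - R @ P.mean(axis=0)
    return R, t

def leave_one_out_error(mr_pts, ct_pts):
    """Register with three markers, measure the error at the held-out fourth."""
    errs = []
    for i in range(len(mr_pts)):
        keep = [j for j in range(len(mr_pts)) if j != i]
        R, t = rigid_fit(mr_pts[keep], ct_pts[keep])
        errs.append(np.linalg.norm(R @ mr_pts[i] + t - ct_pts[i]))
    return np.array(errs)    # one registration error per held-out marker
```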
Automatic technique for localizing externally attached markers in MR and CT volume images of the head
Matthew Yang Wang, J. Michael Fitzpatrick, Calvin R. Maurer Jr., et al.
An image processing technique is presented here for finding centroids of cylindrical fiducial markers attached externally to the human head in CT and MR volume images. The centroids can be used for image registration. The technique, which is fast, automatic, and knowledge-based, has two major steps. First, it searches the whole image volume to find all markerlike objects and gets a position inside each object. We call this position a 'seed' point, and we call the object a candidate marker. Second, it selects the voxels surrounding the seed voxel as marker or non-marker voxels using knowledge-based rules and provides an intensity-weighted centroid for each true marker. We call this final centroid the 'fiducial' point of the marker. The technique has been developed on forty-two scans of six patients -- one CT and six MR scans per patient. There were four markers attached to each patient for a total of 168 marker images. On these images the technique exhibits no false positives or false negatives for CT. For MR the false positive rate and the false negative rate are both 1.4%. To evaluate the accuracy of the fiducial points, MR-CT registration was performed using geometrical correction for the MR images. The fiducial registration accuracies averaged 0.4 mm and were better than 0.6 mm on each of the eighteen image pairs.
Reconstruction
Reducing the computational load of iterative SPECT reconstruction methods by preprocessing the projection data to compensate for nonstationary resolution and attenuation
Stephen J. Glick, Bill C. Penney, Michael A. King, et al.
By accurately modeling the physics of photon transport into the projection and backprojection operations, iterative SPECT reconstruction methods can reduce the degrading effects of scatter, attenuation and the non-stationary spatial resolution of the camera. Unfortunately, iterative reconstruction methods have required very long computation times, predominantly due to the complexity involved in modeling these degrading effects into the projection and backprojection operations. In this study, we describe an approach which allows SPECT iterative reconstruction algorithms to be implemented with a reduction in the number of computations needed. The idea is to pre-process the measured projection data to compensate for scatter and attenuation, as well as to transform the projection data to those which would have been obtained with a stationary system resolution. Results of simulation studies indicate that preprocessing the measured projection data reduces the number of computations needed to perform the projection and backprojection operations, and yields reconstructions which differ minimally from those obtained using the slower standard iterative approach of modeling both photon attenuation and nonstationary blurring in the projection and backprojection steps.
Reconstruction of MR spectroscopic images using finite elements and spatial domain priors
Ernest M. Stokely, William B. Gunter, Donald B. Twieg
This paper describes a method for reconstructing images in magnetic resonance spectroscopic imaging (MRSI) using finite element methods and incorporating a priori information into the image reconstruction using a model. The reconstructed image is modeled as a projection of the desired metabolic intensity function onto a set of basis functions. For a general set of basis functions that span the reconstruction space, this problem is shown to result in a set of linear equations. For non-orthogonal basis functions, a singular value decomposition (SVD) technique can be used to obtain a least-squares estimate of the unknown coefficients. Polynomial basis functions with a large rectangular support region were tested and shown to lack the local control necessary to sufficiently resolve some important clinical features of interest (e.g., transmural myocardial infarction). Bilinear finite elements were selected for this problem because they are a basis set with very local support. Various sized finite elements were tested with simulated and phantom myocardium data similar to those that might be obtained from a gated phosphocreatine MRSI patient study. The conclusions of this investigation were: a) finite elements can give the desired local control to resolve clinically relevant lesions such as (simulated) transmural myocardial infarction, b) finite elements are robust in the presence of k-space additive Gaussian noise, and c) editing of the singular values was shown to be important to achieve optimum results. Remaining difficulties with the method include (a) O(N^3) SVD computational complexity as the finite elements are made smaller, and (b) 'blockiness' in the reconstructed image due to the regular rectangular nature of elements.
Adaptive edge-preserving regularization for PET image reconstruction
Ming Fang, Chien-Min Kao, Ajit Singh
We describe an adaptive regularization scheme and show how to incorporate it into either the Algebraic Reconstruction Technique (ART) or Maximum Likelihood-Expectation Maximization (ML-EM) based algorithms for reconstruction of Positron Emission Tomography (PET) images. We demonstrate through qualitative and quantitative experiments that the adaptive regularization technique effectively reduces the noise level in the image, while preserving the fine details of the edge structures in the image. The technique does not introduce any visible artifacts during reconstruction.
Image database to constrain the acquisition and reconstruction of MR images of the human head
Yue Cao, David N. Levin M.D.
A training set of MR images of normal and abnormal heads was used to derive a complete set of orthonormal basis functions which converged to headlike images more rapidly than Fourier basis functions. The new image representation was used to reconstruct MR images of other heads from a relatively small number of phase-encoded signal measurements. The training images also determined exactly which phase-encoded signals should be measured to minimize image reconstruction error. These signals were nonuniformly scattered throughout k-space. Experiments showed that head images reconstructed with the new method had fewer truncation artifacts than conventional Fourier images reconstructed from the same number of signals.
Resampling scheme for improving maximum likelihood reconstructions of positron emission tomography images
Kevin J. Coakley
In a Maximum Likelihood approach, reconstructions of positron emission tomography images are obtained with the iterative Expectation Maximization (EM) algorithm. After too many iterations, the reconstruction becomes too rough. In recent work, the EM algorithm was halted by a cross-validation procedure. However, at this stopping point, reconstructions still exhibited some undesirable roughness. Here, the variability of the reconstruction about its expected value is reduced by a Monte Carlo resampling scheme. For simulated data, reconstructions obtained by resampling were somewhat sharper than reconstructions obtained by a simpler linear filtering method. Real data from an FDG study are also studied. Near the boundaries, the Monte Carlo method yielded a sharper reconstruction.
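For reference, the underlying ML-EM update used in such reconstructions has the familiar multiplicative form sketched below. The system matrix `A` is assumed to be given; the cross-validation stopping rule and the Monte Carlo resampling step of the paper are not shown.

```python
import numpy as np

def mlem(A, y, n_iter=30):
    """Textbook ML-EM for emission tomography.
    A : (n_bins, n_voxels) system matrix, A[i, j] = P(count in bin i | emission in voxel j)
    y : (n_bins,) measured counts.  Returns the voxel activity estimate."""
    x = np.ones(A.shape[1])                 # positive initial image
    sens = A.sum(axis=0) + 1e-12            # sensitivity (normalization) term
    for _ in range(n_iter):
        proj = A @ x + 1e-12                # forward projection of current estimate
        x *= (A.T @ (y / proj)) / sens      # multiplicative EM update
    return x
```

Each iteration preserves non-negativity and increases the Poisson likelihood, which is why some form of early stopping or post-processing is needed to control roughness.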
Methods for 3D
Nonlinear filtering approach to grayscale interpolation of 3D medical images
William E. Higgins, Brian E. Ledell
Three-dimensional images are now common in radiology. A 3D image is formed by stacking a contiguous sequence of two-dimensional cross-sectional images, or slices. Typically, the spacing between known slices is greater than the spacing between known points on a slice. Many visualization and image-analysis tasks, however, require the 3D image to have equal sample spacing in all directions. To meet this requirement, one applies an interpolation technique to the known 3D image to generate a new uniformly sampled 3D image. We propose a nonlinear-filter-based approach to gray-scale interpolation of 3D images. The method, referred to as column-fitting interpolation, is reminiscent of the maximum-homogeneity filter used for image enhancement. The method is typically more effective than traditional gray-scale interpolation techniques.
Quantitative analysis of volume images: electron microscopic tomography of HIV
Ingela Nystroem, Ewert W. Bengtsson, Bo G. Nordin, et al.
Three-dimensional objects should be represented by 3D images. So far, most of the evaluation of images of 3D objects has been done visually, either by looking at slices through the volumes or by looking at 3D graphic representations of the data. In many applications a more quantitative evaluation would be valuable. Our application is the analysis of volume images of the causative agent of the acquired immune deficiency syndrome (AIDS), namely human immunodeficiency virus (HIV), produced by electron microscopic tomography (EMT). A structural analysis of the virus is of importance. The representation of some of the interesting structural features will depend on the orientation and the position of the object relative to the digitization grid. We describe a method of defining orientation and position of objects based on the moment of inertia of the objects in the volume image. In addition to a direct quantification of the 3D object, a quantitative description of the convex deficiency may provide valuable information about the geometrical properties. The convex deficiency is the volume object subtracted from its convex hull. We describe an algorithm for creating an enclosing polyhedron approximating the convex hull of an arbitrarily shaped object.
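The orientation-from-moments and convex-deficiency ideas can be sketched as follows. The second-moment (scatter) tensor is used here; its eigenvectors coincide with the principal axes of the inertia tensor. SciPy's `ConvexHull` stands in for the paper's own enclosing-polyhedron algorithm, and `coords` is an assumed (N, 3) array of object voxel coordinates.

```python
import numpy as np
from scipy.spatial import ConvexHull

def inertia_axes(coords):
    """Principal axes of a voxel object from the eigenvectors of its
    second-moment tensor.  coords: (N, 3) voxel coordinates."""
    c = coords - coords.mean(axis=0)
    cov = c.T @ c / len(c)
    evals, evecs = np.linalg.eigh(cov)       # ascending eigenvalues
    return evals[::-1], evecs[:, ::-1]       # longest axis first

def convex_deficiency_volume(coords, voxel_volume=1.0):
    """Convex-hull volume minus the object volume (voxel count times voxel size)."""
    hull = ConvexHull(coords)
    return hull.volume - len(coords) * voxel_volume
```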
Left ventricle 3D motion from single plane cineangiograms
Jean Meunier, Jacques Lesperance, Michel J. Bertrand
In this paper, we propose a new approach to evaluate the ventricle dynamics in monoplane ventriculography. The approach is divided into two main steps: first, the 2D image plane motion (x,y motions) of the heart is evaluated and next the depth motion (z motion) is estimated. To compute the x,y motions we use two methods: first a radial method in which regional wall motion is assumed to converge toward the image center of the left ventricle and second, a computer vision method named optical flow. These methods are applied to the segmented ventriculograms that are obtained by setting to 0 and 1 the exterior and interior respectively of the ventricle using an edge detection algorithm. From the x,y motions, one can align (register) two consecutive original ventriculograms in a manner to make the ventricle contours meet exactly. This operation is done with a gray-level bilinear interpolation technique that actually removes the x and y motion components between the two frames. If one assumes that the contrast medium has a constant concentration and is uniformly distributed in the ventricle then the brightness difference between the aligned ventriculograms is directly related to the z motion. Using a pseudo-color image to display this third component of the ventricle motion, one is able to display the 3D motion of the left ventricle. Results are presented for an ellipsoid model of the ventricle undergoing different contraction behaviors and for a clinical example.
High-resolution anisotropic 3D edge detector for medical images
Shih-Ping Liou, Ajit Singh
Advances in sensor and computer technology are resulting in an increased use of three-dimensional images in medical diagnosis. Three dimensional edge detection provides 3D anatomical representation that aids in planning and executing certain surgical procedures as well as in registering images of different modalities. Existing 3D edge detectors are 3D generalizations of 2D edge detectors. Most have limited resolution in boundary representation (subject to the resolution of imaging modality) and limited ability to directly deal with anisotropic sampling which occurs frequently in medical images. This paper presents a formal formulation of the 3D edge detection problem for anisotropically sampled images. Our approach differs from the previous approaches in the following ways: (1) a 3D edge detector is developed for data on anisotropic grids, (2) a modified version of the marching cube algorithm is used to locate the zero-crossings of the second-order derivatives, and (3) a connected component algorithm is developed for grouping zero-crossing surfaces into a set of disjoint surfaces. We show experimental results on many clinically acquired CT and MR images.
Finite element approach to warping of brain images
James C. Gee, David R. Haynor, Martin Reivich M.D., et al.
A probabilistic approach to the brain image matching problem is proposed in which no assumptions are made about the nature of the intensity relationship between the two brain images. Instead the correspondence between the two intensities is represented by a conditional probability, which is iteratively determined as part of the matching problem. This paper presents the theory and describes its finite element implementation. The results of preliminary experiments indicate that there remain several aspects of the algorithm that require further investigation and refinement.
Interpolation, Restoration, and Visualization
Embedded active surfaces for volume visualization
Ross T. Whitaker, David Chen
We propose a new technique for use in the visualization of sparse, fuzzy, or noisy 3D data. This technique incorporates the methods of deformable or active models that have been developed in 2D computer vision. In this paper we generalize such models to 3D in a manner that is both practical and mathematically elegant, and we thereby avoid many of the problems associated with previous attempts to generalize deformable models. When generalizing to 3D, deformable models have several drawbacks, including their acute sensitivity to topology, parameterization, and initial conditions, which limit their effectiveness. Many of these problems stem from the underlying parameterization of the model. This paper presents an implicit representation of deformable models. The implicit representation is an embedding of objects as level sets of grayscale functions which serve as templates. The evolution equation associated with the energy minimization process for a model has an analogous partial differential equation which governs the behavior of the corresponding grayscale template. We show that the 'active blobs' associated with the embedding of active models have several useful properties. First, they are topologically flexible. Second, grayscale images represent families of models. Third, when surfaces are embedded as grayscale images, they are described by a natural scale space. This scale space provides the ability to solve these equations in a multi-scale manner. Several 2D examples of the technique are presented, as well as some visualization results from 3D ultrasound.
Correlational accumulation as a method for signal restoration
Leonid P. Yaroslavsky, Murray Eden
Signal restoration from multiple copies derived from a single object is investigated for the case in which signal copies are observed mixed with additive, signal-independent noise and with unknown mutual displacement. The known two-stage restoration procedure, registration of the signal realizations by localization of the maximum of the signal copies' cross-correlation functions, followed by averaging of the registered realizations, is analyzed with an emphasis on its properties for very low signal-to-noise ratios. The following problems are addressed and solved: optimality of the correlational accumulation in terms of its noise reduction capability; signal distortions which originate from the signal registration errors; noise reduction capability of the correlational accumulation (averaging) as a function of signal-to-noise ratio in the observed signal-plus-noise realizations. A modified correlational accumulation technique with improved capability of restoration of the signal shape is suggested.
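A minimal 1D sketch of the two-stage procedure (register each realization by the peak of its cross-correlation with a reference, then average the aligned realizations). Using the first copy as the reference and a circular shift are simplifying assumptions made only for this sketch.

```python
import numpy as np

def correlational_accumulation(copies, reference=None):
    """Register noisy 1D signal copies by the peak of their cross-correlation
    with a reference, then average the aligned copies.
    copies: (M, N) array of M displaced, noisy realizations."""
    ref = copies[0] if reference is None else reference
    aligned = []
    for s in copies:
        xc = np.correlate(s - s.mean(), ref - ref.mean(), mode="full")
        shift = np.argmax(xc) - (len(ref) - 1)   # displacement of s relative to ref
        aligned.append(np.roll(s, -shift))       # undo the displacement (circularly)
    return np.mean(aligned, axis=0)              # accumulated (averaged) estimate
```

At very low signal-to-noise ratios the correlation peak itself becomes unreliable, which is exactly the regime the paper analyzes.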
Reconstruction and restoration methods in cone beam tomography
Harish P. Hiriyannaiah, Mohan Satyaranjan, K. R. Ramakrishnan
A method for 3D cone beam tomographic reconstruction from limited data has been developed. This method uses a modified form of convolution backprojection and projection onto convex sets for handling the limited data problem. Convex constraints applicable to cone beam projection data have been identified, and their associated projection operators have been used. The algorithm has been tested with simulated data for circular source point trajectory. The method however is independent of source point geometry, and will be useful in any limited view cone beam reconstruction.
Tracking, Measurement, and Classification
Detecting and tracking microvessels in conjunctiva images: an approach based on modeling and fuzzy logic
Carl E. Wick, Murray H. Loew, Joseph Kurantsin-Mills
The conjunctiva is an ideal location to study and measure the morphology of the microcirculation, because access to blood vessels at this site is essentially noninvasive. Our efforts have been directed toward automating the labor-intensive process of collecting morphological information from photographs and video images of the conjunctiva. In previous work we have developed a detailed model of the illumination/reflection processes that result in a film or video image of the bulbar conjunctiva. Using information gained from this model, we have now extended our research toward the development of robust microvessel detection and tracking algorithms. Our modeling has shown that it is possible to extract some relative 3D information about blood vessels from their gray-scale profiles. Images of the conjunctiva, however, also exhibit significant variability in their gray-scale data. We have adapted some fuzzy logic concepts to deal with this problem. These fuzzy logic algorithms have been very effective in detecting blood vessel points in these images, and have also been used to link the segments together into blood vessel tracks.
Detecting and tracking the left and right heart ventricles via dynamic programming
Davi Geiger, Alok Gupta
Computation of ventricular volume and diagnostic quantities such as the ejection-fraction ratio, heart output, and mass requires detection of myocardial boundaries. The problem of segmenting an image into separate regions is one of the most significant problems in vision. Terzopoulos et al. have proposed an approach to detect the contour regions of complex shapes, assuming a user-selected initial contour not very far from the desired solution. We propose an optimal dynamic programming (DP) based method to detect contours. It is exact and not iterative. We first consider a list of uncertainty for each point selected by the user, wherein the point is allowed to move. Then, a search window is created from two consecutive lists. We then apply a DP algorithm to obtain the optimal contour passing through these lists of uncertainty, optimally utilizing the given information. For tracking, the final contour obtained at one frame is sampled and used as initial points for the next frame. Then, the same DP process is applied. We have demonstrated the algorithms on natural objects in a large spectrum of applications, including interactive segmentation of the regions of interest in medical images.
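A DP search through per-point uncertainty lists might look like the sketch below. `cost[i][k]` is an assumed image cost for candidate k of point i (for example, negative edge strength at that pixel), and the quadratic penalty on index jumps between consecutive lists is a crude stand-in for the geometric term a real contour tracker would use.

```python
import numpy as np

def dp_contour(cost, smooth=1.0):
    """Open-contour dynamic programming over uncertainty lists.
    cost[i][k] : image cost of candidate k in the uncertainty list of point i.
    Returns the chosen candidate index for each point, minimizing the sum of
    image costs plus a quadratic penalty on index jumps between lists."""
    n = len(cost)
    D = [np.asarray(cost[0], dtype=float)]            # accumulated cost, list 0
    back = []                                         # backpointers per list
    for i in range(1, n):
        prev = D[-1]
        cur = np.empty(len(cost[i]))
        arg = np.empty(len(cost[i]), dtype=int)
        for k in range(len(cost[i])):
            trans = prev + smooth * (np.arange(len(cost[i - 1])) - k) ** 2
            arg[k] = int(np.argmin(trans))
            cur[k] = cost[i][k] + trans[arg[k]]
        D.append(cur)
        back.append(arg)
    path = [int(np.argmin(D[-1]))]                    # trace back the optimal path
    for arg in reversed(back):
        path.append(int(arg[path[-1]]))
    return path[::-1]
```

Because the search is exhaustive over the candidate lists, the result is exact rather than iterative, matching the abstract's claim.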
Computational techniques for determining nonrigid motion of blood from medical images
Amir A. Amini
In this work, we present results from a new formulation for determining a certain class of optical flow fields. The formulation is particularly efficient, as the flow field is either a global 90-degree rotation applied to the gradient of a scalar function, or is identical to the gradient of a scalar function. The formulation is general: it is applicable whenever the velocity field is incompressible, or irrotational. We are interested in the study of nonrigid motion of incompressible fluids, and as such will restrict most of the discussions to the case of divergence-free velocity fields. Starting from the conservation of mass principle, we derive a motion constraint equation for x-ray projection pictures, a special case of which is shown to be Horn and Schunck's optical flow constraint. It is shown that if specific criteria are met, in addition to the normal component of the velocity field, the tangential component is recoverable, without the need for smoothness. An algorithm is presented to illustrate this. The techniques are applied to synthetic images, as well as contrast-injected x-ray images of flowing fluid, in a cylindrical phantom.
Semiautomatic brain morphometry from CT images
Fast, accurate, and reproducible volume estimation is vital to the diagnosis, treatment, and evaluation of many medical situations. We present the development and application of a semi-automatic method for estimating volumes of normal and abnormal brain tissues from computed tomography images. This method does not require manual drawing of the tissue boundaries. It is therefore expected to be faster and more reproducible than conventional methods. The steps of the new method are as follows. (1) The intracranial brain volume is segmented from the skull and background using thresholding and morphological operations. (2) The additive noise is suppressed (the image is restored) using a non-linear edge-preserving filter which preserves partial volume information on average. (3) The histogram of the resulting low-noise image is generated and the dominant peak is removed from it using a Gaussian model. (4) Minima and maxima of the resulting histogram are identified and using a minimum error criterion, the brain is segmented into the normal tissues (white matter and gray matter), cerebrospinal fluid, and lesions, if present. (5) Previous steps are repeated for each slice through the brain and the volume of each tissue type is estimated from the results. Details and significance of each step are explained. Experimental results using a simulation, a phantom, and selected clinical cases are presented.
Adaptive detection of microvascular edge in microcirculatory images for auto-tracking measurement of spontaneous vasomotion
Xiaoyou Ying, Yongjian Bao, Rui-juan Xiu, et al.
We developed a dynamic microvascular edge detection method which is based on adaptive thresholding and multijudgmental criteria. To realize on-line measurement at video rate, we first set changeable measuring lines which are perpendicular to a microvessel axis and cover the possible edge location at a cross-section of the microvessel as a sampling window. A dynamic threshold, which automatically adapts frame by frame to the change of light intensity in the sampling window, is generated based on on-line analysis of the light intensity distribution along the measuring lines. The judgment of microvascular edges is based on the pattern characteristics of the light intensity distribution curve in the microvascular edge areas and the possible range of the microvascular diameters. Multiple criteria for the edge detection were set for accurately detecting the edges and skipping the non-edge zones to speed the edge recognizing procedure. To further improve the reliability of this edge detection, a dynamic graphic indicator can be generated according to the detected vessel edge location, and simultaneously displayed with the original image. This algorithm has been successfully applied for autotracking measurement of spontaneous vasomotion in microcirculation, even when the microcirculatory image has a complex background and low contrast.
Pattern Recognition Applications
Quantifying white matter lesions with MRI using finite mixture density and 2D clustering estimation
William H. Hinson, Howard Donald Gage, Dixon M. Moody, et al.
Research is presented in which white matter lesions are quantified using MRI data on cardiac surgery patients. Various methods of quantification are presented, including finite mixture density analysis of various MRI parameters, K-means, and principal components analysis. Pre- and post-operative data sets are studied for each patient to determine the change in lesion load due to surgery. The various methods are compared and the differences are indicated on both registered and unregistered data sets. Agreement among the methods is not good in many instances and at times shows an inverse correlation. Images and data showing the gray-scale distributions are presented.
Fractal geometry-based classification approach for the recognition of lung cancer cells
Deshen Xia, Wenqing Gao, Hua Li
This paper describes a new fractal geometry based classification approach for the recognition of lung cancer cells, intended for use in health screening for lung cancer. Because cancer cells grow much faster and more irregularly than normal cells do, the shape of a segmented cancer cell is very irregular and can be regarded as a figure without a characteristic length. We use the texture energy intensity Rn for fractal preprocessing to segment the cells from the image and calculate the fractal dimension value to extract fractal features, so that we obtain the shape characteristics of the different cancer cells and of normal cells. Fractal geometry gives us a correct description of cancer-cell shapes. Through this method, good recognition of Adenoma, Squamous, and small cancer cells can be obtained.
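Fractal dimension of a segmented, binary cell shape is commonly estimated by box counting; the sketch below is such a generic estimator (the paper's exact estimator is not given in the abstract). It fits the slope of log box count versus log inverse box size.

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the box-counting (fractal) dimension of a non-empty binary
    shape: slope of log(occupied box count) versus log(1 / box size).
    Assumes the shape lies within the power-of-two cropped region."""
    img = np.asarray(binary_img, dtype=bool)
    n = 2 ** int(np.floor(np.log2(min(img.shape))))
    img = img[:n, :n]                              # power-of-two crop for clean tiling
    sizes = 2 ** np.arange(1, int(np.log2(n)))     # box sizes 2, 4, 8, ...
    counts = []
    for s in sizes:
        tiles = img.reshape(n // s, s, n // s, s)
        counts.append(np.count_nonzero(tiles.any(axis=(1, 3))))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope
```

A smooth blob yields a dimension near 1 for its outline (near 2 for its filled area), while a highly irregular boundary pushes the estimate upward, which is the property exploited for discriminating cancer cells from normal cells.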
Fuzzy cluster validity in magnetic resonance images
Amine M. Bensaid, Lawrence O. Hall, James C. Bezdek, et al.
Individual cluster validation has not received as much attention as partition validation. This paper presents two measures for evaluating individual clusters in a fuzzy partition. They both account for properties of the fuzzy memberships as well as the structure of the data. The first measure is a ratio between compactness and separation of the fuzzy clusters; the second is based on counting contradictions between properties of the fuzzy memberships and the structure of the data. These two measures are applied and compared in evaluating fuzzy clusters generated by the fuzzy c-means algorithm for segmentation of magnetic resonance images of the brain.
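A generic per-cluster compactness-to-separation ratio in the spirit of the first measure is easy to compute from fuzzy c-means output; the formula below is illustrative, not the authors' exact definition. `X`, `centers`, and the membership matrix `U` are assumed to come from an FCM run.

```python
import numpy as np

def cluster_validity(X, centers, U, m=2.0):
    """Per-cluster compactness-to-separation ratios for a fuzzy partition
    (lower = more compact and better separated).
    X: (n, d) data, centers: (c, d), U: (c, n) fuzzy memberships."""
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)     # (c, n) squared distances
    um = U ** m
    compact = (um * d2).sum(axis=1) / um.sum(axis=1)                  # fuzzy within-cluster spread
    sep = ((centers[None, :, :] - centers[:, None, :]) ** 2).sum(axis=2)
    np.fill_diagonal(sep, np.inf)
    return compact / sep.min(axis=1)                                  # one ratio per cluster
```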
Pattern classification approach to segmentation of digital chest radiographs and chest CT image slices
Michael F. McNitt-Gray, James W. Sayre, H. K. Huang, et al.
The goal of this research was to develop a segmentation method based on a pattern classification approach. The pattern classification approach consists of classifying each pixel into one of several anatomic classes on the basis of one or more feature values. In this research, three types of locally calculated features are used: gray-level based measures, local difference measures and local texture measures. A feature selection process is performed to determine which features best discriminate between the anatomic classes. Three classifiers are used: a linear discriminant function, a k-nearest neighbor approach and a neural network. Supervised techniques train each classifier to learn the characteristics of the anatomic classes. Each classifier is trained and tested using normal images. The pattern classification approach to image segmentation has shown promise for further development. Locally calculated features are important in classifying pixels, but these alone may not be sufficient. A method for incorporating spatial information into the classification decision appears to improve the results and may be necessary for reliable segmentation. This research also shows that the pattern classification approach may be applied to images from different modalities.
Tissue type detection by block processing
Tianhu Lei, Zuo Zhao, Wilfred Sewchand
A new region detection and segmentation method is presented for performing tissue type classification and quantification. The original image data are transformed into the samples of a sample vector. The covariance matrix of this sample vector and its eigenvalues are computed. These eigenvalues are input to the minimum description length information criterion to determine the number of regions. A modified K-means algorithm and a Bayesian classifier are then used to segment the image into regions. This method does not need an image model, considers the spatial correlations among the pixels, and is much faster than model-based approaches.
Pattern Recognition Methodology
New method for identifying cortical convolutions in MR brain images
Yaorong Ge, J. Michael Fitzpatrick, Jun Bao, et al.
Analysis of brain images often requires accurate localization of cortical convolutions. Although magnetic resonance (MR) brain images offer sufficient resolution for identifying convolutions in theory, the nature of tomographic imaging prevents clear definition of convolutions in individual slices. Existing methods for solving this problem rely on brain atlases created from a small number of individuals. These methods do not usually provide high accuracy because of large biological variations among individuals. We propose to localize convolutions by linking realistic visualizations of the cortical surface with the original image volume itself. We have developed a system so that a user can quickly localize key convolutions in several visualizations of an entire brain surface. Because of the links between the visualizations and the original volume, these convolutions are simultaneously localized in the original image slices. In the process of this development we have also implemented a fast and easy method for visualizing cortical surfaces in MR images, which makes our scheme usable in practical applications.
Multiparameter image visualization with self-organizing maps
The effective display of multiparameter medical image data sets is assuming increasing importance as more distinct imaging modalities are becoming available. For medical purposes, one desirable goal is to fuse such data sets into a single most informative gray-scale image without making rigid classification decisions. A visualization technique based on a non-linear projection onto a 1D self-organizing map is described and examples are shown. The SOM visualization technique is fast, theoretically attractive, a useful complement to projection-pursuit or other linear techniques, and may be of particular value in calling attention to specific regions in a multiparameter image where the component images should be examined in detail.
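A minimal 1D SOM fusion might look like the following: train a chain of code vectors on the multiparameter pixel vectors, then map each pixel to the index of its best-matching node and display that index as a gray value. The node count and the learning-rate and neighborhood schedules are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def train_som_1d(X, n_nodes=64, n_iter=5000, lr0=0.5, sigma0=8.0, seed=0):
    """Train a 1D self-organizing map on multiparameter pixel vectors X (n, d)."""
    rng = np.random.default_rng(seed)
    W = X[rng.integers(0, len(X), n_nodes)].astype(float)     # initialize from data
    pos = np.arange(n_nodes)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))            # best-matching unit
        lr = lr0 * (1 - t / n_iter)
        sigma = sigma0 * (1 - t / n_iter) + 1e-3
        h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))     # neighborhood function
        W += lr * h[:, None] * (x - W)
    return W

def fuse_to_gray(X, W):
    """Gray value = index of the winning SOM node, scaled to 0..255."""
    bmu = np.argmin(((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
    return (255.0 * bmu / (len(W) - 1)).astype(np.uint8)
```

Because the map is one-dimensional and topology-preserving, similar multiparameter vectors land on nearby nodes and therefore receive similar gray levels, without any hard classification.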
Combining a few diagnostic tests or features
Robert F. Wagner, David G. Brown, Jeanpierre V. Guedon, et al.
There are many current trends toward combining diagnostic tests and features in medical imaging. For this reason we have been exploring the structure of the finite-training-sample bias and variance that one encounters in pilot or feasibility studies within this paradigm. Here we report on the case of the simple linear Bayesian classifier in a space of a few dimensions (two through fifteen). The results argue for the importance of estimating these effects in clinical studies, perhaps through the use of resampling techniques.
Image kernel of Mimosa medical imaging model
Yves J. Bizais, Florent Aubry, Virginie Chameroy, et al.
The purpose of this paper is (i) to explain the need for a generic image model in medical imaging, (ii) to describe under which conditions such a model can be built, and (iii) to present the image model we have been developing during the last two years in the framework of the EurIPACS/Mimosa project of the AIM programme of the European Communities. Several organisations are in the process of defining communication standards (in particular DICOM) for medical imaging, as successfully demonstrated during the last RSNA meeting. Such a standard is an absolute necessity for implementing PACS, since it provides a framework to exchange image information produced by multi-vendor acquisition devices. Unfortunately such a standard is not sufficient to build a clinically useful PACS. One must also describe how data are organised in medical imaging, to allow end users (clinicians) to understand image information. This is the aim of the EurIPACS/Mimosa project. The basic assumption of this work is that there is a common denominator in the way clinicians "understand" medical images, even though local particularisms may hide it. Consequently our model aims at describing medical images in a way general enough to allow for a generic description, while providing facilities to describe local characteristics. Our approach makes use of fairly standard modelling techniques: a data model using NIAM, functional modelling [2] and organisational modelling. It turns out that local particularisms can be described at the dynamic level or even at the implementation level, which is not considered in the formal model, such that a generic model can be defined. Moreover, communication standards such as DICOM [2] can be used within our model to describe how image data are actually organised as files to be transferred between PACS nodes. In this regard there is no overlap between the Mimosa model and communication standards. We consider three levels for the data model: an examination context, which describes high-level objects such as the patient folder, request, and report; a PACS model, which describes the resources (network, acquisition devices, archives, image workstations) involved in image manipulation; and an image kernel, which describes images. The examination context essentially contains attributes allowing the HIS/RIS to monitor and control medical image information; these constitute most of the information exchanged between the PACS and the HIS/RIS. The PACS model addresses issues such as network performance and local storage capacity, so that image information can be provided in the right place at the right time. The image kernel specifies image attributes able to accurately define how images are acquired, processed, interpreted and used during diagnostic and/or therapeutic processes. It is clear that this model must be generic and modality independent [5] to encompass any and every use of medical images, and precise enough to allow for their efficient use (in particular for multidimensional and multimodality data). Consequently this model may seem complex and significantly differs from commonly used image models. However, it proved able to describe all the examples against which it was tested, unlike other models. Because of its apparent complexity and because of its potential power, we think it is worth devoting a paper to its description. In section 2 we explain why such a model is required. In section 3 we describe the core of the model, the "image object", and its various components: Formal Aspects, Version, Representation, Logical Files, and Copies. In the same section we present two important related concepts: Image Generator and Reference Position. In section 4 we show how image objects can be grouped to become meaningful at the examination context level.
Enhancement and Artifact Reduction
Convex sets for image synthesis in enhancement and compression
The method of alternating projections onto convex sets (POCS) is used to process images for both compression and enhancement. Convex sets are derived that define certain desirable characteristics of the images for both applications. A new image is then produced using POCS that satisfies these characteristics while relaxing others. For enhancement, images are produced that display more of the desired information, such as adjacent-pixel differences. For compression, relaxing the characteristics not deemed important allows for improved coding efficiency. POCS provides the ability to define the problem piecewise, to apply as many or as few constraints as desired, and to implement the algorithm easily by separately deriving and implementing the projection operators.
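The core of POCS is easy to state in code. The sketch below is a minimal, generic illustration, not the authors' particular constraint sets: it alternately projects an image onto two assumed convex sets, an amplitude bound on pixel values and a low-frequency band limit, so that the iterate moves toward their intersection.

```python
import numpy as np

def project_amplitude(img, lo=0.0, hi=1.0):
    """Projection onto the convex set of images with pixel values in [lo, hi]."""
    return np.clip(img, lo, hi)

def project_bandlimit(img, keep_frac=0.25):
    """Projection onto the convex set of images whose spectrum is confined
    to a centered low-frequency block (a simple band-limit constraint)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    mask = np.zeros_like(F, dtype=bool)
    kh, kw = int(h * keep_frac), int(w * keep_frac)
    mask[h//2 - kh:h//2 + kh, w//2 - kw:w//2 + kw] = True
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def pocs(img, n_iter=20):
    """Alternate the two projections; the iterate converges toward the
    intersection of the two sets when that intersection is non-empty."""
    x = img.astype(float).copy()
    for _ in range(n_iter):
        x = project_bandlimit(x)
        x = project_amplitude(x)
    return x
```

Any additional convex constraint can be added as another projection inside the loop, which is the "piecewise" flexibility the abstract refers to.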
New method of noise smoothing based on gray level and spatial separation
Brent J. Liu, Keh-Shih Chuang
Medical images produced with x rays as the energy source are subject to contamination by random noise due to the statistical nature of both the x rays and the electromagnetic field. This noise degrades image quality, and considerable effort has been devoted to the removal of noise in medical images. The purpose of this project is to introduce a new method that exploits the natural separation of pixel populations with different gray-level characteristics to smooth image noise efficiently while effectively preserving edges. The assumption is made that the pixels inside a small window can be separated into two populations, and only the pixels belonging to the correct population are used for filtering. Smoothing performance is thus enhanced while edges are preserved, since pixels from the other population are not included in the evaluation. The new filter involves two steps: (1) pixels are clustered according to their gray-level characteristics; (2) the central pixel is replaced by a weighted average of the population containing the central pixel, with the weight determined by the distance of a given pixel from the central pixel. Preliminary results show effective noise smoothing, especially in large uniform regions, together with preservation of edges.
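A minimal sketch of the two-step filter is given below, assuming the two populations are separated by thresholding each window at its mean and that the spatial weights fall off as a Gaussian of the distance from the central pixel; the window size and the weighting rule are assumptions, since the abstract does not fix them.

```python
import numpy as np

def two_population_smooth(img, half=2, sigma=1.5):
    """For each pixel, split its (2*half+1)^2 window into two populations by
    thresholding at the window mean, then replace the central pixel with a
    distance-weighted average of the population the central pixel belongs to."""
    h, w = img.shape
    out = img.astype(float).copy()
    # Spatial weights fall off with distance from the window centre.
    yy, xx = np.mgrid[-half:half+1, -half:half+1]
    weights = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    for i in range(half, h - half):
        for j in range(half, w - half):
            win = img[i-half:i+half+1, j-half:j+half+1].astype(float)
            # True where a window pixel is in the same population as the centre.
            same = (win >= win.mean()) == (img[i, j] >= win.mean())
            wsel = weights * same
            out[i, j] = (wsel * win).sum() / wsel.sum()
    return out
```

Because pixels across the local gray-level split carry zero weight, an edge running through the window does not get blurred into the central pixel's estimate.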
Clinical tool for enhancement of portal images
Murray H. Loew, Julian G. Rosenman, Jun Chen
Demonstrably effective enhancement of portal films is now available to practicing radiation therapists. The combination of a straightforward user interface and an algorithm that runs in a clinically reasonable time improves our previously reported technique and makes it accessible to clinicians. Radiation portal images remain the most important mechanism for assuring the geometric accuracy of radiation therapy delivery. The high-energy x-ray beams, however, produce films that are intrinsically of low contrast and can vary widely in quality, making consistent interpretation difficult. Our current work improves image quality further by adding the option of a median-filtering operation at the output of SHAHE. This step removes much of the local noise that is sometimes introduced despite the contrast-limiting step. Using a new graphical-user-interface builder, we have built a portal image enhancement system designed to be used by the non-expert. Four levels of enhancement are available: high and low contrast, with and without the median filter. Adjustment of all seven SHAHE parameters is possible for the expert user but is discouraged for routine use, because many regions of 'enhancement space' have not been explored for accuracy. The user of the system is presented with a display that allows the selection of an input image and the level of enhancement. Typical computation times (for a 2048 x 2048 image) on a Sun Sparc 400 computer average approximately 10 minutes. Clinical portal imaging -- especially as digital capture is introduced -- should benefit measurably from the use of the methods described here.
Multiscale image contrast amplification (MUSICA)
Pieter Vuylsteke, Emile P. Schoeters
This article presents a novel approach to the problem of detail contrast enhancement, based on a multiresolution representation of the original image. The image is decomposed into a weighted sum of smooth, localized, 2D basis functions at multiple scales. Each transform coefficient represents the amount of local detail at a specific scale and position in the image. Detail contrast is enhanced by non-linear amplification of the transform coefficients, and an inverse transform is then applied to the modified coefficients. This yields a uniformly contrast-enhanced image without artefacts. The MUSICA algorithm is being applied routinely to computed radiography images of chest, skull, spine, shoulder, pelvis, extremity, and abdomen examinations, with excellent acceptance. It is useful for a wide range of applications in the medical, graphical, and industrial areas.
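The general idea can be illustrated with a Laplacian-pyramid-style decomposition built from Gaussian blurs and a compressive power-law amplification of the detail coefficients. This is only a sketch under those assumptions; the actual MUSICA basis functions and amplification curve are proprietary and are not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_multiscale(img, n_levels=4, gamma=0.7):
    """Decompose into band-pass detail layers plus a coarse residual, amplify
    low-amplitude detail with a compressive power law, and resum."""
    img = img.astype(float)
    layers, current = [], img
    for level in range(n_levels):
        smooth = gaussian_filter(current, sigma=2.0 ** level)
        layers.append(current - smooth)   # detail (band-pass) at this scale
        current = smooth                   # pass the residual to the next scale
    out = current                          # coarsest residual
    for detail in layers:
        m = np.abs(detail).max() + 1e-12
        # Power law with gamma < 1: small coefficients are boosted the most,
        # large ones are left nearly unchanged, so contrast rises uniformly.
        out = out + np.sign(detail) * m * (np.abs(detail) / m) ** gamma
    return out
```

With gamma = 1 the decomposition and resummation reproduce the input exactly, which is a convenient sanity check on the pyramid.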
New algorithm for adaptive contrast enhancement based on human visual properties for medical imaging applications
Tinglan Ji, Malur K. Sundareshan, Hans Roehrig
Existing methods for image contrast enhancement focus mainly on the properties of the image to be processed while excluding any consideration of observer characteristics. In several applications, particularly in medical imaging, effective contrast enhancement for diagnostic purposes can be achieved by including certain basic human visual properties. In this paper we present a novel adaptive algorithm that tailors the required amount of contrast enhancement based on the local contrast of the image and the observer's Just-Noticeable-Difference (JND). The algorithm always produces adequate contrast in the output image and results in almost no ringing artifacts, even around sharp transition regions, a problem often seen in images processed by conventional contrast enhancement techniques. By separating smooth and detail areas of an image and considering the dependence of noise visibility on the spatial activity of the image, the algorithm treats them differently and thus avoids excessive enhancement of noise, another common problem of existing contrast enhancement techniques. The present JND-Guided Adaptive Contrast Enhancement (JGACE) technique is very general and can be applied to a variety of images. In particular, it offers considerable benefits in digital radiography applications, where the objective is to increase the diagnostic utility of images. A detailed performance evaluation, together with a comparison with existing techniques, is given to demonstrate the strong features of JGACE.
Enhancement and Artificial Neural Networks
Importance of ray pathlengths when measuring objects in maximum intensity projection images
Steven Schreiner, Benoit M. Dawant, Cynthia B. Paschal, et al.
It is important to understand any process that affects medical data. Once the data have been changed from their original form, one must consider the possibility that the information contained in the data has also changed. In general, false-negative and false-positive diagnoses caused by such post-processing must be minimized. Medical imaging is one area in which post-processing is commonly performed, but often there is little or no discussion of how these algorithms affect the data. This study uncovers some interesting properties of maximum intensity projection (MIP) algorithms that are commonly used in the post-processing of magnetic resonance angiographic data. Of particular interest to clinicians is the apparent width of vessels and the extent of malformations such as aneurysms. This study shows how MIP algorithms interact with the shape of the object being projected. MIPs can make objects appear thinner in the projection than in the original data set and can also alter the shape of the object's profile seen in the original data. These effects have consequences for width-measuring algorithms, which are discussed. In addition to a computer-generated model, a static MR phantom was imaged. The phantom verified that Equation (1) predicts the projection-plane intensities well (r = 0.98) for a constant-intensity object.
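The projection itself is simply a per-ray maximum, which is where the shape interactions come from: a voxel survives into the MIP only if it exceeds every other voxel, including noise, along its ray, so long ray paths through noisy background erode the apparent edges of a vessel. A small sketch with an assumed toy volume:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection of a 3-D volume along the given axis."""
    return volume.max(axis=axis)

# Toy example: a bright cylinder in a noisy background. Along long ray paths
# the background maxima rise, so the cylinder's apparent width in the MIP
# shrinks relative to its true diameter near its low-intensity edges.
rng = np.random.default_rng(0)
vol = rng.normal(100.0, 10.0, size=(128, 64, 64))
zz, yy, xx = np.mgrid[0:128, 0:64, 0:64]
vol[(yy - 32) ** 2 + (xx - 32) ** 2 <= 6 ** 2] = 130.0
projection = mip(vol, axis=0)
```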
Effect of maximum likelihood-median processing on the contrast-to-noise ratio in digital chest radiography
Alan H. Baydush, Carey E. Floyd Jr.
Previously, we have shown that Maximum Likelihood Expectation Maximization (MLEM) can be used to effectively estimate a scatter-reduced image in digital chest radiography; however, the MLEM technique is known to increase image noise. An MLEM-median (ML-median) technique has been implemented that follows each MLEM iteration with a 3x3 median filter for noise reduction. Subjective image quality of the scatter-reduced ML-median processed image was improved over the original measured image, with enhanced visualization of the retrocardiac region and the mediastinum. In both the mediastinum and the lung region, contrast was significantly improved, while percent noise was only slightly increased over that of the measured image. The contrast-to-percent-noise ratio (CNR) in these regions was increased 130 percent, on average. ML-median processing was compared to Bayesian image estimation incorporating a Gibbs prior. CNR for the ML-median technique was increased 16.5 percent and 49.7 percent in the lung and mediastinum regions, respectively, over that of the Bayesian technique. The effect of ML-median processing on resolution was also examined.
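As a hedged sketch of the general scheme, with a generic convolutional blur standing in for the authors' scatter forward model, each Richardson-Lucy/MLEM update can be followed by a 3x3 median filter to hold down the noise growth the abstract describes:

```python
import numpy as np
from scipy.ndimage import convolve, median_filter

def mlem_median(measured, psf, n_iter=10):
    """Richardson-Lucy / MLEM iterations for an assumed convolutional forward
    model, with a 3x3 median filter after each update (the "ML-median" step)."""
    x = np.full_like(measured, measured.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]          # adjoint of convolution
    for _ in range(n_iter):
        forward = convolve(x, psf, mode='reflect') + 1e-12
        ratio = measured / forward          # data / model prediction
        x = x * convolve(ratio, psf_flipped, mode='reflect')
        x = median_filter(x, size=3)        # noise control between iterations
    return x
```

The median step is what trades a small loss of resolution for the large reduction in noise amplification reported in the abstract.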
Three-dimensional lesion detection in SPECT using artificial neural networks
Georgia D. Tourassi, Carey E. Floyd Jr.
An artificial neural network was developed to perform lesion detection in single photon emission tomography using information from three consecutive slices. The network had a three-layer, feed-forward architecture. For the present study, the detection task was restricted to deciding the presence or absence of a lesion at a given location in the middle slice considering also the two adjacent slices. An 11x11 pixel neighborhood was extracted around the potential location of the lesion in every slice. The total 363 pixel values represented the input information given to the network. Then, the network was trained using the backpropagation algorithm to output 1 if a lesion was present in the middle slice and 0 if not. The diagnostic performance of the 3D detection network was evaluated for various noise levels and lesion sizes. In addition, the 3D detection network was compared to a 2D network trained to perform the same detection task based only on the middle slice. In all cases, the 3D network significantly outperformed the 2D network. This study shows the potential of feedforward, backpropagaion networks to view multiple images simultaneously when performing a lesion detection task.
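A minimal numpy sketch of the described setup follows: 363 inputs (three 11x11 neighborhoods), one hidden layer, a sigmoid output, and plain backpropagation on a squared-error loss. The hidden-layer size and learning rate are assumptions not taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Three-layer feed-forward network: 363 inputs -> 20 hidden -> 1 output.
n_in, n_hid = 3 * 11 * 11, 20
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, 1));    b2 = np.zeros(1)

def train_step(x, target, lr=0.05):
    """One backpropagation step on a single example.
    x: flattened 363-vector of the three 11x11 neighborhoods; target: 0 or 1."""
    global W1, b1, W2, b2
    h = sigmoid(x @ W1 + b1)            # hidden activations
    y = sigmoid(h @ W2 + b2)            # lesion probability for the middle slice
    dy = (y - target) * y * (1 - y)     # output delta (squared-error loss)
    dh = (dy @ W2.T) * h * (1 - h)      # hidden deltas
    W2 -= lr * np.outer(h, dy); b2 -= lr * dy
    W1 -= lr * np.outer(x, dh); b1 -= lr * dh
    return float(y)
```

The corresponding 2D comparison network would simply drop the two outer neighborhoods, reducing the input to 121 values.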
Spatially varying scatter compensation for chest radiographs using a hybrid Madaline artificial neural network
Joseph Y. Lo, Alan H. Baydush, Carey E. Floyd Jr.
We developed a hybrid artificial neural network for scatter compensation in digital portable chest radiographs. The network takes an image region of interest (ROI) as input and outputs the scatter estimate at the ROI's center. We segmented each image into four regions by relative detected exposure, then trained a separate Adaline (adaptive linear element), or adaptive filter, for each region. We produced a spatially varying hybrid Madaline (multiple Adalines) by combining outputs from weight matrices of different sizes trained for different durations. The network was trained with 20 patients (1280 examples), then evaluated with another 5 patients (320 examples). Scatter estimation errors were not very different, ranging from the Adaline's 6.9 percent to the hybrid Madaline's 5.5 percent. Primary errors (more relevant to quantitative radiography techniques such as dual-energy imaging) were 43 percent for the Adaline, reduced to 27 percent for the Madaline, and further reduced to 19 percent for the hybrid Madaline. The trained weight matrices, which act like convolution filters, resembled the shape and magnitude of scatter point spread functions. All networks outperformed conventional convolution-subtraction techniques using analytical kernels. With its spatially varying neural network model, the hybrid Madaline provided the most accurate and robust estimation of scatter and primary exposures.
Artificial Neural Networks and Applications
Comparison of neural network, human, and suboptimal Bayesian performance on a constrained reconstruction task
Neural networks were applied to the task of detecting simulated low-contrast lesions in limited-view reconstruction tomography images. Results were compared with those for theoretically derived machine observers and for human observers. Preliminary results indicated improved neural network performance for the small data set on which human observer data had been obtained, but further results for a larger data set gave performance generally inferior to the best machine observer.
Artificial neural network for pulmonary nodule detection: preliminary human observer comparison
Seema Garg, Carey E. Floyd Jr., Carl E. Ravin
A single-layer artificial neural network was developed to detect synthetic pulmonary nodules of approximately the same size in patient chest radiographs. The identical detection task was given to human observers with varying degrees of radiological training (board-certified radiologists, residents, and a medical student). The network and the human observers were presented with five patient radiographs, each with 12 marked locations. The human observers estimated the probability that a nodule was present at each of these locations, and the network evaluated the same locations for the presence of a nodule. Using Receiver Operating Characteristic (ROC) analysis, we found that the performance of the artificial neural network was comparable to that of the human observers. The areas under the curve for the neural network and the human observers were 0.93 and 0.92, respectively.
Classification of microcalcifications in radiographs of pathological specimens for the diagnosis of breast cancer
A convolution neural network (CNN) was employed to classify benign and malignant microcalcifications in radiographs of pathological specimens. The input signals to the CNN were the pixel values of image blocks centered on each of the suspected microcalcifications. The CNN has been shown to be capable of recognizing different image patterns. Digital images were acquired by digitizing radiographs at a high resolution of 21 micrometers x 21 micrometers. Eighty regions of interest (ROIs) selected from the digitized radiographs of pathological specimens were used for training and testing the neural network system. The performance of the neural network system was analyzed using ROC analysis.
Effects of finite sample size and correlated/noisy input features on neural network pattern classification
David G. Brown, Alexander C. Schneider, Mary S. Pastel, et al.
In many areas of practical interest, for example medical decision making problems, input data for training and testing neural networks are severely limited in number, are corrupted by noise, and may be highly correlated. In this study we examine these factors by investigating network performance on a simulated Gaussian data set with known first- and second-order statistics. Following the work of Wagner et al. for statistical (likelihood-ratio) classifiers, we study how the addition of noisy/correlated features affects the performance of neural network classifiers. Results are similar to those of the previous study, demonstrating that for small data sets additional noisy/correlated features in fact degrade network performance. In addition, the use of sophisticated statistical techniques including the jackknife, Fukunaga-Hayes group jackknife, and bootstrap to estimate performance variation and remove small-sample bias is examined and found to offer significant advantages.
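For readers unfamiliar with the resampling techniques mentioned, the ordinary bootstrap is the simplest: resample cases with replacement and recompute the performance statistic. A sketch for the ROC area (Wilcoxon estimate) is below; the jackknife and Fukunaga-Hayes group jackknife variants are not reproduced, and the case-wise resampling scheme is an assumption about how one might apply it here.

```python
import numpy as np

def auc(scores_neg, scores_pos):
    """Wilcoxon estimate of the ROC area: P(score_pos > score_neg),
    with ties counted as one half."""
    diff = scores_pos[:, None] - scores_neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

def bootstrap_auc_std(scores_neg, scores_pos, n_boot=1000, seed=0):
    """Standard error of the AUC estimated by resampling cases with replacement."""
    rng = np.random.default_rng(seed)
    stats = []
    for _ in range(n_boot):
        neg = rng.choice(scores_neg, size=scores_neg.size, replace=True)
        pos = rng.choice(scores_pos, size=scores_pos.size, replace=True)
        stats.append(auc(neg, pos))
    return float(np.std(stats, ddof=1))
```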
Poster Session
Convolution neural-network-based detection of lung structures
Chest radiography is one of the most fundamental and widely used techniques in diagnostic imaging. With the advent of digital radiology, digital image processing techniques for chest radiographs have attracted considerable attention, and several studies on computer-aided diagnosis (CADx) as well as on conventional image processing techniques for chest radiographs have been reported. In the automatic diagnostic process for chest radiographs, it is important to outline the areas of the lungs, the heart, and the diaphragm, because the chest radiograph is composed of important anatomic structures and, without knowing the exact positions of the organs, automatic diagnosis may produce unexpected detections. The automatic extraction of anatomical structure from digital chest radiographs can be a useful tool for (1) the evaluation of heart size, (2) automatic detection of interstitial lung diseases, (3) automatic detection of lung nodules, and (4) data compression. Once the boundaries of the heart area, rib spaces, rib positions, and rib cage have been extracted, this information can be used to facilitate CADx tasks on chest radiographs. In this paper, we present an automatic scheme for the detection of the lung fields from chest radiographs using a shift-invariant convolution neural network. A novel algorithm for smoothing the lung boundaries is also presented.
Dual-energy computed radiography: improvements in processing
David L. Ergun, Walter W. Peppler, James T. Dobbins III, et al.
We have reported on a single-exposure dual-energy system based on computed radiography (CR) technology. In a clinical study conducted over a two year period, the dual-energy system proved to be highly successful in improving the detection (p=0.0005) and characterization (p=0.005) of pulmonary nodules when compared to conventional screen-film radiography. The basic components of our dual-energy detector system include source filtration with gadolinium to produce a bi-modal x-ray spectrum and a cassette containing four CR imaging plates. The front and back plates record the low-energy and high-energy images, respectively, and the middle two plates serve as an intermediate filter. Since our initial report, a number of improvements have been made to make the system more practical. An automatic registration algorithm based on image features has been developed to align the front and back image plates. There have been two improvements in scatter correction: a simple correction is now made to account for scatter within the multi-plate detector; and a correction algorithm is applied to account for scatter variations between patients. An improved basis material decomposition (BMD) algorithm has been developed to facilitate automatic operation of the algorithm. Finally, two new noise suppression techniques are under investigation: one adjusts the noise filtering parameters depending on the strength of edge signals in the detected image in order to greatly reduce quantum mottle while minimizing the introduction of artifacts; a second routine uses knowledge of the region of valid low-energy and high-energy image data to suppress noise with minimal introduction of artifacts. This paper is a synthesis of recent work aimed at improving the performance of dual-energy CR conducted at three institutions: Philips Medical Systems, the University of Wisconsin, and Duke University.
Comparison of musculoskeletal images from the AGFA and Fuji digital imaging systems
Martha C. Nelson M.D., Matthew T. Freedman M.D., Einar V. Pe, et al.
The AGFA Diagnostic Center and the Fuji Computed Radiography digital imaging machines differ in image processing software, and as a result image appearance differs. The fundamental methods of image processing of digital musculoskeletal radiographs are described in a companion poster, with demonstrations of the effect of each factor's setting on the final image appearance. In this poster, we demonstrate the differences between the systems through a discussion of interesting cases with optimized images. The two digital systems are competitive, each having advantages and disadvantages compared with the other. When compared to conventional screen-film systems, our group considers the digital images that can be obtained on both machines to be superior to conventional images.
Comparison of conventional magnification mammography using film screen mammography and electronic magnification
Breast microcalcifications are an important finding that can indicate the presence of breast cancer. Once they are detected, the radiologist attempts to classify the microcalcifications into patterns that are associated with benign or malignant processes; if they cannot be so classified they are considered indeterminate calcifications. The characteristics of the microcalcifications used in this important distinction are the shape and number of calcifications and the presence or absence of any change from a prior mammogram. (1-4) Geometric magnification views are often obtained to provide better visualization of the microcalcifications. Although conventional screen-film mammography has sufficient high-contrast resolution (20 line pairs per mm (lp/mm)) to demonstrate these findings, and the type of hand-held magnifying lens often used by radiologists has sufficient power to demonstrate more than 30 lp/mm, in practice the region containing the microcalcifications is often in a low-contrast region of the image and the hand-held magnifying lens does not suffice. We produce electronic magnification views of the breast by taking an existing screen-film mammogram and digitizing it with a small pixel size. (5) The image is then displayed on a monitor or printed on a laser printer that has a larger pixel size, thereby resulting in magnification. The visibility of the calcifications is enhanced by changing the window level and window width to increase optical density (or decrease luminance) and to increase contrast; the effect of these two actions is to correct in part for the low contrast in the original image. Three potential benefits result from electronic magnification: 1. The current film on which calcifications are seen can be magnified and evaluated without the patient having to return for a geometric magnification view. 2. The prior mammogram (if of suboptimal quality) can be digitized and re-displayed to better determine whether the microcalcifications have changed. 3. The digitized image may allow the reclassification of calcifications from a benign appearance to a pattern suggestive of cancer, thereby resulting in an earlier decision to biopsy.
Feature extraction and visualization methods based on image class comparison
Vassili A. Kovalev
Two different methods are proposed for feature extraction and visualization. The first method is based on an automatic search for features of an image class on the training set. Special multidimensional co-occurrence matrices are used as a detailed description of the image structure, and the features describe quantitative relations between elemental structures. The features found on the training set are used for recognition, detection and visualization of the key structures and regions. The second method is based on image segmentation, the design of a topological description for regions of interest (ROIs), and the calculation of spatial and textural parameters for segments that are part of the ROI. A training set of 56 images was used as the source for finding threshold values of the parameters and for testing. Application of the methods is demonstrated on examples of distinguishing normal from pathological brain (18 CT images), diagnosing large intestine diseases from 2D contour shape (178 x-ray images), and recognizing tumors in ultrasonic liver images. Software has been developed for IBM AT compatible computers.
Image resolution: the impact on finite mixture density models in medical applications
Howard Donald Gage, Fredrick H Fahey, William H. Hinson, et al.
Finite mixture density (FMD) based approaches to medical image classification and quantification problems have received considerable interest lately. In this paper, we show through computer simulations that as the resolution of the underlying imaging modality decreases (its full width at half maximum (FWHM) increases), the successful application of an FMD approach becomes increasingly difficult. A 19-slice computer phantom of the human brain was used. This phantom, generated from MR images of a human brain, is composed of gray matter, white matter, and cerebrospinal fluid regions. Image sets were generated using Gaussian kernels of various sizes and FWHMs. The distributions of single- and multiple-component pixels were then generated from these image sets. A planar acquisition of a single-slice brain phantom is also presented for comparison. It is shown that, with decreasing image resolution, a major weakness of the FMD approach is its inability to incorporate spatial information. Decreasing resolution with respect to object size results in an increasing number of partial-volume pixels, with corresponding effects on the FMD components.
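The FMD fit itself is usually obtained with the EM algorithm; a compact two-component, 1-D Gaussian example is sketched below (the component count, initialisation, and iteration count are assumptions). The paper's point is that as partial-volume pixels proliferate at low resolution, a histogram-only fit of this kind becomes unreliable because it carries no spatial information.

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def fit_two_component_fmd(values, n_iter=100):
    """EM for a two-component Gaussian finite mixture density on 1-D pixel values."""
    mu = np.quantile(values, [0.25, 0.75])          # initial means
    sigma = np.array([values.std(), values.std()])  # initial spreads
    pi = np.array([0.5, 0.5])                       # initial mixing weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel.
        resp = np.stack([pi[k] * gaussian_pdf(values, mu[k], sigma[k]) for k in range(2)])
        resp /= resp.sum(axis=0, keepdims=True) + 1e-12
        # M-step: update weights, means, and standard deviations.
        nk = resp.sum(axis=1)
        pi = nk / values.size
        mu = (resp * values).sum(axis=1) / nk
        sigma = np.sqrt((resp * (values - mu[:, None]) ** 2).sum(axis=1) / nk) + 1e-6
    return pi, mu, sigma
```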
Reducing respiratory artifacts in chest MR images through hybrid space motion tracking and postprocessing
John N. Campbell, Wesley E. Snyder, Peter Santago II, et al.
A new postprocessing method for correcting respiratory-motion-induced artifacts in MRI is presented. The motion of the chest during respiration is modeled as a combination of translation and dilation. Displacements of the chest wall are tracked via a thin, MR-sensitive plate placed on the patient's chest during the scan. Scanning with phase encoding left/right (L/R) and frequency encoding anterior/posterior (A/P) causes the motion artifacts to be repeated in the L/R direction, so that they do not overlap the plate. By performing the inverse A/P Fourier transform, the resulting hybrid-space data have A/P spatial data and L/R spatial-frequency data, in which the motion of the plate is clearly visible as a nearly periodic waveform. Modeling the motion of the chest wall as an equal combination of translation and dilation allows corrections to the image to be made in k-space using properties of the Fourier transform and the measured displacement data. A noticeable reduction in the intensity of the motion artifacts is achieved, indicating the validity of the motion model and tracking method.
Correction of MRI artifact due to 2D translational motion in the image plane
Li Tang, Muneki Ohya, Yoshinobu Sato, et al.
A new algorithm for canceling MRI artifact due to translational motion in the image plane is described. Unlike the conventional iterative phase retrieval algorithm, for which there is no guarantee of convergence, a direct method for estimating the motion is proposed. In the previous approach, the motions in the readout (x-) direction and the phase encoding (y-) direction are estimated simultaneously; however, the features of the x- and y-directional motions differ. By analyzing these features, the x- and y-directional motions are canceled by different algorithms in two steps. First, we note that the x-directional motion corresponds to a shift of the x-directional spectrum of the MRI signal, and that the non-zero area of the spectrum corresponds to the projection of the density function onto the x-axis. The motion is therefore estimated by tracing the edges of the spectrum, and the x-directional motion is canceled by shifting the spectrum in the inverse direction. Next, the y-directional motion is canceled using a new constraint with which the motion component and the true image component can be separated. The algorithm is shown to be effective by simulations.
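The x-direction correction rests on the Fourier shift property: a spatial translation multiplies the k-space data by a linear phase, so a known or estimated shift can be removed by applying the conjugate phase to each readout line. A sketch is below; the edge-tracing shift estimation itself is not reproduced, and the per-view shift estimates are assumed to be given.

```python
import numpy as np

def cancel_x_shift(kspace_line, shift_pixels, n_readout):
    """Remove an estimated readout-direction (x) shift from one k-space line.
    A spatial shift by d pixels multiplies the line by exp(-2*pi*i*k*d/N);
    multiplying by the conjugate phase undoes it."""
    k = np.fft.fftfreq(n_readout) * n_readout            # integer k indices
    phase = np.exp(2j * np.pi * k * shift_pixels / n_readout)
    return kspace_line * phase

# Usage sketch: each phase-encoding view gets its own estimated displacement.
# corrected[j, :] = cancel_x_shift(raw[j, :], shift_estimate[j], raw.shape[1])
```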
Segmented MR images for brain attenuation correction in PET
Rozenn Le Goff-Rougetet, Vincent Frouin, Jean-Francois Mangin, et al.
We propose a method to calculate brain attenuation correction factors (ACF) for quantitative PET using MRI data in clinical protocols that require both modalities. In that case, eliminating the transmission scan simplifies the protocol and reduces the patient dose while preserving accurate quantification. Moreover, possible mispositioning between the transmission and emission acquisitions, which is not usually accounted for, may be avoided.
Novel approach for image skeleton and distance transformation parallel algorithms
Kent Pu Qing, Robert W. Means
Image understanding is more important in medical imaging than ever, particularly where real-time automatic inspection, screening and classification systems are installed. Skeleton and distance transformations are among the common operations that extract useful information from binary images and aid image understanding. The distance transformation describes the objects in an image by labeling every pixel in each object with the distance to its nearest boundary. The skeleton algorithm starts from the distance transformation and finds the set of pixels that have a locally maximum label. The distance algorithm has to scan the entire image several times, depending on the object width; for each pixel, the algorithm must access the neighboring pixels and find the maximum distance from the nearest boundary. It is a computation- and memory-access-intensive procedure. In this paper, we propose a novel parallel approach to the distance transform and skeleton algorithms using the latest high-speed VLSI convolutional chips such as HNC's ViP. The algorithm speed depends on the object's width and takes (k + [(k-1)/3]) * 7 milliseconds for a 512 x 512 image, with k being the maximum distance of the largest object. All objects in the image are skeletonized at the same time in parallel.
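For reference, a serial (non-parallel) version of the two operations takes only a few lines with standard tools: a Euclidean distance transform followed by keeping the locally maximal labels as the skeleton. This is a baseline sketch of the definitions in the abstract, not the ViP convolutional formulation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def distance_and_skeleton(binary):
    """The distance transform labels every object pixel with its distance to
    the nearest background pixel; the skeleton keeps the locally maximal labels."""
    dist = distance_transform_edt(binary)
    local_max = dist == maximum_filter(dist, size=3)   # 3x3 local maxima
    skeleton = local_max & (binary > 0)                # restrict to object pixels
    return dist, skeleton
```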
Motion compensation algorithm for arbitrary translation of CT objects
Wen-Tai Lin
This paper presents a compensation technique for mitigating the blurring and streak artifacts in CT images caused by object translation. In order to detect and quantify the errors caused by motion, we first establish a nonsensor-based means of measuring the incompleteness of a projection data set. Furthermore, with an in-plane ray cancellation scheme, we show that it is possible to locate regional motions. We then derive an isotropic reference, called the centroid, to align parallel projections. The same concept is applied to fan-beam projections, except that here we use two sets of fan-beam centroids to guide the projection alignment: one obtained from the original projection data set and the other from a pilot projection set. The latter is the reprojection of an iteratively improved image reconstructed from the updated projections. The convergence rate of this algorithm is demonstrated through simulations with projections based on a mathematical phantom and a human subject.
Novel histogram modification approach for medical image enhancement
Yongjian Bao
Adaptive histogram equalisation (AHE) has been successfully applied to medical image contrast enhancement for computer assistance in imaging diagnosis. To reduce the noise-magnification effect, the local histograms must be clipped with a fixed factor, which, however, also limits the contrast increase in useful signal regions. In this paper, a novel AHE algorithm has been developed to resolve this dilemma by introducing a dynamic histogram clipping mechanism. In addition, we propose a general algorithmic framework for the AHE method. The framework describes an AHE algorithm in terms of three modules and thus allows various methods to be integrated into these modules within the same environment.
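A sketch of the idea follows, using non-overlapping tiles and a clip limit that grows with the local standard deviation, so busy (signal-rich) tiles are clipped less than flat, noise-only tiles. The specific clip rule, tile size, and gain are assumptions for illustration only, not the authors' mechanism.

```python
import numpy as np

def clipped_equalise(block, clip_limit, n_bins=256):
    """Histogram-equalise one tile after clipping its histogram and
    redistributing the clipped counts uniformly over all bins."""
    hist, _ = np.histogram(block, bins=n_bins, range=(0.0, 1.0))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess / n_bins
    cdf = np.cumsum(hist) / hist.sum()
    idx = np.clip((block * (n_bins - 1)).astype(int), 0, n_bins - 1)
    return cdf[idx]

def dynamic_clip_ahe(img, tile=64, base_clip=4.0):
    """AHE with a per-tile clip limit that increases with local standard
    deviation (an assumed 'dynamic' clipping rule)."""
    img = (img - img.min()) / (img.max() - img.min() + 1e-12)   # normalise to [0, 1]
    out = np.empty_like(img)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = img[i:i+tile, j:j+tile]
            clip = base_clip * block.size / 256 * (1.0 + 4.0 * block.std())
            out[i:i+tile, j:j+tile] = clipped_equalise(block, clip)
    return out
```

Setting the gain on block.std() to zero recovers ordinary fixed-factor clipping, which is the behaviour the abstract identifies as limiting contrast in signal regions.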
Multiscale and two-loop strategies for speeding up segmentation via dynamic programming
Davi Geiger, Alok Gupta
In this paper, we present two strategies for speeding up our dynamic programming (DP) algorithm, presented in these proceedings. The algorithm is used for image segmentation starting from user-specified initial points. The main drawback of DP is the long computational time, so we present two suboptimal strategies: (i) a multiscale approach, in which the solution at a coarse scale is propagated to finer scales, and (ii) a two-loop approach for closed contours. Since the user-selected points are allowed to move, we (a) arbitrarily fix one of the selected points to be both the initial and the end point, (b) run DP, (c) interchange the fixed point, and (d) run DP again. Together, the two approaches yield a speed-up factor of 50, though at the expense of losing the optimality guarantee. The results, applied to left- and right-ventricle detection in MRI, artery detection in angiograms, and bone segmentation in CTA, are of excellent quality compared with the full DP.
Analysis of false-positive microcalcification clusters identified by a mammographic computer-aided detection scheme
Robert M. Nishikawa, Carl J. Vyborny, Maryellen Lissak Giger, et al.
The accuracy of computer-aided detection (CAD) schemes involves a tradeoff between high sensitivity and low false-positive rate. In an on-going study, we are analyzing our CAD scheme for the detection of clustered microcalcifications in digital mammograms to determine the causes of false-negative and false-positive clusters. Two different limitations that lead to false-negatives and false-positives have been identified. The first limitation is imposed by the quality of the digital mammogram, whereas the second is a consequence of the similarities of radiographic features between true and false clusters. In this paper, we examine the effects of image quality, particularly image noise, on the performance of our CAD scheme. Preliminary results indicate that the performance of our scheme is limited by anatomic noise and x-ray quantum noise. Almost all the false positives detected in clinical images by our CAD scheme are caused by a combination of these two forms of noise.
Narrow bandwidth spectral analysis of the textures of interstitial lung diseases
Brian Krasner, Shih-Chung Benedict Lo, Seong Ki Mun
The object of this study was to develop a classifier for distinguishing between regions-of-interest (ROIs) from normal lung radiographs and ROIs from radiographs showing interstitial lung disease. The method used was to estimate the covariance statistics of the ROIs of the lung interstitial space and, based on the estimate, to design filters for isolating statistically significant components of the spectrum. The energy of filtered images was used as a classifier. Additionally, the filtered images were analyzed and classified using a convolution neural network (CNN). The procedure used to generate the filters was: (1) Convert 2D neighborhoods of pixels to vectors. (2) Form the sample covariance matrix from the vectors. (3) Compute the eigenvectors and eigenvalues of the matrix. (4) Convert the eigenvectors back to 2D form and use as filters. The images selected for study included normal lungs, and lungs with different types and profusions of pneumoconiosis opacities. One group of ROIs of the interstitial space was used to design filters. Another group was used as a test of classification accuracy. The results showed that the designed classifier was effective in discriminating ROIs with small pneumoconiosis opacities from normal ROIs.
Left-ventricular boundary detection from short-axis echocardiograms: the use of active contour models
Accurate identification of the boundaries of the left ventricle lets the cardiologist determine important physiological parameters such as the left-ventricular ejection fraction, the volume of the left ventricle, and regional heart wall thickening, all of which aid in better diagnosis of heart disease. We have developed a new semi-automated method to determine the left-ventricular boundaries from short-axis echocardiograms. Our method is based on active contour models, also known as snakes, originally proposed by Kass et al. The method was tested on images obtained from 18 patients, with manual outlining used as the reference for comparison. Our results were also compared to those of Detmer et al., who used the same images to test their algorithm. The errors in detecting boundaries with our algorithm were found to be within the reproducibility of manual outlining. We also implemented a 3D extension of the active contour algorithm, where the third dimension is time, and are currently working on a clinical validation of this algorithm. The 3D algorithm partly alleviates the problems encountered in the 2D algorithm due to missing boundaries in echocardiograms.
Neural networks in segmentation of mammographic microcalcifications
Farzin Aghdasi, Rabab K. Ward, Branko Palcic
Automatic detection and segmentation of microcalcifications may be achieved by application of algorithmic techniques or by use of artificial neural networks. We selected two neural network architectures and implemented object detection techniques on them. We have also developed two algorithmic approaches to segment microcalcifications. In the first algorithm, thresholding of the local image gray-level histogram is used for object segmentation. In the first pass, each object is labeled and object boundaries are marked, but the objects are not segmented from the background. In the second pass, the discontinuities due to region boundaries are corrected for by allocating a unique threshold value to each object, commensurate with the local background. In an alternative algorithm we employ edge detection to identify the pixels that may potentially belong to microcalcifications. Region-growing techniques are then applied, and the resulting segmented objects are subjected to tests involving shape, size and gradient.
High-magnification image reconstruction from partial views
Paolo Virgili, Giovanni Venturi, Andrea Crovetto, et al.
Observing specimen images at high magnification is often needed for histological analysis. Unfortunately, the higher the magnification, the more limited the field under observation. A single sample must be subdivided into several images, each representing a partial view, so that the global view and the geometrical correspondences among image components are lost. When using a conventional microscope not equipped with a mechanical scanning mechanism, it is possible to overcome this drawback by exploiting digital processing facilities. To this end, we have developed a method for high-magnification image reconstruction from partial views. Digital images of partial views from a single specimen are acquired and processed with image processing algorithms in order to correct distortions and eliminate overlap. A global high-magnification image of the specimen is thus reconstructed.
Detecting and reconstructing vascular trees in retinal images
Piotr Jasiobedzki, Christopher K. I. Williams, Feng Lu
Reconstruction of the vascular tree in retinal (ocular fundus) images is important, because it yields information such as the shape and size of individual vessels, their branching pattern and arterio-venous crossings, thereby providing information on the condition of the retina. The vascular tree is also helpful in the registration of retinal images. In this paper we describe an automated technique for detecting and reconstructing vascular trees, based on a robust detection of vessel candidates (ribbonlike features), their labelling using a neural network (NN), and a final reconstruction of the vessel tree using these labels. The NN uses vessel models automatically built during a training phase and does not rely on any explicit user specified models or sets of features.
Cervical surface shape recovery using digital imaging colposcopy
John R. Engel, Eric R. Craine, Brian L. Craine M.D., et al.
A common application of digital imaging colposcopy in cervical examinations is the measurement of lesion dimensions and areas. Typically this is done by interactively marking the region of interest on a cervix image, calculating the corresponding pixel dimensions and then scaling to the colposcope optics. Until now no one has suggested a solution to the effects of the cervical surface slant on these measurements. Away from the cervical os the surface slant is large and lesion dimensions there will be underestimated, possibly leading to a misinterpretation of the lesion's progression. In this paper we discuss a noninvasive method for determining the surface geometry of the cervix using digital imaging colposcopy. The method is an application of shape-from-shading techniques used to determine the surface slant at all points in the cervix image. From the surface slant we can calculate area corrections to measurements made on the image. In our initial investigations we have applied this method to area measurements of circular regions drawn on spherical test targets. Our results indicate that we can obtain improvements in area measurement errors of factors between 3 and 6, resulting in relative errors of a few percent.
Macro-driven semiautomation of routine medical imaging segmentation tasks
James E. Cabral Jr., Keith S. White M.D., Yongmin Kim
We have achieved a significant decrease in time required to segment certain image types through the use of semiautomated macros. These macros reduce much of the tedium of using segmentation tools by providing an initial first pass at segmentation to be followed by final interactive segmentation by a radiologist. The macros are selected based on image type and target tissue and rely on signal characteristics inherent to the image type and tissue. Techniques such as this to reduce the time requirements for segmentation of medical images are essential for mainstream acceptance of computer-assisted segmentation tools into the clinical environment.
New 3D model for dynamics modeling
Alain Perez
The wrist articulation represents one of the most complex mechanical systems of the human body. It is composed of eight bones rolling and sliding along their surfaces and along the faces of the five metacarpals of the hand and the two bones of the forearm. The dynamics of the wrist are fundamental to hand movement, yet the joint is so complex that it remains incompletely explored. This work is part of a new concept of computer-assisted surgery, which consists in developing computer models to perfect surgical acts by predicting their consequences. The modeling of the wrist dynamics is based first on a static three-dimensional model of its bones. This 3D model must optimise the collision detection procedure, which is the necessary step for estimating the physical contact constraints. As many other possible computer vision models do not fit this problem with enough precision, a new 3D model has been developed based on the medial axis of the digital distance map of the reconstructed bone volumes. The collision detection procedure is then simplified, since contacts are detected between spheres. Experiments with this original 3D dynamic model produce realistic computer-animation images of solids in contact. It is now necessary to detect ligaments in digital medical images and to model them in order to complete the wrist model.
Image processing of storage phosphor musculoskeletal radiographs: a comparison of the AGFA and Fuji bone algorithms
Martha C. Nelson M.D., Matthew T. Freedman M.D., Einar V. Pe, et al.
Storage phosphor radiography of the musculoskeletal system offers significant advantages for most potential diseases of the skeleton. Proper use of the system is, however, essential if its value is to equal or exceed that of conventional screen-film radiography. There are two competing commercial storage phosphor imaging systems currently designed to accommodate musculoskeletal images: the Fuji and the AGFA systems. The image processing methods of the Fuji and AGFA machines differ substantially from each other, and there are advantages and disadvantages to each system.
Automated detection of clustered microcalcifications in digital mammograms using wavelet processing techniques
A computerized scheme for the automated detection of clustered microcalcifications in digital mammograms is being developed. This scheme is part of an overall package for computer-aided diagnosis (CAD), the purpose of which is to assist radiologists in detecting and diagnosing breast cancer. One important step in the computer detection scheme is to increase the signal-to-noise ratio of microcalcifications by suppressing the background structure of the breast image, in order to increase the sensitivity and/or to reduce the false-positive rate. To achieve this, we employ an approach using the wavelet transform. Digitized mammograms are decomposed using the wavelet transform and then reconstructed from transform coefficients modified at several levels in the transform space. Various types of wavelets were examined, and the Least Asymmetric Daubechies wavelets were chosen to detect clustered microcalcifications in mammograms. The images reconstructed from several different scales are subjected to our CAD scheme, and their performance is evaluated using 39 mammograms containing 41 clusters. Preliminary results show a sensitivity of approximately 85 percent with a false-positive rate of 5 clusters per image.
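A sketch of the background-suppression step follows, assuming the PyWavelets package and the 'sym8' symlet (the least-asymmetric Daubechies family), with an arbitrary fine-scale gain: the coarse approximation carrying the low-frequency breast background is zeroed and the finest detail bands, where microcalcification-sized structure lives, are mildly boosted before reconstruction. The decomposition level and gain are assumptions, not the authors' chosen scales.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def suppress_background(img, wavelet='sym8', level=3, fine_gain=1.5):
    """Reconstruct a mammogram from modified wavelet coefficients: the
    approximation (background) is zeroed and the finest detail bands are boosted."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    coeffs[0] = np.zeros_like(coeffs[0])        # remove low-frequency background
    cH, cV, cD = coeffs[-1]                      # finest-scale detail bands
    coeffs[-1] = (fine_gain * cH, fine_gain * cV, fine_gain * cD)
    return pywt.waverec2(coeffs, wavelet)
```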
Quantitative analysis of phantom images in mammography
Michael P. Eckert, Dev Prasad Chakraborty
We asked a number of readers to evaluate 28 images of the ACR phantom acquired under a broad range of conditions (varying kVp, mAs, grid, no grid, and scatter materials). The phantom contains three types of structures: fibrils, microcalcification groups, and masses. The evaluation was performed according to the standard ACR criteria (i.e., counting the number of visible structures). The resulting scores were averaged across readers to obtain the average number of fibers, masses, and microcalcification groups seen for each image. The images were digitized and analyzed to obtain values for the noise level, the background pixel value, and the contrast of image structures. We then found the linear combination of the image measurements that best predicted the reader scores. The variability of the reader scores and the variability of the computer measures were also analyzed. We found that the computer measures of image contrast provide a good prediction of observer scores, have much less variability than the observer scores, are straightforward to obtain, and are reproducible.
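Finding the linear combination of computer measurements that best predicts the averaged reader scores is an ordinary least-squares problem; a minimal sketch is below, with the measurement matrix columns (noise level, background value, structure contrasts) as placeholders.

```python
import numpy as np

def fit_score_predictor(measurements, reader_scores):
    """Least-squares weights (plus intercept) mapping per-image computer
    measurements to the reader-averaged phantom score.
    measurements: (n_images, n_features); reader_scores: (n_images,)."""
    X = np.column_stack([np.ones(len(reader_scores)), measurements])
    coeffs, *_ = np.linalg.lstsq(X, reader_scores, rcond=None)
    predicted = X @ coeffs
    return coeffs, predicted
```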
Workshop on Applications of Object-Oriented Modeling
Object-oriented data model for skeletal development
Ricky K. Taira, Alfonso F. Cardenas, Wesley W. Chu, et al.
This paper describes our research toward the development of an intelligent database management system that supports queries based on image content, queries based on evolutionary processes, and queries that use imprecise medical terms. We use an extended object-oriented data model that includes novel temporal and evolutionary modeling constructs. Our initial clinical application concentrates on characterizing the development of the human hand. Our data model demonstrates the need for medical scientific databases to include the concepts of object life spans, object creation (e.g., bone ossification centers), the fusion of objects (e.g., metaphysis and epiphysis), the fission of an object, the gross transformation of object properties, and object inheritance involving entities that exist in various time- space domains.
Object-oriented implementation of a graphical-programming system
Gregory S. Cunningham, Kenneth M. Hanson, G. R. Jennings Jr., et al.
Object-oriented (OO) analysis, design, and programming is a powerful paradigm for creating software that is easily understood, modified, and maintained. In this paper we demonstrate how the OO concepts of abstraction, inheritance, encapsulation, polymorphism, and dynamic binding have aided in the design of a graphical-programming tool. The tool that we have developed allows a user to build radiographic system models for computing simulated radiographic data; it will eventually be used to perform Bayesian reconstructions of objects given radiographic data. The models are built by connecting icons that represent physical transformations, such as line integrals, exponentiation, and convolution, on a canvas. We also briefly discuss ParcPlace's application development environment, VisualWorks, which we have found to be as helpful as the OO paradigm.
Applications of object-oriented modeling: an overview
Even a casual reading of contemporary computer literature indicates that a great deal of attention is being devoted to the object-oriented (OO) approach. It is clear that all of us will be using OO at some time in the future, if for no other reason than the arguable assertion that all major computer operating systems will eventually employ objects at their very core. The intent of this workshop is to draw together those who are employing OO in their medical-imaging research so that they may benefit from knowing who else is engaged in OO methodology. A secondary goal is to acquaint those who know little about OO with the basic concepts.