Segmentation of neuroanatomy in magnetic resonance images
Author(s):
Andrew Simmons;
Simon Robert Arridge;
G. J. Barker;
Paul S. Tofts
Segmentation in neurological magnetic resonance imaging (MRI) is necessary for feature extraction, volume measurement and for the three-dimensional display of neuroanatomy. Automated and semi-automated methods offer considerable advantages over manual methods because of their lack of subjectivity, their data reduction capabilities, and the time savings they give. We have used dual-echo multi-slice spin-echo data sets which take advantage of the intrinsically multispectral nature of MRI. As a pre-processing step, an RF non-uniformity correction is applied and, if the data are noisy, the images are smoothed using a non-isotropic blurring method. Edge-based processing is used to identify the skin (the major outer contour) and the eyes. Edge-focusing has been used to significantly simplify edge images and thus allow simple postprocessing to pick out the brain contour in each slice of the data set. Edge-focusing is a technique which locates significant edges using a high degree of smoothing at a coarse level and tracks these edges to a fine level where the edges can be determined with high positional accuracy. Both 2-D and 3-D edge-detection methods have been compared. Once isolated, the brain is further processed to identify CSF, and, depending upon the MR pulse sequence used, the brain itself may be sub-divided into gray matter and white matter using semi-automatic contrast enhancement and clustering methods.
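As a rough illustration of the coarse-to-fine idea behind edge focusing, the sketch below (Python with NumPy/SciPy) finds strong edges at a heavily smoothed scale and lets them migrate only a few pixels at each finer scale; the scale schedule, thresholds, and tracking radius are illustrative assumptions, not the authors' parameters.

```python
import numpy as np
from scipy import ndimage

def edge_focus(image, sigmas=(8.0, 4.0, 2.0, 1.0), track_radius=2):
    """Detect significant edges at the coarsest scale, then track them toward
    finer scales so their positions sharpen while spurious edges stay excluded."""
    coarse = ndimage.gaussian_gradient_magnitude(image.astype(float), sigmas[0])
    mask = coarse > coarse.mean() + 2 * coarse.std()
    for sigma in sigmas[1:]:
        grad = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma)
        # allow each tracked edge to move only a few pixels per scale step
        region = ndimage.binary_dilation(mask, iterations=track_radius)
        mask = region & (grad > grad.mean() + 2 * grad.std())
    return mask
```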
Optimal stochastic pyramid: segmentation of MRI data
Author(s):
Christophe Mathieu;
Isabelle E. Magnin;
C. Baldy-Porcher
After describing a graph pyramid used for image processing -- a stochastic pyramid -- a new approach to the translation of an image into a graph is presented. It is shown that the use of a minimal spanning tree in this process improves the results of the segmentation. Experimental results are presented on a synthetic image and on MRI data of the heart. They are obtained with the stochastic pyramids, with the standard transformation of the image, and with the minimal spanning tree. Finally, a filtering pyramidal algorithm is proposed, using the properties of the minimal spanning tree.
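One plausible way to translate an image into a weighted graph and extract its minimal spanning tree is sketched below; 4-connectivity and absolute intensity differences as edge weights are illustrative assumptions, not necessarily the authors' choices.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def image_to_mst(img):
    """Build a 4-connected pixel graph weighted by absolute intensity difference
    and return its minimal spanning tree as a sparse matrix."""
    img = img.astype(float)
    h, w = img.shape
    idx = np.arange(h * w).reshape(h, w)
    pairs = [(idx[:, :-1], idx[:, 1:]),   # horizontal neighbours
             (idx[:-1, :], idx[1:, :])]   # vertical neighbours
    rows = np.concatenate([a.ravel() for a, _ in pairs])
    cols = np.concatenate([b.ravel() for _, b in pairs])
    flat = img.ravel()
    # a small epsilon keeps zero-difference edges from vanishing in the sparse matrix
    weights = np.abs(flat[rows] - flat[cols]) + 1e-6
    graph = coo_matrix((weights, (rows, cols)), shape=(h * w, h * w))
    return minimum_spanning_tree(graph)
```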
Application of a new pyramidal segmentation algorithm to medical images
Author(s):
Nicholas J. Mankovich;
Lenny I. Rudin;
Georges Koepfler;
Jean-Michel Morel;
Stanley Osher
This paper tests a new, fully automated image segmentation algorithm and compares its results with conventional threshold-based edge detection techniques. A CT phantom-based method is used to measure the precision and accuracy of the new algorithm in comparison to two edge-detection variants. These algorithms offer a high degree of immunity to noise and differential lighting and accept multi-channel image data, making them ideal candidates for multi-echo MRI sequences. The algorithm considered in this paper employs a fast numerical method for energy minimization of the free boundary problem that can incorporate regional image characteristics such as texture or other scale-specific features. It relies on a recursive region-merge operation, thus providing a series of nested segmentations. In addition to the phantom testing, we discuss the results of this fast, multiscale, pyramidal segmentation algorithm applied to MR images. The CT phantom segmentation is measured by the geometric fidelity of the extracted measurements to the geometry of the original bone components. The algorithm performed well in phantom experiments, demonstrating an average four-fold reduction in the error associated with estimating the radius of a small bone, although the standard deviation of the estimate was almost twice that of the edge-detection techniques. Modifications are proposed which further improve the geometric measurements. Finally, the results on soft-tissue discrimination are promising, and we are continuing to enhance the core formulation to improve the segmentation of complex shaped regions.
Adaptive textural segmentation of medical images
Author(s):
Walter S. Kuklinski;
Gordon S. Frost;
Thomas MacLaughlin
A number of important problems in medical imaging can be described as segmentation problems. Previous fractal-based image segmentation algorithms have used either the local fractal dimension alone, or the local fractal dimension and the corresponding image intensity, as features for subsequent pattern recognition algorithms. An image segmentation algorithm has also been reported that uses the local fractal dimension, the image intensity, and the correlation coefficient of the local fractal dimension regression computation to produce a three-dimensional feature space, which is partitioned to identify specific pixels of dental radiographs as being bone, teeth, or a boundary between bone and teeth. In this work we formulate the segmentation process as a configurational optimization problem and discuss the application of simulated annealing optimization methods to the solution of this specific optimization problem. The configurational optimization method allows both the degree of correspondence between a candidate segment and an assumed textural model, and morphological information about the candidate segment, to be used in the segmentation process. Applying this configurational optimization technique with a fractal textural model, however, requires the estimation of the fractal dimension of an irregularly shaped candidate segment. The potential utility of a discrete Gerchberg-Papoulis bandlimited extrapolation algorithm for estimating the fractal dimension of an irregularly shaped candidate segment is also discussed.
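For reference, a common way to estimate a fractal dimension together with the correlation coefficient of the underlying log-log regression is box counting; the sketch below is an illustration of that pairing (it is not the paper's Gerchberg-Papoulis approach and assumes a rectangular, non-empty binary mask).

```python
import numpy as np

def box_count_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Box-counting estimate of fractal dimension for a non-empty binary mask,
    returning (dimension, correlation coefficient of the log-log regression)."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[:h - h % s, :w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    log_s, log_n = np.log(sizes), np.log(counts)
    slope, _ = np.polyfit(log_s, log_n, 1)     # N(s) ~ s**(-D)
    r = np.corrcoef(log_s, log_n)[0, 1]        # goodness of fit of the regression
    return -slope, r
```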
Multiple-resolution segmentation of tone-dominated image
Author(s):
Tianhu Lei;
Zuo Zhao
A new multiple-resolution segmentation (MRS) technique for tone-dominated images is presented. A pyramid structure is established to provide a series of images of reduced size and reduced resolution. A stochastic model-based image analysis technique -- our previous work, known as fixed resolution segmentation (FRS) -- is first applied at a selected higher level of the pyramid, i.e., on an image with coarse resolution. The results obtained at this level then guide processing at the next (lower) level, i.e., the image with finer resolution. This level-to-level downward operation is repeated until the base level of the pyramid, i.e., the image at the original resolution, is reached. Our studies show that MRS outperforms FRS in terms of accuracy and speed. Examples are demonstrated and some theoretical analysis is included.
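A minimal coarse-to-fine sketch of this kind of pyramid scheme is shown below (Python/NumPy/SciPy); simple quantile thresholds and nearest-mean relabelling stand in for the FRS step, which the abstract does not spell out, and each class is assumed to stay populated.

```python
import numpy as np
from scipy import ndimage

def pyramid_segment(img, levels=3, n_classes=3):
    """Segment the coarsest pyramid level, then let each level's class means guide
    the classification of the next finer level down to the original resolution."""
    pyramid = [img.astype(float)]
    for _ in range(levels):
        smoothed = ndimage.gaussian_filter(pyramid[-1], 1.0)
        pyramid.append(ndimage.zoom(smoothed, 0.5))
    coarse = pyramid[-1]
    # initial segmentation at the coarsest level: simple quantile thresholds
    edges = np.quantile(coarse, np.linspace(0, 1, n_classes + 1)[1:-1])
    labels = np.digitize(coarse, edges)
    means = np.array([coarse[labels == k].mean() for k in range(n_classes)])
    # propagate downward: classify each finer image by nearest class mean,
    # then update the means from that level
    for finer in reversed(pyramid[:-1]):
        labels = np.abs(finer[..., None] - means).argmin(axis=-1)
        means = np.array([finer[labels == k].mean() for k in range(n_classes)])
    return labels
```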
Segmentation algorithms for cranial magnetic resonance images
Author(s):
Raj S. Acharya;
Y. M. Ma
In this paper, we provide a model-based algorithm for segmentation of MRI brain images. The algorithm uses embedded knowledge and employs generalized morphological operators. Preliminary results obtained with the use of the algorithm to segment MRI brain scans are also presented.
Parallel implementation of an adaptive split-and-merge method for medical image segmentation
Author(s):
Jin-Shin Chou;
Chin-Tu Chen;
Shiuh-Yung James Chen;
Wei-Chung Lin
We have developed an adaptive split-and-merge method for medical image segmentation and investigated its parallel implementation. The key process of the segmentation method, i.e., the test of region homogeneity, is carried out by means of a localized feature analysis technique and statistical tests. The feature analysis technique combines a co-occurrence matrix and the histogram of its near-diagonal elements to calculate threshold values for the average standard deviation, gray-level contrast, and likelihood ratio. We then use these values as constraints in statistical hypothesis tests to determine whether two regions should be split or merged in the final region formation. The strength of the proposed method is that all the required parameters in the algorithm are computed automatically and depend only on the content of the image under analysis. The calculation of the feature measurements in this algorithm is window-based; because the value computed for each pixel is a function of its neighboring pixels, the computation time is enormous and the algorithm is inherently suitable for parallel implementation. We have implemented the proposed algorithm on an AT&T Pixel Machine. The parallelization is done by dividing the image into 8 x 8 blocks of equal size, with some boundary pixels that overlap with the four neighboring pixel nodes. Preliminary results show that the saving in computation time with this parallel implementation is significant.
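A small sketch of the co-occurrence statistics the homogeneity test is built on follows; a single horizontal displacement and 64 gray levels are illustrative assumptions (the paper's thresholds are then derived from the histogram of the near-diagonal elements).

```python
import numpy as np

def cooccurrence(img, levels=64):
    """Gray-level co-occurrence matrix for the (0, 1) horizontal displacement."""
    scaled = (img - img.min()) / (np.ptp(img) + 1e-9)
    q = np.floor(scaled * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return glcm

def near_diagonal(glcm, band=3):
    """Co-occurrence entries within `band` of the diagonal, i.e., pairs of nearly
    equal gray levels, whose distribution drives the homogeneity thresholds."""
    i, j = np.indices(glcm.shape)
    return glcm[np.abs(i - j) <= band]
```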
Brain tissue classification from MRI data by means of texture analysis
Author(s):
Frederic Lachmann;
Christian Barillot
The new magnetic resonance imaging (MRI) systems are able to perform a brain scan with fairly good three-dimensional resolution. In order to allow the physician, and especially the neuroanatomist, to deal with the prime information borne by the images, the data have to be enhanced with regard to the medical objective. The aim of the work presented in this paper is to recognize and label the head structures in MR images. This is done by computing probabilities for a pixel to belong to pre-specified head structures (i.e., skin, bone, CSF, ventricular system, grey and white matter, and brain). Several approaches are presented and discussed in this paper, including the computation of statistical properties such as `Markov parameters' and `fractal dimension.' From these statistical parameters, computed from a single MR image or a 3-D isotropic MR database, clustering and classification processes are used to derive fuzzy membership coefficients representing the probabilities for a pixel to belong to a particular structure. Improvements are proposed with regard to the choices made, and examples are presented.
Quantification of brain tissue through incorporation of partial volume effects
Author(s):
Howard Donald Gage;
Peter Santago II;
Wesley E. Snyder
This research addresses the problem of automatically quantifying the various types of brain tissue (CSF, white matter, and gray matter) using T1-weighted magnetic resonance images. The method employs a statistical model of the noise and partial volume effect and fits the derived probability density function to that of the data. Following this fit, the optimal decision points can be found for the materials, and thus they can be quantified. Emphasis is placed on repeatable results for which a confidence in the solution might be measured. Results are presented, assuming a single Gaussian noise source and a uniform distribution of partial volume pixels, for both simulated and actual data. Thus far results have been mixed, with no clear advantage being shown in taking partial volume effects into account. Because the fitting problem is ill-conditioned, it is not yet clear whether these results are due to problems with the model or with the method of solution.
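A hedged sketch of what such a density model and fit might look like is given below: two pure-tissue Gaussians sharing a noise width plus a uniform "shelf" of partial-volume pixels between their means. The parameterization and starting values are illustrative, not the paper's.

```python
import numpy as np
from scipy.optimize import curve_fit

def mixture_pdf(x, a1, m1, a2, m2, sigma, c):
    """Two pure-tissue Gaussians with a common noise width plus a uniform
    partial-volume shelf between the two means."""
    g1 = a1 * np.exp(-0.5 * ((x - m1) / sigma) ** 2)
    g2 = a2 * np.exp(-0.5 * ((x - m2) / sigma) ** 2)
    shelf = c * ((x > min(m1, m2)) & (x < max(m1, m2)))
    return g1 + g2 + shelf

def fit_histogram(intensities, bins=256):
    """Fit the model to the image histogram; decision points then follow from
    where the fitted component densities cross."""
    hist, edges = np.histogram(intensities, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    p0 = [hist.max(), np.percentile(intensities, 25),
          hist.max(), np.percentile(intensities, 75),
          np.std(intensities) / 4.0, 0.1 * hist.mean()]
    params, _ = curve_fit(mixture_pdf, centers, hist, p0=p0, maxfev=20000)
    return params
```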
Quantitative analysis of brain magnetic resonance imaging for hepatic encephalopathy
Author(s):
Hon-Wei Syh;
Wei-Kom Chu;
Chin-Sing Ong
High intensity lesions around the ventricles have recently been observed in T1-weighted brain magnetic resonance images of patients suffering from hepatic encephalopathy. The exact etiology that causes these magnetic resonance imaging (MRI) gray scale changes is not fully understood. The objective of our study was to investigate, through quantitative means, (1) the amount of change to brain white matter due to the disease process, and (2) the extent and distribution of these high intensity lesions, since it is believed that the abnormality may not be limited entirely to the white matter. Eleven patients with proven hepatic encephalopathy and three normal subjects without any evidence of liver abnormality constituted our current database. Transaxial, sagittal, and coronal brain MR images were obtained on a 1.5 Tesla scanner. All processing was carried out off-line on a microcomputer-based image analysis system. Histograms were decomposed into regular brain tissues and lesions. Gray scale ranges coded as lesion were then mapped back to the original images to identify the distribution of abnormality. Our results indicated that the disease process involved the globus pallidus, mesencephalon, and subthalamic regions.
Automated detection and quantification of multiple sclerosis lesions in MR volumes of the brain
Author(s):
Ross Mitchell;
Stephen J. Karlik;
Donald H. Lee M.D.;
Aaron Fenster
MRI is the principal technique for the diagnosis of multiple sclerosis (MS). However, manually quantifying the number and extent of lesions in MR images is arduous. Therefore, we are developing a computerized 3-D quantitative system to determine the extent of lesions and their changes over time. Our system uses proton density (PD) and T2-weighted MR volumes. A 2-D histogram showing the frequency of voxels with particular PD and T2-weighted intensities reveals that white matter, grey matter (GM), and cerebrospinal fluid voxels correspond to distinct clusters in this histogram and can be classified on this basis. However, many true MS lesion voxels have PD and T2-weighted intensities similar to GM. Therefore, on the basis of location in the histogram alone, it is difficult to differentiate all lesion voxels from GM voxels. However, some lesions have a distinctive `peak' in the 2-D histogram which can be used to identify them successfully. Using this system it is possible to assess and monitor changes in these lesions over time. To demonstrate this ability, four MR examinations of a single chronic-progressive MS patient obtained over a 510-day period were analyzed using our system. Three-dimensional volume rendering and measurement of the results clearly show changes in lesion shape, position, and size.
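A minimal sketch of the 2-D feature-space idea follows (Python/NumPy); the rectangular "gates" standing in for the clusters are purely hypothetical placeholders for whatever cluster boundaries are actually chosen.

```python
import numpy as np

def pd_t2_histogram(pd, t2, bins=128):
    """2-D frequency histogram of PD vs. T2-weighted intensities; tissue clusters
    (WM, GM, CSF) appear as separate peaks that can be gated to label voxels."""
    return np.histogram2d(pd.ravel(), t2.ravel(), bins=bins)

def classify_by_gates(pd, t2, gates):
    """Assign each voxel the label of the first (PD, T2) rectangular gate it falls in.
    `gates` is a hypothetical list of (label, pd_lo, pd_hi, t2_lo, t2_hi) tuples."""
    labels = np.zeros(pd.shape, dtype=int)
    for label, plo, phi, tlo, thi in gates:
        sel = (pd >= plo) & (pd < phi) & (t2 >= tlo) & (t2 < thi) & (labels == 0)
        labels[sel] = label
    return labels
```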
Angiographic display method for flow-enhanced MRI
Author(s):
Nola M. Hylton
Magnetic resonance angiography (MRA) is a classification of MRI techniques that produce images of vascular structures. By design, MRA methods enhance contrast between flowing blood and stationary tissue. In bright blood techniques, vascular contrast is maximized using gradient echo sequences with flow compensated magnetic field gradients and short TE values, reduced TR and an imaging slice or slab oriented to maximize blood in-flow. Unlike the direct projection images formed by the transmitted x ray beam in conventional angiography, many of the MRA techniques generate three-dimensional image data and are subsequently processed into projection format. The most popular post-processing algorithm for this purpose takes advantage of the high vascular contrast of MRA data and simply chooses the brightest pixel intensity along lines of projection through the three-dimensional data set to map onto a two-dimensional surface. This method is called the maximum intensity projection (MIP) and produces high contrast images in which the anatomical arrangement of vascular structures can be easily appreciated. However, while MIP processing makes the anatomy readily apparent, it has a non-physical intensity behavior and does not have the relationship to density that is familiar to the reader of radiographic films. In addition, the MIP tends to underestimate vessel width and overestimate the extent of stenosis. A number of alternative projection algorithms as well as surface and volume rendering techniques have been proposed to overcome the drawbacks of the MIP, but the MIP has remained the most used method because of its high vascular contrast and S/N, ease of implementation, robustness, and speed. With this in mind, these qualities were some of the prerequisites of a new projection method, the weighted intensity summation projection (WISP) technique. In the WISP projection, intensity is related to the vessel dimension by computing a line integral over the projected thickness. High vascular contrast is maintained by using an intensity weighting function designed to minimize the contribution from stationary tissue, limit the contribution of the brightest structures, and retain low intensity vascular features found at vessel edges and in small diameter vessels. The weighting function is completely parameterized by the intensity histogram of the projected volume and requires no optimization on the part of the user for individual studies.
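To make the contrast between the two projection styles concrete, here is a minimal sketch (Python/NumPy) of a plain MIP next to a weighted-sum projection in the spirit of WISP; the quantile-ramp weighting is an illustrative assumption, not the paper's parameterization.

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection along one axis."""
    return volume.max(axis=axis)

def weighted_sum_projection(volume, axis=0, low=0.2, high=0.9):
    """Sum each ray after applying a histogram-derived ramp weight that suppresses
    stationary tissue, caps the brightest voxels, and keeps faint vessel edges."""
    lo, hi = np.quantile(volume, [low, high])
    w = np.clip((volume - lo) / (hi - lo + 1e-9), 0.0, 1.0)
    return (w * volume).sum(axis=axis)
```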
Quantization techniques for the compression of chest images by JPEG-type algorithms
Author(s):
Walter F. Good;
David Gur
The Joint Photographic Experts Group (JPEG) compression standard specifies a quantization procedure but does not specify a particular quantization table. In addition, there are quantization procedures which are effectively compatible with the standard but do not adhere to the simple quantization scheme described therein. These are important considerations, since it is the quantization procedure that primarily determines the compression ratio as well as the kind of information lost or artifacts introduced. We have studied issues related to the design of quantization techniques tailored for the compression of 12-bit chest images in radiology. Psychophysically based quantization alone may not be optimal for images that are to be compressed and then used for primary diagnosis. Two specific examples of auxiliary techniques which can be used in conjunction with JPEG compression are presented here. In particular, preprocessing of the source image is shown to be advantageous under certain circumstances. In contrast, a proposed quantization technique in which isolated nonzero coefficients are removed has been shown to be generally detrimental. Image quality here is primarily measured by mean square error (MSE), although this study is in anticipation of more relevant reader-performance studies of compression.
Full-frame image compression for nonsquare images
Author(s):
Doris T. Chin;
Bruce Kuo Ting Ho;
Marco Ma;
H. K. Huang
The full-frame cosine transform method for image compression offers enormous advantages over corresponding block techniques in radiological applications. However, the previously developed full-frame compression algorithm is restricted to square images with dimensions equal to a power of two (512, 1024, etc.). The work presented here extends the same algorithm to images of arbitrary dimensions, resulting in a significant reduction in the compressed data size.
Biomagnetic image reconstruction using the method of alternating projections
Author(s):
Ceon Ramon;
Seho Oh;
Michael G. Meyer;
Robert Jackson Marks II
Reconstruction of current distributions was performed by solving the magnetic inverse problem using minimum-norm techniques. This provides rough images of the current distribution with some noise. Improvement of the resolution and removal of noise from the reconstructed images was achieved by using alternating projections. The procedure assumes that images can be represented by line-like elements; it involves finding the line-like elements based on the initial image and projecting back onto the original solution space. Simulation studies were performed on a lambda-shaped conductor and on conductors in the shape of the letters UWBC. All conductors were of line-like thickness. In all cases the initial reconstruction produced a good representation of the conductors, but with some noise and a larger apparent conductor width. The alternating projection technique was then applied, and in each case the original shape and line-like thickness of the conductors were recovered.
Functional image reconstruction enhancement for MR spectroscopic and nuclear medicine images
Author(s):
Ernest M. Stokely;
Donald B. Twieg
Functional images in medicine, such as phosphorus magnetic resonance spectroscopic imaging (SI) images or perfusion studies in nuclear medicine (NM) using 99mTc-HMPAO, are low in resolution compared to x-ray CT or proton MR (anatomic) images. This paper describes an improved, rapid method for enhancing the accuracy and resolution of functional medical images. While both functional and anatomic images are often available for the organ under study, few attempts have been made to use the a priori information in the anatomic images to improve the poor resolution of the corresponding functional images. The proposed technique assumes that compartments can be identified in a high resolution anatomic image of the region under study, and each of these compartments is assumed to contain a spatially heterogeneous concentration of metabolite. The spatial variation of the metabolite is modelled by a series expansion. Application of the method is derived for both MR spectroscopic images and scintigrams. Noise-free and noisy simulation studies of spectroscopic images are presented which show that the method is robust in the presence of noise, and also when the assumed model is mismatched to the function which describes the actual metabolite compartmental concentration.
Echo imaging with beam-steered data
Author(s):
Mehrdad Soumekh
This paper presents a system model and inversion for the beam-steered data obtained via linearly varying the relative phase among the elements of an array, also known as phased array scan data. The system model and inversion incorporate the radiation pattern of the array's elements. The inversion method utilizes the time samples of the echoed signals for each scan angle instead of range focusing. It is shown that the temporal Fourier transform of the phased array scan data provides the distribution of the spatial Fourier transform of the reflectivity function for the medium to be imaged. It is shown that the imaging information obtained via the inversion of phased array scan data is equivalent to the image reconstructed from its synthesized array counterpart.
Intensity-sorting algorithm for generating MR angiograms
Author(s):
Steven Schreiner;
Robert L. Galloway Jr.;
Charles A. Edwards II;
Judith G. Thomas
Maximum-intensity projection (MIP) algorithms are currently used for the construction of magnetic resonance (MR) angiograms. In this application, projections calculated at different angles through the image are used to form a cine loop, thus providing a three-dimensional representation from which vascular structure may be deciphered. MIP algorithms cast parallel rays through the MR image volume, which has been acquired such that the flow within the vasculature has the highest intensities. The maximum voxel intensity along each ray is placed on the projection plane where the ray meets the plane. Thus, the flow within the vasculature shows up in the projection plane. This research strives to discover methods of reducing the projection calculation time, thus making the technology more accessible to users of less powerful systems. A novel approach was developed for calculating projections in which each image slice is pre-sorted into bins of intensities. By thresholding the intensities used, the background pixels can be ignored and only those intensities that relate to flow are used in the projection. Thresholding reduces the total number of pixels considered for the projection plane, thereby saving calculation time. Additional time savings resulted from precalculating projection `templates' and filling multiple projection planes at the same time. The algorithms were written in C on an 80386-based system. The new algorithm demonstrated more than a seven-fold increase in projection calculation speed over a benchmark algorithm.
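A rough sketch of the pre-sorting idea follows (Python/NumPy here rather than the authors' C; the bin count and storage layout are illustrative assumptions): each slice's pixel indices are sorted once by intensity so a later projection pass can skip every bin below the flow threshold instead of scanning all pixels.

```python
import numpy as np

def presort_into_bins(image_slice, n_bins=256):
    """Return a list of pixel-index arrays, one per intensity bin, ordered from
    darkest to brightest; bins below a flow threshold can simply be skipped.
    Assumes the slice is not constant-valued."""
    flat = image_slice.ravel().astype(float)
    order = np.argsort(flat)                              # indices sorted by intensity
    edges = np.linspace(flat.min(), flat.max(), n_bins + 1)
    bin_ids = np.digitize(flat[order], edges[1:-1])       # bin of each sorted pixel
    splits = np.searchsorted(bin_ids, np.arange(1, n_bins))
    return np.split(order, splits)

# Example: keep only pixels in the top 10% of bins when filling a projection plane.
# bins = presort_into_bins(slice_data)                    # slice_data is hypothetical
# bright_indices = np.concatenate(bins[int(0.9 * len(bins)):])
```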
Reconstruction technique for focal spot wobbling
Author(s):
Jiang Hsieh;
Michael A. Gard;
S. Gravelle
Focal spot wobbling provides a means of obtaining doubled spatial sampling in third generation computed tomography (CT) scanners. In this paper, we present a reconstruction technique which makes use of the fact that the spatial sampling interval has been halved. The proposed scheme improves system resolution and reduces the amount of computation.
Reconstruction based on flexible prior models
Author(s):
Kenneth M. Hanson
A new approach to Bayesian reconstruction is introduced in which the prior probability distribution is endowed with an inherent geometrical flexibility. This flexibility is achieved through a warping of the coordinate system of the prior distribution into that of the reconstruction. This warping allows various degrees of mismatch between the assumed prior distribution and the actual distribution corresponding to the available measurements. The extent of the mismatch is readily controlled through constraints placed on the warp parameters.
Fusion of fuzzy-type image modalities
Author(s):
Peter F. Jensch
Tremendous efforts have been made to evaluate images in order to extract organs, vessels, or other objects. Image segmentation has succeeded only under certain constraints. A more realistic approach is based on accepting the fuzziness of image data: coronary angiograms provide (nearly precise) information on anatomy and perfusion, while SPECT and PET scans reveal (fuzzy) information on blood flow and metabolic functions within an organ. A fusion of these modalities needs a normalization procedure, i.e., mapping to the same type of information, either precise or fuzzy. This paper describes segmentation and fusion processes which are based on successive approximation guided by mathematical morphology procedures and supported by neural networks and fuzzy inference. Results are obtained from myocardial images as well as from liver images.
PET reconstruction using sensor fusion techniques: neural network approach
Author(s):
Raj S. Acharya;
Carlos Hinojosa
In this paper we propose a neural network approach to PET reconstruction using sensor fusion concepts. A simple method of image fusion is presented in the supervised learning mode of the network where the supervisor signal provides the image information to be integrated with the reconstructed PET image.
Multimodality medical image interpretation system: implementation and integration
Author(s):
Anne-Marie Forte;
Yves J. Bizais
An intelligent image interpretation system should be able to help the physician during diagnosis by taking into account the specificities of medical imaging: (1) various domains of knowledge -- medical scene, acquisition, image processing (IP), interpretation; (2) complex IP procedures which provide relevant diagnostic information, particularly multimodality medical imaging procedures. In this context, we are designing a multimodality medical image interpretation system (MMIIS) involving different expert systems and procedural actors: (1) the medical image database, with an image object formalism -- raw image data are managed as files, image data on which requests can be made are held in relational DBs, and inferred image information is kept in knowledge bases; (2) the user interface, which we chose to be as standard as possible (OSF/Motif) in order to build a portable system; (3) expert systems, particularly the IP expert system, with specific characteristics describing knowledge about IP procedures (comparing an object-oriented and a Prolog-based implementation); (4) the IP actor, structured according to an IP classification; (5) communication interfaces, realizing the integration of the above components; they are necessary to achieve homogeneity throughout the system. The complexity of the whole system is due to the complexity of implementing each module as well as of their integration. The architecture of the integrated MMIIS is presented, as well as the functionality, implementation, and interaction of its various components.
Object-oriented versus logical conventional implementation of a MMIIS
Author(s):
Anne-Marie Forte;
Maurice Bernadet;
Franck Lavaire;
Yves J. Bizais
The main components of a multimodality medical image interpretation system (MMIIS) are: (1) a user interface, (2) an image database storing image objects along with their descriptions, (3) expert systems (ES) in various medical imaging domains and particularly in image processing (IP), and (4) an IP actor, a toolbox of standard IP procedures. To implement such a system, we are building two prototypes: one with an object-oriented (OO) expert system and one with a classical logical expert system. In these two different approaches, we have to model the medical imaging objects and represent them. Both approaches use an OO data model, even if its implementation differs in: (1) the characteristics of each ES in managing knowledge and inferences (uncertainty, non-monotonicity, backward and forward chaining, meta-knowledge), (2) the environment used to implement the different experts and to activate IP procedures, and (3) the communication means between the experts and the other components. In the OO approach, an ES based on Smalltalk is used, and in the conventional one an ad hoc Prolog ES was built. Our goal is to compare their advantages and disadvantages in implementing a MMIIS.
Multiresolution analysis of vertex curves and watershed boundaries
Author(s):
John M. Gauch;
Stephen M. Pizer
This paper presents two methods to identify and analyze geometric structures in grey-scale images: vertex curves and watershed boundaries. Both of these geometric representations capture interesting properties of ridges and valleys in an image and are related to the intensity axis of symmetry (IAS). The multiresolution behavior of these image shape descriptions can be used to impose a scale-based hierarchy on ridges and valleys in the image. This hierarchy can be utilized in top-down or bottom-up analysis of image structure. Robust methods to calculate these geometric representations are also described.
Local operator for computing curvature
Author(s):
Ernest M. Stokely;
Elizabeth Mazorra
Several approaches to medical image understanding require the measurement of curvature along the object boundary. Conventional methods for determining the curvature of an edge involve object segmentation and boundary tracking techniques. In this report, a local operator is described which computes curvature directly from either a gray scale or binary image without explicit detection of the edge. The operator defines two concentric circular boundaries which are used to map underlying pixel intensities into two 1-D functions. These functions are fit with a square wave which is optimum in the least-squares sense, providing four candidate points on an iso-intensity contour. By fitting a circle to this boundary, the curvature of the iso-intensity contour can be found as kappa = 1/radius. A number of design questions are addressed, including the question of the correct relative size of the two concentric circles, and the method and threshold for rejecting uncertain determinations of the isocontour points due to noise. The operator is tested using simulated images, planar radiographs, and MR proton density images. Use of the operator in determining isocontours which have a specified curvature is demonstrated.
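The final circle-fit step reduces to elementary geometry; as a small illustration (not the operator itself), the curvature of the circle passing through three of the candidate iso-intensity points can be computed as follows.

```python
import numpy as np

def curvature_from_three_points(p1, p2, p3):
    """Curvature kappa = 1/R of the circle through three 2-D points; collinear
    points are treated as a straight edge with zero curvature."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    a = np.linalg.norm(p2 - p1)
    b = np.linalg.norm(p3 - p2)
    c = np.linalg.norm(p1 - p3)
    # twice the signed triangle area via the 2-D cross product
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    area = 0.5 * abs(cross)
    if area == 0.0:
        return 0.0
    radius = (a * b * c) / (4.0 * area)
    return 1.0 / radius
```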
Three-dimensional model-guided segmentation and analysis of medical images
Author(s):
Louis K. Arata;
Atam P. Dhawan;
Joseph Broderick;
Mary Gaskill M.D.
Automated or semi-automated analysis and labeling of structural brain images, such as magnetic resonance (MR) and computed tomography, is desirable for a number of reasons. Quantification of brain volumes can aid in the study of various diseases and of the effect of various drug regimens. A labeled structural image, when registered with a functional image such as positron emission tomography or single photon emission computed tomography, allows the quantification of activity in various brain subvolumes such as the major lobes. Because even low resolution scans (7.5 to 8.0 mm slices) require 15 to 17 slices to image the entire head of the subject, hand segmentation of these slices is a very laborious process. However, because of the spatial complexity of many of the brain structures, notably the ventricles, automatic segmentation is not a simple undertaking. In order to accurately segment a structure such as the ventricles we must have a model of equal complexity to guide the segmentation. Also, we must have a model which can incorporate the variability among different subjects from a pre-specified group. Analysis of MR brain scans is accomplished by utilizing the data from T2-weighted and proton density images to isolate the regions of interest. Identification is then done automatically with the aid of a composite model formed from the operator-assisted segmentation of MR scans of subjects from the same group. We describe the construction of the model and demonstrate its use in the segmentation and labeling of the ventricles in the brain.
Quantitative analysis of cerebral images using an elastically deformable atlas: theory and validation
Author(s):
James C. Gee;
Martin Reivich M.D.;
Ruzena K. Bajcsy
A method of anatomical localization by elastically deforming a three-dimensional atlas to match the anatomic brain image volume of a subject is described. The anatomic atlas is modeled as an elastic object and the matching process is formulated as a minimization of the cost function cost = cost(deformation) - cost(similarity). The system uses a multiresolution deformation scheme to accelerate and improve the convergence of the matching process. To validate the system, six deformed versions of an atlas were generated. The atlas was then matched to its deformed versions. The accuracy of the matches was evaluated by determining the correspondence of several cortical and subcortical regions. The system on average matched the centroid of a region to within 1 mm of its true position and fit a region to within 11% of its true volume. The mean overlap between the matched and true regions, defined by the ratio between the volume of their intersection and the volume of their union, was 66% (sigma = 16%). Each match was performed three times and the results in all six cases were reproducible. The results of the preliminary validation of the elastic matching technique are promising and show that the method can account for local shape differences in brain anatomy.
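The overlap figure quoted above is the intersection-over-union of the two labeled regions; a one-function illustration, assuming boolean masks:

```python
import numpy as np

def overlap_ratio(region_a, region_b):
    """|A intersect B| / |A union B| for two non-empty boolean region masks."""
    a, b = region_a.astype(bool), region_b.astype(bool)
    return np.count_nonzero(a & b) / np.count_nonzero(a | b)
```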
Registration of multimodal volume head images via attached markers
Author(s):
Venkateswara R. Mandava;
J. Michael Fitzpatrick;
Calvin R. Maurer Jr.;
Robert J. Maciunas;
George S. Allen
We investigate the accuracy of registering arbitrarily oriented, multimodal, volume images of the human head, both to other images and to physical space, by aligning a configuration of three or more fiducial points that are the centers of attached markers. To compute the centers we use an extension of an adaptive thresholding algorithm due to Kittler. Because the markers are indistinguishable, it is necessary to establish their correspondence between images; we have evaluated geometric matching algorithms for this purpose. The inherent error in fiducial localization that arises with digital images limits the accuracy with which anatomical targets can be registered. To accommodate this error we apply a least-squares registration algorithm to the fiducials. To evaluate the resulting target registration accuracy we have conducted experiments on images of internally implanted markers in a cadaver and images of externally attached markers in volunteers. We have also produced computer simulations of volume images of a hemispherical model of the head, randomly picking corresponding fiducial points and targets in the images, introducing uniformly distributed error into the fiducial locations, registering the images, and measuring target registration accuracy at the 95% confidence level. Our results indicate that submillimetric accuracy is feasible for high resolution images with four markers.
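A standard least-squares rigid registration of corresponded fiducials is the SVD (Procrustes) solution sketched below; this is a generic sketch under the assumption that correspondence has already been established, not necessarily the algorithm used in the paper.

```python
import numpy as np

def rigid_register(fiducials_a, fiducials_b):
    """Least-squares rotation R and translation t mapping point set A (N x 3)
    onto point set B (N x 3), with corresponding rows already matched."""
    ca, cb = fiducials_a.mean(axis=0), fiducials_b.mean(axis=0)
    h = (fiducials_a - ca).T @ (fiducials_b - cb)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))          # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cb - r @ ca
    return r, t
```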
Texture-based classification of cell imagery
Author(s):
Belur V. Dasarathy
The paper presents the results of applying different supervised (such as class pair-wise hyperplane learning and nearest neighbor) and unsupervised (such as distance-based cluster analysis) classification techniques to cell imagery data using certain newly developed textural features. The effectiveness of these joint run length -- gray level distribution based textural descriptors as features for classification of cell image data, both in supervised and unsupervised modes, is illustrated with actual data drawn from four specific groups of cells: (1) lymphoma, (2) dermis collagen, (3) infiltrating lobular carcinoma, and (4) infiltrating scirrhous carcinoma. As is to be expected from theoretical considerations, supervised classification does indeed provide better results. However, even in the unsupervised classification mode, the results are very close to those obtained in the supervised mode, thus demonstrating the merits of the new textural features in the classification of cell imagery data. This is further confirmed through feature evaluation and assessment based on the derivation of figures of merit for the discrimination potential of the newly defined textural features. Results of applying a recently proposed method of minimizing the training set for the nearest neighbor classifier are also presented to bring out the effectiveness of these textural features in terms of their ability to represent the different classes with very few training samples.
Noise filtering on echocardiographic records
Author(s):
Joergen Erik Assentoft;
Arne Andreasen;
Asbjorn M. Drewes;
B. O. Kristensen
Ultrasound images of moving anatomic structures such as the human heart and other organs are distorted by noise, which may destroy important information. Because of the noise, the contours in the picture are difficult to recognize. Previously, various algorithms such as `symmetrical exponential filtration' have been described as good noise-reducing tools. In this work, we used a modified median filtration carried out on triplets of frames from the video signal.
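A minimal sketch of a per-pixel median over triplets of consecutive frames follows; the "modified" aspects of the authors' filter are not specified in the abstract, so only the plain triplet median is shown.

```python
import numpy as np

def triplet_median(frames):
    """Per-pixel median of every three consecutive frames: single-frame speckle
    outliers are rejected while persistent structure is kept.
    `frames` has shape (n_frames >= 3, rows, cols)."""
    f = np.asarray(frames, dtype=float)
    return np.stack([np.median(f[i:i + 3], axis=0) for i in range(len(f) - 2)])
```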
Real-time digital processing of video image sequences for videofluoroscopy
Author(s):
John A. Rowlands
The conflicting demands of minimizing radiation dose and constantly replenishing the image result in noisy videofluoroscopic image sequences. We are investigating the possibilities for real-time digital image processing to reduce the level of noise without an increase in x-ray exposure rate. Several approaches have previously been suggested based on combinations of spatial filtration, temporal filtration, and grey scale remapping. It is demonstrated that motion-adaptive temporal filtration combined with grey scale remapping has considerable advantages. Real-time temporal averaging is conventionally performed by means of dedicated hardware which includes adders, multipliers and, for non-linear processing, comparators. Typically, the algorithm has to be selected before the hardware can be built. Thus, in order to permit a more general experimental investigation, a temporal filter using reprogrammable dual-input look-up tables was designed and built.
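To illustrate what motion-adaptive temporal filtration means in practice, here is a software sketch of one recursive step; the blending weight and motion threshold are illustrative assumptions, and the actual system realizes the mapping in reprogrammable look-up tables rather than arithmetic.

```python
import numpy as np

def motion_adaptive_step(prev_avg, new_frame, k_still=0.9, diff_thresh=12.0):
    """One step of a motion-adaptive recursive average: where the new frame differs
    little from the running average, blend heavily (noise reduction); where it
    differs a lot (motion), pass the new frame through to avoid lag and blur."""
    new_frame = new_frame.astype(float)
    diff = np.abs(new_frame - prev_avg)
    k = np.where(diff < diff_thresh, k_still, 0.0)
    return k * prev_avg + (1.0 - k) * new_frame
```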
Validation of quantitative computed tomographic evaluation of bone mineral density of several CT scanners
Author(s):
Steven L. Fritz;
Charles D. Stockham
We have validated a pre-existing model for QCT evaluation of bone mineral density by scanning a commercial bone mineral density phantom on several CT scanners and evaluating the accuracy and reproducibility of the bone mineral density measurements on each. The model assumes that bone mineral density is a linear function of the CT number of bone. Rather than imaging bone mineral density standards for calibration, we computed an `equivalent bone mineral density' for fat and muscle from the known linear relationship between bone mineral density and CT number, to remove the dependence of bone mineral density on field non-uniformities caused by beam hardening and scattered radiation, positioning errors, and quality control. The `equivalent bone mineral density' values for fat and muscle were computed from spectral data and the atomic composition of fat and tissue for a GE 9800 scanner. These were used to establish the true bone mineral density of the two reference BMD standards used in the phantom, and these in turn were used to measure the `equivalent bone mineral density' of fat and muscle on other CT scanners. Phantom measurements on several other CT scanners were used to compute the `equivalent bone mineral density' of the phantom inserts for those systems. Results from the Picker 1200, the Philips LX and the Siemens Somatom DR/H were compared with the results of the GE 9800.
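The assumed linear relationship makes calibration a simple least-squares line fit; a minimal sketch, with the reference CT numbers and equivalent BMD values left as placeholders to be supplied per scanner:

```python
import numpy as np

def bmd_from_ct(ct_numbers, ref_ct, ref_bmd):
    """Map measured CT numbers to bone mineral density using a line fitted through
    reference materials (e.g. fat and muscle) of known equivalent BMD."""
    slope, intercept = np.polyfit(ref_ct, ref_bmd, 1)
    return slope * np.asarray(ct_numbers, dtype=float) + intercept
```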
Registration of high-resolution images of the retina
Author(s):
Artur V. Cideciyan;
Samuel G. Jacobson;
Colin M. Kemp;
Robert W. Knighton;
Joachim H. Nagel
A method of image registration is presented for the case when the deformation between two images can be well approximated by a combination of translation, rotation, and global scaling. The method achieves very high accuracy by combining a global optimization in the 4-dimensional discrete parameter space with a local optimization in the 4-dimensional continuous parameter space. The 4-dimensional global optimization is accomplished with two 2-dimensional optimizations. The Fourier magnitude is used to decouple translation from rotation and scaling, and a log-polar mapping of the Fourier magnitude is used to convert rotation and scaling into shifts. Optimal rotation and scaling parameters are determined with a cross-correlation in the log-polar domain. After compensation for rotation and scaling differences, cross-correlation in the spatial domain yields the translation parameters. The four registration parameters are further refined with a local optimization using the correlation coefficient as a similarity measure in the 4-dimensional continuous parameter space. Results are shown from simulations and from registration of retinal images. For simulated images with a signal-to-noise ratio of -5 dB, the accuracy of the registration method is estimated to be better than 0.07 degrees, 0.1%, and 0.3 pixels for rotation, scaling, and translation, respectively. In the case of 512 x 512 pixel images the computational resource requirements are compatible with high-end PCs, i.e., approximately 25 minutes on an Intel 80486/33 MHz based IBM PC compatible.
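A sketch of the decoupling step follows (Python/NumPy/SciPy): the Fourier magnitude is translation-invariant, and a log-polar resampling turns rotation and scaling into shifts that a correlation peak can recover. The sampling grid and peak handling are simplified assumptions (wrap-around of the peak, i.e., negative rotations or scales below one, is ignored), not the paper's full procedure.

```python
import numpy as np
from scipy import ndimage

def rotation_scale_estimate(img_a, img_b, n_angles=360, n_radii=256):
    """Estimate rotation (degrees) and scale between two images from the log-polar
    resampling of their Fourier magnitudes."""
    mag_a = np.abs(np.fft.fftshift(np.fft.fft2(img_a)))
    mag_b = np.abs(np.fft.fftshift(np.fft.fft2(img_b)))
    cy, cx = (np.array(mag_a.shape) - 1) / 2.0
    theta = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    log_r = np.linspace(0.0, np.log(min(cy, cx)), n_radii)
    yy = cy + np.exp(log_r)[None, :] * np.sin(theta)[:, None]
    xx = cx + np.exp(log_r)[None, :] * np.cos(theta)[:, None]
    lp_a = ndimage.map_coordinates(mag_a, [yy, xx], order=1)
    lp_b = ndimage.map_coordinates(mag_b, [yy, xx], order=1)
    # cross-correlate the log-polar magnitudes; the peak row gives rotation,
    # the peak column gives the log-scale shift
    corr = np.fft.ifft2(np.fft.fft2(lp_a) * np.conj(np.fft.fft2(lp_b)))
    row, col = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    rotation_deg = np.degrees(theta[row])
    scale = np.exp((log_r[1] - log_r[0]) * col)
    return rotation_deg, scale
```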
Workstation capabilities on an MRI system
Author(s):
James D. Hale;
Andrew Li;
Ilya Simovsky
In order to add more powerful image processing and graphics features to an MRI system, manufacturers usually contract a third party to connect the system to a graphics workstation. The MRI user must learn a new operating system, user interface, and graphics package; the workstation may not be able to generate the hardcopy (films); and it adds significantly to the cost. We have developed software to run on an MRI system so that users can enhance and manipulate image data without adding a workstation. We have tailored the package specifically for magnetic resonance images, maximizing quality of output and convenience of operation. The Interactive Image Processor (IIP) uses an adaptive filtering scheme that displays images with improved signal-to-noise without blurring edges. The IIP zooms images with a high resolution Fourier interpolation technique. It also can display an interpolated or extrapolated echo from any dual-echo data set. A second part of this software, the Advanced Performance Package (APP) gives the MRI operator the ability to manipulate images in three dimensions to create oblique views or images that conform to curved surfaces. Like the IIP, the APP uses Fourier interpolation to achieve the best possible image quality, giving the radiologist a way to get multiple views of a patient without having to run many different acquisitions.
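The Fourier (zero-padding, i.e., sinc) interpolation used for zooming can be sketched in a few lines; this is an illustration of the general technique, not the product code, and assumes an integer zoom factor.

```python
import numpy as np

def fourier_zoom(image, factor=2):
    """Zoom an image by zero-padding its 2-D Fourier spectrum and inverting,
    which interpolates without introducing new spatial frequencies."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    padded = np.zeros((h * factor, w * factor), dtype=complex)
    top, left = (h * factor - h) // 2, (w * factor - w) // 2
    padded[top:top + h, left:left + w] = spectrum
    # the factor**2 restores the amplitude scaling lost to the larger inverse FFT
    return np.real(np.fft.ifft2(np.fft.ifftshift(padded)) * factor * factor)
```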
Neural network image compression using Gabor primitives
Author(s):
Mary P. Anderson;
David G. Brown;
Alexander C. Schneider
A back propagation neural network was used to compress simulated nuclear medicine liver images with and without simulated lesions. The network operated on the Gabor representation of the image, in order to take advantage of the apparent similarity between that representation and the natural image processing of the human visual system. The quality of the compression scheme was assessed objectively by comparing the original images to the compressed/reconstructed images through calculation of an index shown to track with human observers for this class of image, the Hotelling trace. Task performance was measured pre- and post-compression for the task of classifying normal versus abnormal livers. Compression of even 2:1 was found to result in significant performance degradation in comparison with other means of compression, but produced a visually pleasing image.
Effect of spatial frequency content of the background on visual detection of a known target
Author(s):
William M. Gentles;
Thanh Nguyen;
William K.B. Ho;
Curtis B. Caldwell;
Lisa E. Ehrlich;
Charlene Leonhardt;
Rick Reed
It is known that the human visual system has varying sensitivity to different spatial frequencies. We are attempting to develop a better understanding of the interaction between the target and surround in a visual detection task by changing the properties of the surround in frequency space. In our experiments, a known target is superimposed on a bandwidth-limited Gaussian noise background. The size, brightness, and position of the target are kept constant. The experimental design is a `Signal Known Exactly' ROC experiment. For each background the observer knows that there is a 50% probability that the target is present. The observer is asked to state a confidence level from 1 to 5 that a target is present in a given background. Detection performance for backgrounds with different frequency content is compared using the area under the ROC curve. The results of these experiments indicate that performance varies markedly as the frequency content of the background is changed. Observer performance dropped to a minimum when the background frequency was close to the frequency of maximum contrast sensitivity of the human visual system.
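For reference, the area under an ROC curve built from 1-to-5 confidence ratings can be computed by sweeping the rating threshold and integrating with the trapezoid rule; this is a generic sketch of the figure of merit, not the authors' analysis software.

```python
import numpy as np

def roc_auc_from_ratings(ratings_signal, ratings_noise, levels=5):
    """Area under the ROC curve from confidence ratings in 1..levels given to
    target-present (signal) and target-absent (noise) backgrounds."""
    tpf, fpf = [0.0], [0.0]
    for threshold in range(levels, 0, -1):           # strictest criterion first
        tpf.append(np.mean(np.asarray(ratings_signal) >= threshold))
        fpf.append(np.mean(np.asarray(ratings_noise) >= threshold))
    # trapezoid rule over the (FPF, TPF) operating points; the last point is (1, 1)
    area = 0.0
    for k in range(1, len(tpf)):
        area += 0.5 * (tpf[k] + tpf[k - 1]) * (fpf[k] - fpf[k - 1])
    return area
```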
Task performance on constrained reconstructions: human observer performance compared with suboptimal Bayesian performance
Author(s):
Robert F. Wagner;
Kyle J. Myers;
Kenneth M. Hanson
We have previously described how imaging systems and image reconstruction algorithms can be evaluated on the basis of how well binary-discrimination tasks can be performed by a machine algorithm that `views' the reconstructions. Algorithms used in these investigations have been based on approximations to the ideal observer of Bayesian statistical decision theory. The present work examines the performance of an extended family of such algorithmic observers viewing tomographic images reconstructed from a small number of views using the Cambridge Maximum Entropy software, MEMSYS 3. We investigate the effects on the performance of these observers due to varying the parameter alpha; this parameter controls the stopping point of the iterative reconstruction technique and effectively determines the smoothness of the reconstruction. For the detection task considered here, performance is maximum at the lowest values of alpha studied; these values are encountered as one moves toward the limit of maximum likelihood estimation while maintaining the positivity constraint intrinsic to entropic priors. A breakdown in the validity of a Gaussian approximation used by one of the machine algorithms (the posterior probability) was observed in this region. Measurements on human observers performing the same task show that they perform comparably to the best machine observers in the region of highest machine scores, i.e., smallest values of alpha. For increasing values of alpha, both human and machine observer performance degrade. The falloff in human performance is more rapid than that of the machine observer at the largest values of alpha (lowest performance) studied. This behavior is common to all such studies of the so-called psychometric function.
Effect of removing image pixel noise bits on the detection of simulated lung nodules
Author(s):
Keh-Shih Chuang;
A. Sankaran
Our previous studies used statistical methods to assess the noise level in digital images from various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we report a further study and demonstrate quantitatively that the removal of the noise bits has no effect on image properties. The detectability of simulated lung nodules on a wedge phantom is used as the basis for this study. The test phantom consists of a Lucite step wedge of 22 steps (12.5 mm/step) with nylon spheres (1/4 inch, 3/8 inch, and 1/2 inch diameter) simulating nodules placed at each step. Test images of the phantom were taken on a scanning equalization radiographic system, which has better nodule detectability than conventional diagnostic x-ray systems. The gray levels of each nodule relative to the background were measured at each step of the wedge and plotted against the thickness of the wedge. Preliminary results show that the removal of the noise bits does not affect the shape of this curve and thus does not affect the detectability of the nodules.
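In code, discarding the low-order "noise bits" is a simple mask operation; a sketch follows, with the number of bits treated as noise left as a parameter to be supplied by the statistical analysis.

```python
import numpy as np

def remove_noise_bits(image, n_noise_bits):
    """Zero the n lowest-order bits of every pixel; if those bits carry only noise,
    the result should be visually and quantitatively equivalent to the original."""
    pixels = np.asarray(image).astype(np.uint16)
    mask = np.uint16(~((1 << n_noise_bits) - 1) & 0xFFFF)
    return pixels & mask
```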
Phase-modulated isodensity pseudocolor code of medical image
Author(s):
Mingjun You
A phase-modulated iso-density pseudocolor coding of black-and-white x-ray medical images has been produced using a Ronchi grating. As a result, intensified pseudocolor images with highlighted details are obtained. The theory, procedure, and results of the experiment are given. Furthermore, the problems of the pseudocoloring experimental conditions, color complement, and color repetition are discussed.
Detectability of pulmonary nodules in linearly and logarithmically amplified digital images of the chest
Author(s):
Dinko Plenkovich
The purpose of this study was to compare the detectability of pulmonary nodules in linearly and logarithmically amplified digital images of the chest. One hundred and sixty digital x-ray images of a frozen, unembalmed human chest phantom with simulated pulmonary nodules were acquired using a 40 cm diameter image intensifier-television camera system. The signal from the video camera was digitized with a frame grabber using a MicroVAX 3400 as the host computer. Each of these 160 images was processed using both linear and logarithmic amplification, resulting in 320 digital images of the chest. A free-response receiver operating characteristic (FROC) study was performed in which an experienced radiologist was asked to locate multiple simulated nodules on all 320 digital images and to record one of three levels of confidence for each assumed nodule. For each criterion, the total number of correct responses was divided by the total number of nodules to obtain the ordinate of the FROC point, and the total number of false-positive responses was divided by the number of images to obtain its abscissa. Examination of the FROC curves demonstrated that significantly more mediastinal nodules were identified in the logarithmically amplified images.
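The FROC operating point defined by those two ratios can be written directly; a trivial sketch:

```python
def froc_point(correct_responses, false_positives, total_nodules, total_images):
    """One FROC operating point: (false positives per image, fraction of nodules found)."""
    return (false_positives / total_images, correct_responses / total_nodules)
```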
Densitometric measurement of blood flow: application to stenosis quantification
Author(s):
Rozenn Le Goff;
Yves J. Bizais
A densitometric model was developed to estimate absolute blood flows in vessels from a DSA sequence. It is derived from the image intensity-to-contrast agent (CA) relationship and from the mass conservation law. We showed that the flow rate through a vascular cross section is determined from the time summation Phi of densitometric areas within a single ROI. It also depends on the mass and the attenuation coefficient mu of the CA and on the acquisition conditions. After estimating the apparent value of mu, experiments with vessel phantoms were performed on DSA systems to validate this model. The effects of the distance between the injection site and the region of measurement, the magnification factor, the tubing cross-section area, the injected mass of iodine, and the flow rate of the injected CA were tested and analyzed. The accuracy and the reproducibility of water flow rate measurements by this method were estimated and the deviations explained. Finally, we show how such experiments can be used to quantify a stenosis from a whole DSA image sequence: area narrowing is equal to the ratio of the integrated terms Phi for the reference and stenotic segments. Relative flows at a vessel bifurcation can also be estimated by applying the model to each segment.
Improvement of detection in computed radiography by new single-exposure dual-energy subtraction
Author(s):
Wataru Itoh;
Kazuo Shimura;
Nobuyoshi Nakajima;
Masamitsu Ishida;
Hisatoyo Kato
It has been reported that the use of the dual-energy subtraction method enhances the ability to detect abnormal shadows. However, as the subtracted image is significantly inferior to the original in signal-to-noise ratio (SNR), the x-ray dosage normally used for chest radiographs has not yielded subtracted images with adequate SNRs. Under these circumstances, we have concentrated on the fact that there is a correlation between the noise contents of the bone and soft-tissue subtracted images, although there is no correlation between the signal contents of these images. We propose an algorithm that improves the SNRs of subtraction images by reducing the noise only.
Reformatting PET images by direct fitting of the proportional grid system: implementation and validation
Author(s):
Yaorong Ge;
John R. Votaw;
Richard A. Margolin;
J. Michael Fitzpatrick;
Robert J. Maciunas;
Robert M. Kessler
An important application of positron emission tomography (PET) is the correlation of patterns of regional brain metabolism with functional and behavioral abnormalities in various diseases. This requires examination of a large number of scans from many subjects, an activity which is facilitated by using a standard coordinate system for image representation. Talairach's proportional grid system is a popular and suitable system for this purpose. It relies on correct localization of the mid-sagittal plane and the line passing through the anterior commissure (AC) and posterior commissure (PC) in the image volume. These structures, however, have not been readily identifiable in PET images. With the increasing resolution of PET scanners, though, it may now be possible to establish the position of the AC-PC line by fitting it directly from anatomical landmarks which are recognizable in PET images and have definite relationships to the AC and PC. This approach is appealing for both practical and technical reasons. In this paper we present an improved method for direct fitting of the AC-PC line and mid-sagittal plane. We evaluate the quality of the approximation in terms of its precision and accuracy, and also assess accuracy by direct comparison with independently registered magnetic resonance (MR) images.
Recent studies of transform image enhancement
Author(s):
Sabzali Aghagolzadeh;
Okan K. Ersoy
Blockwise transform image enhancement techniques are discussed. It is shown that the best transforms for transform image coding, namely the scrambled real discrete Fourier transform, the discrete cosine transform, and the discrete cosine-III transform, are also the best for image enhancement. Three enhancement techniques discussed in detail are alpha-rooting, modified unsharp masking, and filtering motivated by the human visual system (HVS) response. With proper modifications, it is observed that unsharp masking and HVS-motivated filtering without nonlinearities are basically equivalent. Block effects are completely removed by using an overlap-save technique in addition to the best transform.
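A minimal sketch of alpha-rooting in one common form is shown below: a full-image DCT with coefficient magnitudes raised to the power alpha and signs kept, which boosts high-frequency detail relative to the large low-frequency terms. The blockwise and overlap-save details discussed in the paper are omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def alpha_root_enhance(image, alpha=0.9):
    """Enhance an image by shrinking DCT coefficient magnitudes to their
    alpha-th power (0 < alpha < 1) while preserving their signs."""
    coeffs = dctn(image.astype(float), norm='ortho')
    rooted = np.sign(coeffs) * np.abs(coeffs) ** alpha
    return idctn(rooted, norm='ortho')
```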
Adaptive smoothing of MR images by fitting planes
Author(s):
Prakash Adiseshan;
Tracy L. Faber;
Roderick W. McColl;
Ronald M. Peshock M.D.
We present a method for adaptively smoothing magnetic resonance (MR) images while preserving discontinuities. We assume that the spatial behavior of MR data can be captured by a first-order polynomial defined at every pixel. The formulation itself is similar to Leclerc's work on piecewise-smooth image segmentation, but we use the graduated non-convexity (GNC) algorithm as an optimizing tool for obtaining the solution. This requires initial values for the polynomial coefficients of order greater than zero. These values are obtained by using ideas similar to those found in robust statistics. This initial step is also useful in determining the variance of the noise present in the input image. The variance is related to an important parameter alpha required by the GNC algorithm. Firstly, this replaces the heuristic choice of alpha with a quantity that can be estimated. Secondly, it is useful especially in situations where the variance of the noise is not uniform across the image. We present results on synthetic and MR images. Though the results of this paper are given using first-order polynomials, the formulation can handle higher-order polynomials.
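For intuition, the first-order (planar) model at a pixel can be initialized with an ordinary least-squares fit over a small window; a sketch (the robust-statistics refinement described above is not shown):

```python
import numpy as np

def fit_local_plane(patch):
    """Least-squares plane fit a*x + b*y + c to a square image patch;
    returns the coefficients (a, b, c)."""
    h, w = patch.shape
    y, x = np.mgrid[:h, :w]
    design = np.column_stack([x.ravel(), y.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(design, patch.ravel(), rcond=None)
    return coeffs
```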
Digital image enhancement for display of bone radiographs
Author(s):
Kelly Rehm;
Michael J. Pitt;
Theron W. Ovitt;
William J. Dallas
Show Abstract
... all three images displayed on laser-printed film. Two radiologists made subjective quality judgments of each film individually and then ranked the trio in terms of quality. The results indicated that the observers preferred ASAHE-processed low-resolution films to both high- and low-resolution unprocessed films.
Automatic gray-scale transformation in the Konica direct digitizer system
Author(s):
Sumiya Nagatsuka;
Akiko Kano;
Hisanori Tsuchino;
Hideyuki Handa
Show Abstract
In medical x-ray radiography it is desirable that the density in the region of interest (ROI, the area containing the image content important for diagnosis) be stable, and that the gradation be adjusted so that the structure of the human body and the shading of lesions can be observed easily. In developing the Konica Direct Digitizer, a digital radiographic image input apparatus intended for chest and abdomen images, we have developed a new automatic gray-scale transformation algorithm that provides images well suited for diagnosis. Its distinguishing characteristic is that it automatically identifies the position of the ROI in the image and determines the gray-scale transformation conditions from the image data within that ROI. We evaluated 50 clinical images acquired with the Konica Direct Digitizer with respect to the accuracy of the ROI position established by the automatic gray-scale transformation and the resulting image rendering. Both were found to be stable and sufficiently accurate. We plan to improve the accuracy further by enlarging the set of clinical images and by applying the system to other anatomical regions.
Image processing on the image with pixel noise bits removed
Author(s):
Keh-Shih Chuang;
Christine Wu
Show Abstract
Our previous studies used statistical methods to assess the noise level in digital images of various radiological modalities. We separated the pixel data into signal bits and noise bits and demonstrated visually that the removal of the noise bits does not affect the image quality. In this paper we apply image enhancement techniques to noise-bits-removed images and demonstrate that the removal of noise bits has no effect on the image properties. The image processing techniques used are gray-level look-up table transformation, the Sobel edge detector, and 3-D surface display. Preliminary results show no noticeable difference between the original image and the noise-bits-removed image for the look-up table operation and Sobel edge enhancement. There is a slight enhancement of the slicing artifact in the 3-D surface display of the noise-bits-removed image.
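A minimal sketch of the comparison described, masking the assumed noise bits and looking at Sobel edge maps, might look like the following; the parameter n_noise_bits stands in for the statistically determined noise level, which the abstract does not specify numerically.

```python
import numpy as np
from scipy.ndimage import sobel

def drop_noise_bits(img, n_noise_bits):
    """Zero the n least significant (noise) bits of each pixel."""
    mask = ~np.uint16((1 << n_noise_bits) - 1)
    return img.astype(np.uint16) & mask

def sobel_magnitude(img):
    """Gradient magnitude from horizontal and vertical Sobel responses."""
    g = img.astype(np.float64)
    return np.hypot(sobel(g, axis=0), sobel(g, axis=1))

# Comparing the edge maps of the original and noise-bits-removed images, e.g.
#   np.abs(sobel_magnitude(img) - sobel_magnitude(drop_noise_bits(img, 2)))
# mirrors the paper's check that removing noise bits leaves edges intact.
```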
Restoration of noise aliasing and moire distortion in CCD-based diagnostic x-ray imaging
Author(s):
Cornelis H. Slump
Show Abstract
Sampling an analog signal causes aliasing interference if the signal has frequency components higher than the folding frequency, i.e., half the sampling frequency. This distortion originates in the folding of these higher frequency components into the lower signal frequency spectrum, with interference as a result. Usually aliasing artifacts are avoided by analog low-pass filtering of the signal prior to digitization. However, in the area of digitizing video signals from a CCD-based sensor such an anti-alias filter is not feasible. The problem grows in importance as increasing resolution requirements in many imaging applications push for CCD technology. This contribution reports ongoing research to minimize the effects of two alias-based distortions, i.e., noise and moire patterns. In fluoroscopy, the amount of x-ray photons contributing to the image is restricted because of dose regulations. Quantum noise is clearly present in the images. The impinging white x-ray photons are spectrally shaped by the MTF of the imaging system. The resulting spectrum extends beyond the spatial Nyquist frequency of the CCD sensor. Aliased noise structures obscure diagnostic detail and, especially in real-time sequences, are annoying to look at. Another alias-based distortion is due to the anti-scatter grid, which is applied in order to reduce the number of scattered x-ray photons contributing to the image. Scattered photons give rise to a low-frequency blur of the images. An anti-scatter grid consists of a large number of parallel lead strips separated by x-ray translucent material and focused at the x-ray point source. The grid period is of the same order of magnitude as the CCD pixel size, which causes moire pattern distortion in the images. In this contribution we discuss the restoration of both distortions. Aliased noise is minimized following a Wiener-type filtering approach. The moire pattern is attacked by inverse filtering. The analysis and simulations are presented, and applications to medical images are shown.
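A minimal frequency-domain sketch of a Wiener-type attenuation of noise is given below; the way the signal power is estimated (spectral subtraction of a scalar noise power) is an assumption for illustration and not the authors' restoration filter.

```python
import numpy as np

def wiener_denoise(img, noise_power):
    """Wiener-type attenuation in the frequency domain: each Fourier
    coefficient is scaled by S / (S + N), where S is a crude estimate of its
    signal power and N is the (assumed known) noise power per frequency bin,
    expressed in the same units as |F|**2."""
    F = np.fft.fft2(img.astype(np.float64))
    power = np.abs(F) ** 2
    signal = np.maximum(power - noise_power, 0.0)
    gain = signal / (signal + noise_power + 1e-12)
    return np.real(np.fft.ifft2(F * gain))
```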
Hemodynamic parameters estimation from cineangiograms using optical flow
Author(s):
Rosaire Mongrain;
Michel J. Bertrand;
Jean Meunier;
R. Camarero;
M. G. Bourassa
Show Abstract
In this paper, physical constraints are used to adapt known optical flow algorithms to study blood circulation in healthy and stenosed arteries by tracking dye dispersion using radiological image sequences. To this end, the penalty functional method of constrained optimization is put forward. Simulated radiological images taking into account blood flow and contrast medium dispersion are computed to test the proposed algorithms. Results from these simulated radiological images are presented and discussed.
Application of weighted-majority minimum-range filters in the detection and sizing of tumors in mammograms
Author(s):
Lucille Amy Glatt;
Harold G. Longbotham;
Thomas L. Arnow;
Daniel Shelton;
Peter Ravdin
Show Abstract
In image processing, the appropriate solution is often specific to the problem: the filter window and sampling pattern chosen to suppress, pass, or enhance a particular shape must reflect the geometry of that shape. As a demonstration of this fact, we detect suspect tumors in mammograms using a weighted-majority minimum-range filter with different sampling patterns and windows. Several methods have been developed to automate the detection of tumors in mammograms; we show that traditional windowing or sampling methods may be replaced by a hexagonal method that more accurately reflects the geometry of the problem and could improve the techniques already in existence. Several theorems involving a hexagonal filter window are presented, followed by the results of our application to mammograms.
Improvement of medical images using Bayesian processing
Author(s):
Chin-Tu Chen;
Xiaolong Ouyang;
Wing H. Wong;
Xiaoping Hu
Show Abstract
We have developed a Bayesian method for image processing that uses the Gibbs random field model to incorporate a priori information for the purpose of improving the image quality. The types of prior information incorporated include the property of local continuity (i.e., neighboring pixels within a homogeneous region are similar), the limited spatial resolution of the imaging system, and possibly, some prior knowledge derived from corresponding images acquired by other modalities. We use the concept of `line sites' to separate regions that exhibit distinctly different tissue characteristics. A smoothing scheme is applied to each homogeneous region using a Gibbs distribution function. An efficient computational technique called iterative conditional average (ICA) method, which calculates the conditional mean values for each pixel and line site iteratively until convergence, is employed to compute the point estimates of the images. We have used this Bayesian approach to process images in nuclear medicine, digital radiography, and magnetic resonance imaging (MRI). In the processed images, we observed improvements in the spatial resolution, image contrast, and reduction in noise level.
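A toy stand-in for the iterative conditional average point estimate, smoothing within regions while respecting a given set of line sites, could look like the sketch below; the 4-neighbour prior, the beta weight, and the precomputed edge_mask are simplifications of the Gibbs model described, not the authors' exact formulation.

```python
import numpy as np

def iterative_conditional_average(y, edge_mask, beta=1.0, n_iter=50):
    """Toy ICA-style smoother: each pixel is repeatedly replaced by a
    weighted average of its observed value and the mean of the 4-neighbours
    that are not flagged as line sites (edge_mask True = discontinuity).
    This is the conditional mean under a Gaussian data term plus a Gaussian
    MRF prior with weight beta."""
    x = y.astype(np.float64).copy()
    for _ in range(n_iter):
        shifted = [np.roll(x, 1, 0), np.roll(x, -1, 0),
                   np.roll(x, 1, 1), np.roll(x, -1, 1)]
        nbr_ok = [np.roll(~edge_mask, 1, 0), np.roll(~edge_mask, -1, 0),
                  np.roll(~edge_mask, 1, 1), np.roll(~edge_mask, -1, 1)]
        nbr_sum = sum(s * ok for s, ok in zip(shifted, nbr_ok))
        nbr_cnt = sum(ok.astype(float) for ok in nbr_ok)
        x = (y + beta * nbr_sum) / (1.0 + beta * nbr_cnt)
    return x
```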
Determination of left ventricular ejection fraction in technetium-99m-methoxy isobutyl isonitrile radionuclide angiocardiography
Author(s):
Malcolm H. Davis;
Bahman Rezaie;
Frederick L. Weiland M.D.
Show Abstract
Abnormal left ventricular function is a diagnostic indication of cardiac disease. Left ventricular function is most commonly quantified by ejection fraction measurements. This paper presents a novel approach for the measurement of left ventricular ejection fraction (LVEF) using the recently introduced myocardial imaging agent, technetium-99m methoxy isobutyl isonitrile (99mTc-sestamibi). The approach utilizes computer image processing techniques to determine LVEF in equilibrium 99mTc-sestamibi multiple gated radionuclide angiography (RNA). Equilibrium RNA is preferred to first-pass RNA techniques due to the higher signal-to-noise ratio of equilibrium RNA resulting from longer image acquisition times. Data from 23 patients, symptomatic of cardiac disease, indicate that LVEFs determined using this radionuclide technique correlate well with contrast x-ray single-plane cineangiography (r = 0.83, p < 0.0000003).
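For reference, the standard count-based ejection fraction used in equilibrium gated RNA is the background-corrected end-diastolic minus end-systolic counts divided by the background-corrected end-diastolic counts; the numbers in the example below are made up, and the abstract does not state the exact correction scheme used.

```python
def ejection_fraction(ed_counts, es_counts, background_counts):
    """Count-based LVEF from equilibrium gated RNA: background-corrected
    end-diastolic (ED) and end-systolic (ES) counts in the LV region of
    interest, taken from the gated image frames."""
    ed = ed_counts - background_counts
    es = es_counts - background_counts
    return (ed - es) / ed

# Example with made-up ROI counts:
print(ejection_fraction(12500, 6200, 2100))   # ≈ 0.61
```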
Hepatic blood vessel recognition using anatomical knowledge
Author(s):
Noriko Inaoka;
Hideo Suzuki;
Morimichi Fukuda
Show Abstract
This paper describes a method for segmentation and recognition of hepatic blood vessels from axial MR image sequences. We propose a method for accurate segmentation of blood vessel components and recognition of the blood vessel structure by utilizing two-dimensional (2-D) and three-dimensional (3-D) anatomical information. The method consists of two parts: (1) extraction of blood vessel components and other anatomical structures, and (2) recognition of the 3-D blood vessel structure using anatomical models. The system first extracts candidate hepatic blood vessel segments from each 2-D image automatically using the directional contrast filter and other image processing techniques. The contour of the liver is extracted semi-automatically. Using knowledge of segmental anatomy and of the way blood vessels extend, the system searches for points connecting the segments in different slices and recognizes the hepatic vascular system (portal veins and hepatic veins). The knowledge is implemented in a 2-D shape model and a tree model.
Computer detection of soft tissue masses in digital mammograms: automatic region segmentation
Author(s):
David H. Davies;
David R. Dance
Show Abstract
The automatic segmentation of pairs of mammograms has been investigated using texture features and a clustering algorithm. One hundred and twenty clinical mammograms (60 pairs) were digitized. These were divided into training and test sets of 40 and 80 films respectively. The test set was further divided into radiographically very dense breasts and breasts which were predominantly fatty. The results for the test set show that the segmentation algorithm is successful in segmenting accurately 23/30 (77%) of the pairs of fatty breasts into equivalent regions, but is unsuccessful in segmenting the very dense breasts.
Computerized bone analysis of hand radiographs
Author(s):
Ewa Pietka;
Michael F. McNitt-Gray;
Theodore R. Hall M.D.;
H. K. Huang
Show Abstract
A computerized approach to the problem of skeletal maturity is presented. The analysis of a computed radiography (CR) hand image yields features that can be used to assess the skeletal age of pediatric patients. It is performed on a standard left hand radiograph. First, epiphyseal regions of interest (EROI) are located. Then, within each EROI, the distal, middle, and proximal phalanges are separated. This serves as a basis for locating the extremities of the epiphyses and metaphyses. Next, the diameters of the epiphyses and metaphyses are calculated. Finally, the ratio of epiphyseal diameter to metaphyseal diameter is calculated. A pilot study indicated that these features are sensitive to changes in the anatomical structure of a growing hand and can be used in skeletal age assessment.
Automatic detection of boundaries of brain tumor
Author(s):
Yi Lu;
Lucia J. Zamorano;
Federico Moure;
Steven G. Schlosser
Show Abstract
An important computational step in computer-aided neurosurgery is the extraction of the boundaries of lesions in a series of images. Currently, in many clinical applications, the boundaries of lesions are traced manually. Manual methods are not only tedious but also subjective, leading to substantial inter- and intraobserver variability. Furthermore, recent studies show that human observation of a lesion is not sufficient to guarantee accurate localization. With clinical images, possible confusion between lesions and coexisting normal structures (such as blood vessels) is a serious constraint on an observer's performance. Automatic detection of lesions is a non-trivial problem. Typically the boundaries of lesions in CT images are of single-pixel width, and the gradient at the lesion boundary varies considerably. As many studies show, these characteristics of lesions within CT images, in conjunction with the generally low signal-to-noise ratio of CT images, render simple boundary detection techniques ineffective. In this paper we characterize brain lesions in CT images and describe a knowledge-guided boundary detection algorithm. The algorithm is both data- and goal-driven.
Knowledge-based organ identification from CT images
Author(s):
Masaharu Kobashi;
Linda G. Shapiro
Show Abstract
Segmentation of CT images into their component organs is difficult to perform automatically, because standard methods such as edge tracking, region growing, and simple thresholding do not work. Absolute thresholds are not powerful enough to extract organs, since the gray tones of an organ vary widely depending on the part of the organ, the patient, the CT scanner used, and the scanner setup. Edge tracking often fails, because edges around organs are incomplete, and the vagueness of CT images can mislead most conventional edge detection methods. The nonhomogeneity of organs rules out a region growing approach. Dosimetrists, who trace the boundaries of organs for radiation treatment planning, use their prior experience with the images and the expected shapes of the organs on various slices to identify organs and their boundaries. The goal of our current work is to develop a knowledge-based recognition system that utilizes knowledge of anatomy and CT imaging. We have developed a system for analyzing CT images of the human abdomen. The system features the use of constraint-based dynamic thresholding, negative-shape constraints to rapidly rule out infeasible segmentations, and progressive landmarking that takes advantage of the different degrees of certainty of successful identification of each organ. The results of a series of initial tests on our training data of 100 images from five patients indicate that the knowledge-based approach is promising.
Medical image recognition based on Dempster-Shafer reasoning
Author(s):
Shiuh-Yung James Chen;
Wei-Chung Lin;
Chin-Tu Chen
Show Abstract
In this paper, we present the basic components of a prototype expert system that is capable of recognizing major brain structures given a set of integrated brain images. The proposed medical image understanding system, which is based on the blackboard architecture, employs the Dempster-Shafer (D-S) model as its inference engine to mimic the reasoning process of a human expert in the task of dividing a set of spatially correlated x-ray CT, proton density (PD), and T2-weighted MR images into semantically meaningful entities and identifying these entities as respective brain structures. Within the framework of D-S reasoning, the belief interval is adopted to represent the strength of evidence and the likelihood of hypotheses. By combining the blackboard-based architecture with the D-S model, the proposed system can perform the recognition task efficiently. Several experimental results are given to illustrate the performance of the proposed system.
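Dempster's rule of combination, the core of a D-S inference engine, can be written compactly as below; the hypothesis labels and mass values are invented for illustration and do not reflect the authors' knowledge base.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments. Masses are
    dicts mapping frozensets of hypotheses (e.g. candidate brain structures)
    to belief mass; mass assigned to empty intersections (the conflict K)
    is renormalised away."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Illustrative evidence from two modalities about one region:
m_ct = {frozenset({'ventricle'}): 0.6, frozenset({'ventricle', 'csf'}): 0.4}
m_mr = {frozenset({'ventricle'}): 0.5, frozenset({'csf'}): 0.2,
        frozenset({'ventricle', 'csf'}): 0.3}
print(dempster_combine(m_ct, m_mr))
```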
Development of a computer-aided detection system for lung cancer diagnosis
Author(s):
Hideo Suzuki;
Noriko Inaoka;
Hirotsugu Takabatake;
Masaki Mori;
Soichi Sasaoka;
Hiroshi Natori;
Akira Suzuki
Show Abstract
This paper describes a modified system for automatic detection of lung nodules by means of chest x-ray image processing techniques. The objective of the system is to help radiologists to improve their accuracy in cancer detection. It is known from retrospective studies of chest x-ray images that radiologists fail to detect about 30 percent of lung cancer cases. A computerized method for detecting lung nodules would be very useful for decreasing the proportion of such oversights. Our proposed system consists of five sub-systems, for image input, lung region determination, nodule detection, rule-based false-positive elimination, and statistical false-positive elimination. In an experiment with the modified system, using 30 lung cancer cases and 78 normal control cases, we obtained figures of 73.3 percent and 89.7 percent for the sensitivity and specificity of the system, respectively. The system has been developed to run on the IBM* PS/55* and IBM RISC System/6000* (RS/6000), and we give the processing time for each platform.
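For reference, the reported figures are consistent with the usual definitions of sensitivity and specificity applied to the 30 cancer and 78 control cases; the split into 22 detected cancers and 70 correctly passed controls is inferred, not stated in the abstract.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 22/30 cancer cases detected, 70/78 normal controls with no detection:
print(sensitivity_specificity(tp=22, fn=8, tn=70, fp=8))   # (0.733, 0.897)
```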
Study of the cellular sociology through quantitative microscopy and topographical analysis
Author(s):
Christophe Dussert;
Jacqueline Palmari;
Monique Rasigni;
Francis Kopp;
Yolande Berthois;
Xue-Fen Dong;
Daniel Isnardon;
Georges Rasigni;
Pierre-Marie Martin
Show Abstract
We have developed a methodology to quantitatively study tumor cell heterogeneity from a topographical point of view through the concept of the minimal spanning tree graph. This concept is applied to the quantitation of the degree of order that may exist in a cell population and, by combining biological and mathematical approaches, to the analysis of the dynamic and metabolic interactions responsible for this topographical organization. The method is used to analyze the cell cycle phases in tumor cell lines: the cells are detected from an optical microscopy image of the preparation by using algorithms that preserve the cell topography. The cells appear to be differently, and non-randomly, spatially distributed depending on the cycle phase in which they fall. These topographical behaviors allow us to deduce some unexpected proliferation characteristics of the cells and to compare them to a numerical model of the cell cycle in an interactive population, developed from cellular automata theory. The method may also be applied to the topographical analysis of cells expressing hormone receptors (namely, oestrogenic ones). More generally, it may be used to analyze and quantify cellular sociology in both its normal (morphogenesis) and pathological (cancer, therapeutic responses, ...) aspects.
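A minimal sketch of the minimal-spanning-tree measurement over detected cell centres, summarizing the edge-length distribution as a rough order statistic, is shown below; the uniform random points are placeholder data standing in for segmented cell coordinates.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_edge_lengths(cell_xy):
    """Minimal spanning tree over detected cell centres: the mean and spread
    of its edge lengths give a simple quantitative handle on how ordered or
    clustered a cell population is. cell_xy is an (N, 2) array of centre
    coordinates from the segmented microscopy image."""
    dist = squareform(pdist(cell_xy))       # full pairwise distance matrix
    mst = minimum_spanning_tree(dist)       # sparse matrix of MST edges
    lengths = mst.data
    return lengths.mean(), lengths.std()

rng = np.random.default_rng(0)
print(mst_edge_lengths(rng.uniform(0, 100, size=(50, 2))))
```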
Computerized 3-D reconstruction of complicated anatomical structure
Author(s):
Arne Andreasen;
Asbjorn M. Drewes;
Joergen Erik Assentoft
Show Abstract
In the study of the rabbit hippocampal region, images of 430 serial sections were aligned by a `parameter-shift' algorithm. The resulting 3-D matrix represents a fixed and stained but `whole' rabbit brain. From this virtual object the slice procedure, displacement, and re-alignment could be computer simulated and the artifacts associated with these procedures estimated.
Surface topography of the optic nerve head from digital images
Author(s):
Sunanda Mitra;
M. Ramirez;
Jose Morales
Show Abstract
A novel algorithm for three-dimensional (3-D) surface representation of the optic nerve head from digitized stereo fundus images has been developed. The 3-D digital mapping of the optic nerve head is achieved by fusion of stereo depth map of a fundus image pair with a linearly stretched intensity image of the fundus. The depth map is obtained from the disparities of the features in the stereo fundus image pair, computed by a combination of cepstral analysis and a correlation-like scanning technique in the spatial domain. At present, the visualization of the optic nerve head cupping in glaucoma is clinically achieved, in most cases, by stereoscopic viewing of a fundus image pair of the suspected eye. The quantitative representation of the optic nerve head surface topography following this algorithm is not computationally intensive and should provide more useful and reproducible information than just qualitative stereoscopic viewing of the fundus.
Anatomical-functional image correlation problem: an interactive tool based on a hybrid method
Author(s):
Patrizia Pisani;
Riccardo Guzzardi;
C. R. Bellina;
O. Sorace
Show Abstract
The accurate localization of anatomical structures in functional images is a crucial point in positron emission tomography (PET) studies, due to the relatively poor spatial resolution of PET and to the strict dependence on the metabolic behavior of the tracer used. Until now, two main software approaches to this problem have been used: the automatic matching of PET images with a corresponding anatomical image, and the use of a computerized atlas of brain anatomical structures. The decision to adopt a `hybrid' method, allowing the user to rely on automatic image matching while also being able to intervene at any moment in the anatomical localization process, has led to the development of a user-friendly, interactive image processing tool, including a computer-driven correlation process and a set of general-purpose image processing routines. The package, called CHIP (correlative hybrid image processing tool), has been implemented in C on a SUN 3/60 graphics workstation with the X11-R4 window system, using the XView toolkit (from OpenLook) to build the user interface, and will soon be ported to a SUN SPARC-II workstation with the aim of enhancing its performance. The correlation is implemented by extracting contours from an anatomical scan (CT or MRI) of the patient, acquired with a physical head-holder so that the slices match the PET slices, and by transforming the contour image into a set of significant regions of interest (ROIs); these can undergo additional editing by the user to correct possible inaccuracies generated by the automated edge-finding process. CHIP also provides a number of general-purpose image utilities, including: (1) spatial filtering -- e.g., smoothing, edge crispening, median filtering, and convolution with a user-defined filter; and (2) a histogram package -- histogram drawing, automatic equalization, and user-friendly manual histogram rescaling and cutting. The histogram package, together with the spatial filtering routines, permits the user to enhance anatomical images. This is particularly important when using CT images, which typically, especially in the brain cortex, show contrast inadequate for detecting small cerebral structures.
New method for microscopic image enhancement in medicine
Author(s):
Xiang-Qi Wu;
Ying Chen
Show Abstract
The inspection of a sputum smear through the microscope is an important means of cytological diagnosis; in this sense, smear examination is a key element of quality control in medicine. Since a microscopic image with a complicated background has poor contrast, and the object pixels make up only a small part of the whole image, it is difficult to separate the object from the random background. In this paper, we propose a method for contrast stretching and linear structure enhancement based on local statistical features. We use the gray-level value along the edge direction instead of the mean gray value of the local region. Theoretical and experimental results indicate that this method is more effective than the method suggested by Lee, both in rejecting random noise and in enhancing contrast.
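For context, a Lee-style local-statistics enhancement, the kind of baseline the authors compare against, can be sketched as follows; the window size, gain, and noise-variance handling are illustrative, and the paper's edge-direction modification is not reproduced here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_stats_enhance(img, noise_var, gain=2.0, size=7):
    """Lee-style local-statistics enhancement: each pixel is pushed away
    from its local mean by an amount that grows with the locally estimated
    signal variance, so low-contrast structure is stretched while flat,
    noisy background is left mostly unchanged."""
    x = img.astype(np.float64)
    mean = uniform_filter(x, size)
    var = uniform_filter(x * x, size) - mean ** 2
    k = np.clip((var - noise_var) / np.maximum(var, 1e-12), 0.0, 1.0)
    return mean + gain * k * (x - mean)
```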
Expert consultation system for lung diseases (tumor)
Author(s):
Xiang-Qi Wu;
Ning Zhong;
Weifeng Cheng;
Hou-Jin Chen;
Lijin He;
Jiazhang Xu;
Min Li
Show Abstract
This paper presents a computerized expert consultation system for diagnosing diseases which are common but difficult to discriminate; it is aimed at lung diseases with globular pathologic changes. The knowledge base of the system consists of two modules in which over 300 rules are stored. The knowledge is decomposed into thirteen planes, over which hierarchical control strategies are designed to search for solutions quickly and completely. We use OPS83 as the development tool. In addition, an image processing subsystem is constructed as part of the system. We incorporate artificial intelligence into the image analysis, combining image processing techniques with expert knowledge so as to increase the diagnostic reliability for lung diseases.
Algorithm to reduce clip artifacts in CT images
Author(s):
Heang K. Tuy
Show Abstract
CT images of cross-sections containing metallic implants, such as prosthetic devices or tooth fillings, often have severe artifacts. Such artifacts may hinder medical diagnosis. We present an algorithm to reduce these artifacts. Unlike most algorithms available in the literature, instead of avoiding the use of data values along rays going through clips, we are making use of those data values explicitly to correct artifacts. Moreover, this new algorithm can be considered as an improvement over our previous algorithm in the sense that the sharpness of the final processed image is preserved even if projection data are not available. In this algorithm, we utilize both forward-projection and convolution back-projection very extensively, taking full advantage of our present capability to perform these processes accurately and with high speed. A significant improvement in image quality has been observed in both phantom and clinical studies.
Automatic diagnosis of the lung mechanical function
Author(s):
Pedro de Blas;
Luis Janez;
Augusto Perez
Show Abstract
Traditionally, the test of lung mechanical function (LMF) has required the patient's active collaboration. In the 1950s, mechanical means of determining lung mechanical function became available, leading to great advances in its understanding. Later, the use of isotopes in medicine, because of their clinical availability, contributed to the development of this medical specialty. Today, technological progress makes it possible to obtain the LMF quickly, without the patient's collaboration, from a single frontal-view chest radiograph. The system uses variables which are different from the traditional ones as a result of two factors: (1) its theoretical basis (General Systems Theory), and (2) the technology used (CCD camera, PC/AT-386, image board, two monitors, magneto-optical disk, laser printer and video printer). The diagnosis system has been tested and validated at the Hospital Central de Asturias, Oviedo, Spain.
Statistical differentiation between malignant and benign prostate lesions from ultrasound images
Author(s):
Saganti B. Premkumar;
A. Glen Houston;
David E. Pitts;
Richard J. Babaian
Show Abstract
Digitized transaxial sequential ultrasound images of the prostate are analyzed statistically, and a methodology to distinguish malignant from benign prostate lesions is being developed. For a given stepwise planimetric scan of the prostate, classification criteria for identifying benign and malignant lesions on the basis of statistical variability measures are presented. Results obtained from applying the statistical variability measure approach to both cancerous (of various grades) and non-cancerous (pathologically confirmed) hypoechoic ultrasound lesions of the prostate are described. The accuracy of the present statistics-based image analysis in predicting pathology from ultrasound images is 87%.
Application of nonlinear filtering in mammograms
Author(s):
Wei Qian;
Maria Kallergi;
Laurence P. Clarke;
Kevin S. Woods;
Robert A. Clark M.D.
Show Abstract
A computer-assisted method for the quantification and classification of mammographic parenchymal patterns (MPP) is proposed. Enhancement of the mammographic images is performed using order statistic filtering, a superior method compared to the median filtering techniques previously reported. Two complementary methods are proposed for the quantification and classification of MPP: a local thresholding technique and an edge detection method, respectively. The latter is based on non-linear filtering using order statistics, or a linear combination of order statistics, specifically tailored to identify the boundaries and fine details of MPP. The edge detection method proved to be useful for the difficult differentiation of Wolfe's P2 and DY MPP, which have similar breast density and common characteristics. The results suggest that the proposed methods are potentially useful for the identification and quantitation of MPPs as required for mass screening of breast cancer.
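A generic filter built from a linear combination of order statistics (an L-filter) of the kind referred to can be sketched as below; the window size and weights are illustrative and are not the filters tailored by the authors.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def l_filter(img, weights, size=3):
    """L-filter over a size x size window: the window samples are sorted and
    combined with fixed weights. All-zero weights except the middle one
    reproduce the median filter; other choices (e.g. trimmed means) trade
    noise suppression against edge preservation. Output covers valid
    (fully inside) windows only."""
    w = np.asarray(weights, dtype=np.float64)
    assert w.size == size * size
    patches = sliding_window_view(img.astype(np.float64), (size, size))
    ordered = np.sort(patches.reshape(*patches.shape[:2], -1), axis=-1)
    return ordered @ (w / w.sum())

# A 3x3 trimmed mean discarding the two smallest and two largest samples:
# l_filter(image, weights=[0, 0, 1, 1, 1, 1, 1, 0, 0])
```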
Comparison of supervised pattern recognition techniques and unsupervised methods for MRI segmentation
Author(s):
Laurence P. Clarke;
Robert Paul Velthuizen;
Lawrence O. Hall;
James C. Bezdek;
Amine M. Bensaid;
Martin L. Silbiger
Show Abstract
The use of image-intensity-based segmentation techniques is proposed to improve MRI contrast and provide greater confidence levels in 3-D visualization of pathology. Pattern recognition methods are proposed using both supervised and unsupervised approaches. This paper emphasizes the practical problems in the selection of training data sets for supervised methods, which result in instability of the segmentation. An unsupervised method, namely fuzzy c-means, that does not require training data sets and produces comparable results is proposed.
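A compact sketch of the standard fuzzy c-means iteration on multispectral voxel intensities is given below; the number of clusters, the fuzzifier m, and the fixed iteration count are illustrative defaults rather than the parameters used in the paper.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Unsupervised fuzzy c-means on intensity vectors X (n_voxels x
    n_channels, e.g. T1/T2/PD values). Alternates the standard membership
    and centroid updates; no training data is required."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per voxel
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers
```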
Semiautomatic classification of alveolar bone quality
Author(s):
Charles F. Hildebolt D.D.S.;
Michael W. Vannier M.D.;
Michael James Gravier;
M. Fineberg;
Ronald K. Walkup;
Robert H. Knapp;
Dominic J. Zerbolio Jr.;
Michael K. Shrout
Show Abstract
A semiautomated, radiograph-based classifier of alveolar bone quality for dry skulls was developed. Bone quality was based on the assessment of surface features, such as the resorption of cortical bone and the presence of vertical defects. The consensus of two trained observers was used to rate 50 mandibular quadrants of 29 skulls as having normal or poor alveolar bone quality. Bitewing radiographs were taken of the mandibles and digitized with a 35-mm, solid-state slide scanner at 1024 x 1520 x 8 bits. Regions of interest (ROI) of alveolar bone between the mandibular first and second molars were chosen. For these ROIs, gray-scale values were plotted as histograms. Nonzero portions of the histogram were mapped to a 100-cell scale and cumulative percentage frequency curves of these were calculated. Average cumulative frequency distributions were calculated for 14 cases with normal bone quality and 11 cases with poor bone quality. These distributions were used to develop an automatic classifier based on differences between the cumulative frequency curve for each case and the average cumulative frequency curves for normal and poor quality bone. The bone quality of 43 of the 50 quadrants was successfully determined with this classifier. Of the seven misses, two were from one skull with severely tilted teeth; three were associated with bleached museum specimens; and the remaining two appeared to be failures of the classifier. These preliminary results are encouraging. This classifier will be applied to a longitudinal series of bitewings of patients to predict alveolar bone loss.
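The classifier described can be sketched roughly as follows; the choice of summed absolute difference as the distance between cumulative curves is an assumption, since the abstract does not state the exact measure.

```python
import numpy as np

def cumulative_curve(roi, n_cells=100):
    """Map the nonzero gray levels of the ROI onto a 100-cell scale and
    return the cumulative percentage frequency curve of their histogram."""
    vals = roi[roi > 0].ravel()
    hist, _ = np.histogram(vals, bins=n_cells, range=(vals.min(), vals.max()))
    return 100.0 * np.cumsum(hist) / hist.sum()

def classify_bone(roi, normal_curve, poor_curve):
    """Assign the quality class whose average cumulative curve is closer
    (summed absolute difference) to the ROI's own curve; normal_curve and
    poor_curve are the class-average curves from the training cases."""
    curve = cumulative_curve(roi)
    d_normal = np.abs(curve - normal_curve).sum()
    d_poor = np.abs(curve - poor_curve).sum()
    return 'normal' if d_normal < d_poor else 'poor'
```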
Computer-assisted diagnosis for lung nodule detection using a neural network technique
Author(s):
Shih-Chung Benedict Lo;
Matthew T. Freedman M.D.;
Jyh-Shyan Lin;
Brian Krasner;
Seong Ki Mun
Show Abstract
The potential advantages of using digital techniques instead of film-based radiology have been discussed very extensively for the past ten years. These advantages are found mainly in the computer management of picture archiving and communication systems (PACS). In addition, computer-assisted diagnosis (CADx) could potentially enhance radiological services in the future. Lung nodule detection has been a clinically difficult subject for many years. Most of the literature indicates that the finding rate for lung nodules (size range from 3 mm to 15 mm) is only about 65%, and that about 30% of missed nodules can be found retrospectively. In recent research, image processing techniques such as thresholding and morphological analysis have been employed to enhance true-positive detection. However, these methods still produce many false-positive detections. We have used neural networks to distinguish true positives from the suspected areas of interest generated from the signal-enhanced image. The initial results show that the trained neural network program can increase true-positive detections and drastically reduce the number of false-positive detections. The program can perform three modes of lung nodule detection: (1) thresholding, (2) profile matching analysis, and (3) neural network. It is fully automatic and has been implemented on a DEC 5000/200 workstation. The total processing time for all three methods is less than 35 seconds. We are planning to link this workstation to our PACS for further clinical evaluation. In this paper, we report our neural network and fast algorithms for various image processing techniques for lung nodule detection and show the results of the initial studies.
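A minimal stand-in for the nodule/false-positive classifier, a one-hidden-layer network trained on candidate-region features, is sketched below; the features (e.g. contrast, size, circularity of each suspected area), the architecture, and the training details are assumptions, as the abstract does not specify them.

```python
import numpy as np

def train_nodule_classifier(feats, labels, hidden=8, lr=0.1, epochs=2000, seed=0):
    """One-hidden-layer network (sigmoid units, squared-error loss, batch
    gradient descent) scoring candidate regions as nodule vs. false positive.
    feats is (n_candidates, n_features); labels is 0/1 per candidate."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (feats.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    y = labels.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        h = sig(feats @ W1 + b1)              # hidden activations
        out = sig(h @ W2 + b2)                # nodule probability
        d_out = (out - y) * out * (1 - out)   # backprop through output layer
        d_h = (d_out @ W2.T) * h * (1 - h)    # backprop through hidden layer
        W2 -= lr * h.T @ d_out / len(y); b2 -= lr * d_out.mean(0)
        W1 -= lr * feats.T @ d_h / len(y); b1 -= lr * d_h.mean(0)
    return lambda f: sig(sig(f @ W1 + b1) @ W2 + b2)   # scoring function
```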