Bayes and medical imaging: it's time to make priors a priority
Author(s):
David R. Haynor
Bayesian approaches to the analysis of medical images have gained in popularity in the last two decades, in spite of their computational complexity, because they offer a consistent framework for dealing with problems such as model selection and the proper tradeoffs between measurements and prior expectations. A Bayesian approach to the analysis of medical images requires that one give thought to the specification of an image prior which reflects what we know about human anatomy and the broad spatial characteristics of the distribution of disease within the body. We present an approach, based on Markov random fields, to developing prior distributions for medical image analysis and show some preliminary results.
Segmentation of MRI brain scans into gray matter, white matter, and CSF
Author(s):
Tamas Sandor;
Hoo-Tee Ong;
Vladimir I. Valtchinov;
Marilyn Albert;
Ferenc A. Jolesz
An algorithm is described that can separate gray matter, white matter and CSF in brain scans taken with 3DFFT T1-weighted gradient echo magnetic resonance imaging. Although the algorithm is fully automated, it requires brain contours as input that utilize user-defined features. The inter- and intra-operator errors stemming from the variability of the contour definition and affecting the segmentation were assessed by using coronal brain scans of 19 subjects. The inter-operator errors were (1.61 ± 2.38)% (P = 0.01) for gray matter, (0.31 ± 2.06)% (P = 0.53) for white matter and (0.28 ± 3.84)% (P = 0.76) for cerebrospinal fluid (CSF). The intra-operator error was (0.28 ± 0.55)% (P > 0.04) for gray matter, (0.40 ± 0.37)% (P = 0.0002) for white matter and (0.26 ± 1.31)% (P = 0.39) for CSF.
Knowledge-based 3D segmentation of the brain in MR images for quantitative multiple sclerosis lesion tracking
Author(s):
Elizabeth Fisher;
Robert M. Cothren Jr.;
Jean A. Tkach;
Thomas J. Masaryk;
J. Fredrick Cornhill
Brain segmentation in magnetic resonance (MR) images is an important step in quantitative analysis applications, including the characterization of multiple sclerosis (MS) lesions over time. Our approach is based on a priori knowledge of the intensity and three-dimensional (3D) spatial relationships of structures in MR images of the head. Optimal thresholding and connected-components analysis are used to generate a starting point for segmentation. A 3D radial search is then performed to locate probable locations of the intra-cranial cavity (ICC). Missing portions of the ICC surface are interpolated in order to exclude connected structures. Partial volume effects and inter-slice intensity variations in the image are accounted for automatically. Several studies were conducted to validate the segmentation. Accuracy was tested by calculating the segmented volume and comparing it to known volumes of a standard MR phantom. Reliability was tested by comparing calculated volumes of individual segmentation results from multiple images of the same subject. The segmentation results were also compared to manual tracings. The average error in volume measurements for the phantom was 1.5% and the average coefficient of variation of brain volume measurements of the same subject was 1.2%. Since the new algorithm requires minimal user interaction, variability introduced by manual tracing and interactive threshold or region selection was eliminated. Overall, the new algorithm was shown to produce a more accurate and reliable brain segmentation than existing manual and semi-automated techniques.
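As a concrete illustration of the starting point described above, here is a minimal sketch of optimal thresholding followed by connected-components analysis. Otsu's method standing in for the "optimal" threshold, the scipy/scikit-image calls, and the largest-component heuristic are assumptions of this sketch, not the authors' implementation.

```python
# Hypothetical sketch: global threshold + 3D connected components as a
# segmentation starting point. Library choice and heuristics are assumed.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def initial_brain_mask(volume: np.ndarray) -> np.ndarray:
    """Return a binary mask of the largest bright connected component."""
    t = threshold_otsu(volume)               # stand-in "optimal" threshold
    binary = volume > t
    labels, n = ndimage.label(binary)        # 3D connected components
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))  # keep largest component
```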
Feature space analysis of MRI
Author(s):
Hamid Soltanian-Zadeh;
Joe P. Windham;
Donald J. Peck
This paper presents the development and performance evaluation of an MRI feature space method. The method is useful for: identification of tissue types; segmentation of tissues; and quantitative measurements on tissues, to obtain information that can be used in decision making (diagnosis, treatment planning, and evaluation of treatment). The steps of the work accomplished are as follows: (1) Four T2-weighted and two T1-weighted images (before and after injection of Gadolinium) were acquired for ten tumor patients. (2) Images were analyzed by two image analysts according to the following algorithm. The intracranial brain tissues were segmented from the scalp and background. The additive noise was suppressed using a multi-dimensional non-linear edge-preserving filter which preserves partial volume information on average. Image nonuniformities were corrected using a modified lowpass filtering approach. The resulting images were used to generate and visualize an optimal feature space. Cluster centers were identified on the feature space. Then images were segmented into normal tissues and different zones of the tumor. (3) Biopsy samples were extracted from each patient and were subsequently analyzed by the pathology laboratory. (4) Image analysis results were compared to each other and to the biopsy results. Pre- and post-surgery feature spaces were also compared. The proposed algorithm made it possible to visualize the MRI feature space and to segment the image. In all cases, the operators were able to find clusters for normal and abnormal tissues. Also, clusters for different zones of the tumor were found. Based on the clusters marked for each zone, the method successfully segmented the image into normal tissues (white matter, gray matter, and CSF) and different zones of the lesion (tumor, cyst, edema, radiation necrosis, necrotic core, and infiltrated tumor). The results agreed with those obtained from the biopsy samples. Comparison of the pre-surgery to the post-surgery and post-radiation feature spaces confirmed that the tumor was not present in the second study but that radiation necrosis was generated as a result of the radiation.
Segmentation of MR images using multiresolution wavelet representations
Author(s):
Binh Pham
A wavelet-based multiscale scheme for segmenting MR images is presented, which aims to extract structures of different sizes by performing segmentation from coarse to fine scales. This scheme alleviates some common difficulties encountered in region-based segmentation, avoiding over-segmentation as well as preventing small regions from being missed. It also allows users more effective control over the segmentation process in order to extract features suitable for their own purposes.
Statistical modeling of oriented line patterns in mammograms
Author(s):
Tim C. Parr;
Christopher J. Taylor;
Susan M. Astley;
Caroline R. M. Boggis
Malignant breast lesions in x-ray mammograms are often characterized by abnormal patterns of linear structures. Architectural distortions and stellate lesions are examples of patterns frequently presenting with an appearance of radiating linear structures. Attempts to automatically detect these abnormalities have generally concentrated on features of known importance, such as radiating linear structure concurrency, spread of focus and radial distance. We present an alternative statistically based representation that is both complete and uncommitted. Our representation places no emphasis on the features known to be important, yet clearly incorporates them. We present results for an experiment in which 92% of 9600 lesion/non-lesion pixels were classified correctly. Using a set of 150 high resolution digitized mammograms, a lesion detection sensitivity of 80% was obtained at 0.38 false positives per image.
Opacity detection and characterization in mammograms using bilateral comparison and local characteristics
Author(s):
Jean-Marc Dinten;
Guillaume Montemont;
Michel Darboux
Detection of opacities in mammograms, and especially of spiculated opacities, is important for the early detection of breast cancer. Because of the superimposition of complex structures in a mammogram, this is a difficult task. In this paper we propose a detection scheme combining, on the one hand, information provided by an analysis of each single mammogram and, on the other hand, information provided by a comparison between the right and left mammograms. First, the two mammograms are filtered and registered; potential pathological sites are then obtained on the basis of a distance criterion, adapted to opacity detection, between the two mammograms. Then a robust segmentation method delimits a region of interest (ROI) surrounding each potential pathological site. To limit the number of false positives and to provide physicians with quantitative parameters, each detected region is characterized by a set of four parameters. This global approach has been evaluated on mammograms of the MIAS database, representative of different opacity shapes and different backgrounds. The results show that all sites identified as malignant were detected, with a low rate of false detections.
Digital measurement of gene expression in a cDNA microarray
Author(s):
Edward R. Dougherty;
Yidong Chen;
Sinan Batman;
Michael L. Bittner
Gene expression can be quantitatively analyzed by hybridizing fluor-tagged mRNA to targets on a cDNA microarray. Comparison of expression levels arising from co-hybridized samples is achieved by taking ratios of average expression levels for individual genes. The present paper concerns image processing to automatically segment digitized microarrays and measure median gene expression levels across cDNA target sites. The main difficulty arises from determination of the target site when signal intensity is low. Segmentation must be accomplished for target sites that can possess highly unstable geometry and consist of a relatively small number of pixels. Segmentation must also be computationally efficient. The present paper proposes a nonparametric statistical method that separates target site from local background using the Mann-Whitney test.
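The statistical core, separating a target site from local background with the Mann-Whitney test, can be sketched as below. The quarter-of-the-window background sample, the significance level, and the shrink-and-retest loop are illustrative assumptions rather than the paper's exact procedure.

```python
# Hypothetical sketch: keep the brightest pixels of a candidate window only
# if they are significantly brighter than a background sample.
import numpy as np
from scipy.stats import mannwhitneyu

def segment_target(window: np.ndarray, alpha: float = 0.001, k: int = 32):
    """window: 2D patch around one target site; returns a boolean mask."""
    flat = np.sort(window.ravel())
    background = flat[: flat.size // 4]       # assumed background sample
    candidate = flat[-k:]                     # k brightest pixels
    while candidate.size > 4:
        _, p = mannwhitneyu(candidate, background, alternative="greater")
        if p < alpha:                         # significantly above background
            return window >= candidate[0]
        candidate = candidate[1:]             # shrink and retest
    return np.zeros_like(window, dtype=bool)  # no detectable signal
```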
Collimation detection for digital radiography
Author(s):
Jiebo Luo;
Robert A. Senn
In computed radiography (CR) imaging, collimation is frequently employed, using x-ray-opaque material, to shield body parts from unnecessary radiation exposure and to minimize radiation scattering. The radiation field is therefore the diagnostic region of interest, which has been exposed directly to x-rays. We present an image analysis system for the recognition of the collimation or, equivalently, detection of the radiation field. The purpose is to (1) facilitate optimal tone scale enhancement, which should be driven only by the diagnostically useful part of the image data, and (2) minimize the viewing flare caused by the unexposed area. This system consists of three stages of operations: (1) pixel-level detection and classification of collimation boundary transition pixels; (2) line-level delineation of candidate collimation blades; and (3) region-level determination of the collimation configuration. This system has been reduced to practice and tested on 807 images of 11 exam types, and a success rate in excess of 99% has been achieved for tone scale enhancement and masking. In general, the few failure cases have no significant impact on either tone scale or flare minimization because of the intrinsic nature of the algorithm. Due to the novel design of the system, its computational efficiency lends itself to on-line operations.
Computer-assisted diagnosis in CT angiography of abdominal aortic aneurysms
Author(s):
Martin Fiebich;
Myrosia M. Tomiak;
Roger M. Engelmann;
James McGill;
Kenneth R. Hoffmann
The purpose of this study was to develop methods for automatic 3D-segmentation and automatic quantification of vascular structures in CT angiographic studies, e.g., abdominal aortic aneurysms. Methods for segmentation were developed based on thresholding, maximum gradient, and second derivative techniques. All parameters for the segmentation are generated automatically, i.e. no user interaction is necessary for this process. Median filtering of all images is initially performed to reduce the image noise. The algorithm then automatically identifies the starting point inside the aorta for the volume growing. The segmentation of the vascular tree is achieved in two steps. First, only the aorta and small parts of branch vessels are segmented by using strong restrictions in the parameters for threshold and gradient. A description of the aorta is generated by fitting the detected outer border of the aorta with an ellipse. This description includes centerline, direction, contour, eccentricity, and area. In the second step, segmentation parameters are changed automatically for segmentation of branch vessels. A shaded surface display of the segmented structures is then generated. The segmentation of the aorta appears accurate, is fast, and the 3D display can be manipulated in real time. The quantitative description of the aorta is reliable, giving reproducible information. Total CPU time for the segmentation and description is less than five minutes on a standard workstation. Time-consuming manual segmentation and parameterization of vascular structures are obviated, with 3D visualization and quantitative results available in minutes instead of hours. This technique for segmentation and description of the aorta and renal arteries shows the feasibility of computer assisted diagnosis in CT angiographic studies without user interaction. Besides the description, a rapid 3D view of the vessels is generated, often needed by the physician and normally only achievable by time-consuming manual segmentation.
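The first segmentation pass, volume growing from an automatically found seed under tight intensity and gradient restrictions, might look like the following sketch; the thresholds, 6-connectivity, and gradient computation are assumptions for illustration.

```python
# Hypothetical sketch of seeded volume growing with intensity and gradient
# constraints; parameters are illustrative, not the authors' values.
import numpy as np
from collections import deque

def grow(volume, seed, lo, hi, max_grad):
    grad = np.linalg.norm(np.gradient(volume.astype(float)), axis=0)
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        if mask[z, y, x]:
            continue
        if not (lo <= volume[z, y, x] <= hi) or grad[z, y, x] > max_grad:
            continue                                   # reject this voxel
        mask[z, y, x] = True
        for dz, dy, dx in ((1,0,0),(-1,0,0),(0,1,0),(0,-1,0),(0,0,1),(0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return mask
```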
Object-oriented approach to the automatic segmentation of bones from pediatric hand radiographs
Author(s):
Hyeonjoon Shim;
Brent J. Liu;
Ricky K. Taira;
Theodore R. Hall M.D.
The purpose of this paper is to develop a robust and accurate method that automatically segments phalangeal and epiphyseal bones from digital pediatric hand radiographs exhibiting various stages of growth. The development of this system draws principles from object-oriented design, model-guided analysis, and feedback control. A system architecture called 'the object segmentation machine' was implemented incorporating these design philosophies. The system is aided by a knowledge base where all model contours and other information such as age, race, and sex, are stored. These models include object structure models, shape models, 1-D wrist profiles, and gray level histogram models. Shape analysis is performed first by using an arc-length orientation transform to break down a given contour into elementary segments and curves. Then an interpretation tree is used as an inference engine to map known model contour segments to data contour segments obtained from the transform. Spatial and anatomical relationships among contour segments act as constraints from the shape model. These constraints aid in generating a list of candidate matches. The candidate match with the highest confidence is chosen to be the current intermediate result. Verification of intermediate results is performed by a feedback control loop.
Automated system for periodontal disease diagnosis
Author(s):
Salvador Estela Albalat;
Mariano Luis Alcaniz-Raya;
M. Carmen Juan;
Vincente Grau Colomer;
Carlos Monserrat
The evolution of periodontal disease is one of the most important pieces of information for clinicians in achieving correct planning and treatment. Clinical measurement of periodontal sulcus depth is the most important datum for knowing the exact state of periodontal disease. These measurements must be made periodically to study the evolution of bone resorption around teeth. The time course of resorption indicates the aggressiveness of periodontitis. Manual probes with direct reading are commonly used. Mechanical probes give an automatic signal, but this method uses complicated and heavy probes whose use is limited to university researchers. The probe position must be the same to obtain a correct diagnosis. Digital image analysis of periodontal probing provides a practical, accurate and easy tool. Gum and plaque indices could also be digitally measured with this method.
Automatic clutter-free volume rendering for MR angiography using fuzzy connectedness
Author(s):
Jayaram K. Udupa;
Dewey Odhner;
Jie Tian;
George Holland;
Leon Axel
In MR and CT angiography, clutter due to artifacts or nearby high-intensity structures, such as other uninteresting vessels and bone (in CT), prevents clear visualization of the vessel under investigation. In this paper, we offer an alternative to the manual editing that is commonly used to remove clutter. The method is automatic and requires the user to point at structures on a 3D MIP display. The method utilizes recently developed fuzzy connected object delineation algorithms to extract the vessels of interest. Since the resulting definition is nonbinary, it can be displayed via MIP or more sophisticated volume rendering.
Segmentation of magnetic resonance image using fractal dimension
Author(s):
Joseph K. K. Yau;
Sau-hoi Wong;
Kwok-Leung Chan
In recent years, much research has been conducted in the three-dimensional visualization of medical images. This requires a good segmentation technique. Many early works use first-order and second-order statistics. First-order statistical parameters can be calculated quickly but their effectiveness is influenced by many factors such as illumination, contrast and random noise of the image. Second-order statistical parameters, such as spatial gray level co-occurrence matrix statistics, take longer to compute but can extract the textural information. In this investigation, two different parameters, namely the entropy and the fractal dimension, are employed to perform segmentation of the magnetic resonance images of the head of a male cadaver. The entropy is calculated from the spatial gray level co-occurrence matrices. The fractal dimension is calculated by the reticular cell counting method. Several regions of the human head are chosen for analysis. They are the bone, gyrus and lobe. Results show that the parameters are able to segment different types of tissue. The entropy gives very good results but it requires a very long computation time and a large amount of memory. The performance of the fractal dimension is comparable with that of the entropy. It is simple to estimate and demands less memory.
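The reticular cell counting method is a box-counting estimate of fractal dimension. A minimal sketch, assuming a binarized square patch whose side is a power of two:

```python
# Hypothetical sketch: box-counting fractal dimension of a binary texture
# patch; the binarization step and box sizes are assumed.
import numpy as np

def box_counting_dimension(binary: np.ndarray) -> float:
    """binary: square 2D boolean array with side a power of two."""
    n = binary.shape[0]
    sizes, counts = [], []
    size = n
    while size >= 2:
        blocks = binary.reshape(n // size, size, n // size, size)
        occupied = blocks.any(axis=(1, 3)).sum()   # boxes touching foreground
        sizes.append(size)
        counts.append(max(int(occupied), 1))
        size //= 2
    # fractal dimension = slope of log(count) versus log(1/size)
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(slope)
```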
Image segmentation and tissue characterization in three-dimensional intravascular ultrasound images
Author(s):
Xiangmin Zhang;
Steven C. DeJong;
Charles R. McKay M.D.;
Milan Sonka
In this paper, we report an automated approach to plaque tissue characterization in three-dimensional intravascular ultrasound images. Our previously reported automated method for coronary wall and plaque segmentation in intravascular ultrasound pullback sequences represents the first step of the method. Tissue characterization into two classes of soft and hard plaque is based on texture analysis and pattern recognition. Texture description features included gray-level-based measures, co-occurrence matrices, run length measures, and fractal-based measures. Performance of the method was assessed in cadaveric coronary arteries by comparison to the observer-defined plaque composition. Overall classification correctness of 90% was achieved.
Automatic segmentation of MR images using self-organizing feature mapping and neural networks
Author(s):
Javad Alirezaie;
M. Ed Jernigan;
Claude Nahmias
In this paper we present an unsupervised clustering technique for multispectral segmentation of magnetic resonance (MR) images of the human brain. Our scheme utilizes the self-organizing feature map (SOFM) artificial neural network (ANN) for feature mapping and generates a set of codebook vectors for each tissue class. Features are selected from three image spectra: T1, T2 and proton density (PD) weighted images. An algorithm has been developed for isolating the cerebrum from the head scan prior to the segmentation. To classify the map, we extend the network by adding an associative layer. Three tissue types of the brain, white matter, gray matter and cerebral spinal fluid (CSF), are segmented accurately. Any unclassified tissue remained in an unknown tissue class.
Quantification of MR brain image sequence by adaptive structure probabilistic self-organizing mixture
Author(s):
Yue Joseph Wang;
Chi-Ming Lau;
Tulay Adali;
Matthew T. Freedman M.D.;
Seong Ki Mun
This paper presents a neural network based technique for the quantification of MR brain image sequences. We studied image statistics to justify the correct use of the standard finite normal mixture model and formulated image quantification as a distribution learning problem. From information theory, we used relative entropy as the information distance measure and developed an adaptive structure probabilistic self-organizing mixture to estimate the parameter values. The new learning scheme has the capability of achieving flexible classifier shapes in terms of winner-takes-in probability splits of data, allowing data to contribute simultaneously to multiple regions. The result is unbiased and holds the asymptotic properties of maximum likelihood estimation. To achieve a fully automatic function and incorporate the correlation between slices, we utilized a newly developed information theoretic criterion (minimum conditional bias/variance) to determine the suitable number of mixture components such that the network can adjust its structure to the characteristics of each image in the sequence. Compared with the results of algorithms based on expectation-maximization, K-means, and Kohonen's self-organizing map, the new method yields a very efficient and accurate performance.
Biomechanical strength versus spinal trabecular bone structure assessed using contact radiography and texture analysis
Author(s):
Xiaolong Ouyang;
John C. Lin;
Thomas M. Link;
Peter Augat;
Ying Lu;
David Newitt;
Thomas Lang;
Harry K. Genant;
Sharmila Majumdar
Cubic specimens (N = 26, 12 mm × 12 mm × 12 mm) of human vertebrae were cut along three orthogonal anatomical orientations, i.e. superior-inferior (SI), medial-lateral (ML) and anterior-posterior (AP). Contact radiographs of the bone cubes along these orientations were obtained and digitized using a laser scanner with a pixel size of 50 μm. A standardized digital image processing procedure was designed to assess trabecular bone structure. Global gray level thresholding and local thresholding algorithms were used to extract the trabecular bone network. Apparent trabecular bone fraction (ABV/TV), apparent trabecular thickness (I.Th), mean intercept separation (I.Sp), and number of nodes (N.Nd) were measured from the extracted trabecular network. The box-counting fractal dimension (Fr.D) of the trabecular bone pattern was also measured. Quantitative computed tomography (QCT) was then used to obtain bone mineral density (BMD). The specimens were further tested in compression along the same orthogonal orientations, and the corresponding Young's moduli (YM) were calculated. Paired t-tests showed that the mean values of the texture parameters (except ABV/TV) and YM along the SI orientation were significantly different (p < 0.05) from those along the ML and AP orientations. In comparison, the mean values along the ML and AP orientations were not significantly different. Correlation coefficients from linear regression and non-parametric correlation between YM, BMD and texture parameters showed a wide range, but they differed depending on the orthogonal plane of radiographic projection and the direction of biomechanical testing. In conclusion, trabecular texture parameters correlated significantly with BMD and YM. Trabecular texture parameters reflect the anisotropy of trabecular structure.
χ² isocontours: predictors of performance in nonlinear estimation tasks at low SNR
Author(s):
Stefan P. Mueller;
Frank J. Rybicki;
Craig K. Abbey;
Stephen C. Moore;
Marie Foley Kijewski
Maximum-likelihood (ML) estimation is an established paradigm for the assessment of imaging system performance in nonlinear quantitation tasks. At high signal-to-noise ratio (SNR), maximum likelihood estimates are asymptotically normally distributed, unbiased, and efficient, thereby attaining the Cramer-Rao bound (CRB). Therefore, at high SNR the CRB is useful as a predictor of estimation performance. At low SNR, however, the achievable parameter variances are substantially larger than the CRB and the estimates are no longer Gaussian distributed. This implies that intervals derived from the CRB or other tighter symmetric variance bounds do not contain the appropriate fraction of the estimates expected from the normal distribution. We have derived the mathematical relationship between χ² and the expected probability density of the ML estimates, and have justified the use of χ²-isocontours to describe the estimates. We validated this approach by simulation of spherical objects imaged with a Gaussian PSF. The parameters, activity concentration and size, were estimated simultaneously by ML, and variances and covariances calculated over 1000 replications per condition. At low SNR, where the CRB is no longer achieved, χ²-isocontours provide a robust predictor of the distribution of the ML estimates. At high SNR, the χ²-isocontours approach asymptotically the contour derived from the Fisher information matrix.
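For reference, the textbook asymptotics behind this discussion (standard results, not the authors' derivation) are:

```latex
% High-SNR limit: the ML estimate is asymptotically Gaussian and efficient,
% with covariance bounded below by the inverse Fisher information (the CRB).
\hat{\theta}_{\mathrm{ML}} \xrightarrow{d}
  \mathcal{N}\bigl(\theta,\, I(\theta)^{-1}\bigr),
\qquad
\operatorname{cov}(\hat{\theta}) \succeq I(\theta)^{-1}.
% In general, the log-likelihood ratio for k parameters is asymptotically
% chi-square, so a region expected to contain a fraction \alpha of the
% estimates is a likelihood (chi-square) isocontour:
-2\ln\frac{L(\theta)}{L(\hat{\theta}_{\mathrm{ML}})} \sim \chi^{2}_{k},
\qquad
\Bigl\{\theta : -2\ln\tfrac{L(\theta)}{L(\hat{\theta}_{\mathrm{ML}})}
  \le \chi^{2}_{k,\alpha}\Bigr\}.
```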
KL transformation of spatially invariant image sequences
Author(s):
James B. Farison;
Yong-gab Park;
Qun Yu;
Hong Lu
This paper investigates the special properties and results involved in the application of the Karhunen-Loeve (KL) transformation, also called principal component analysis (PCA) or Hotelling transform, to linearly-additive, spatially-invariant (LA SI) image sequences such as arise in many medical imaging applications (SPECT temporal studies, multi-parameter MRI, etc.), multispectral remote sensing, and elsewhere. The special structure of LA SI image sequences provides some interesting results both for the KL analysis and for the resulting principal component images. Simulated images and mathematical results indicate the KL implications for the special structure of LA SI images, in relation to the statistical order and feature characteristics of the image sequence. Simulated image sequences are understood by the mathematical results and illustrate the characteristics of KL compression and reconstruction for LA SI images. The well-known and widely used KL transform is a general and powerful image compression technique based on the statistical variance of the image data. However, it does not explicitly acknowledge specific features or their individual characteristics in an image set. For LA SI images, this may be an important limitation in relation to other methods of analysis and compression for such images.
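The KL transform itself is the standard PCA computation; a generic sketch via the SVD (not the paper's LA SI-specific analysis) is:

```python
# Generic KL/PCA of an image sequence: each frame is one sample vector and
# the principal-component images are right singular vectors of the centered
# data matrix.
import numpy as np

def kl_transform(sequence: np.ndarray):
    """sequence: (n_images, H, W); returns PC images, eigenvalues, coeffs."""
    n, h, w = sequence.shape
    data = sequence.reshape(n, -1).astype(float)
    data -= data.mean(axis=0)                 # remove the mean image
    u, s, vt = np.linalg.svd(data, full_matrices=False)
    eigenvalues = s**2 / (n - 1)              # variance along each component
    pc_images = vt.reshape(-1, h, w)          # principal-component images
    coefficients = u * s                      # projections of each frame
    return pc_images, eigenvalues, coefficients
```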
Hit-noise reduction in portal images: a comparison between wavelet- and rank-order-based methods
Author(s):
William J. Dallas;
Eugene J. Gross;
Hans Roehrig;
Thomas L. Vogelsong
The system we have been investigating is a real-time portal imager which incorporates an x-ray-to-light converter and a CCD sensor. When the high-energy x-rays of the radiation therapy beam circumvent the converter and impinge directly on the CCD sensor, they cause artifacts in the image that have the appearance of scattered bright points of light. It is as if salt has been shaken onto a photograph. In therapy imaging the artifact is termed 'direct hit' noise; the generic name is impulse noise. The goal of this investigation has been to determine an efficient method for eliminating 'direct hit' artifacts produced on a CCD camera in a portal imaging system. Because the photon energies for portal imaging are so high, it is very difficult to prevent the hits. For that reason, we have investigated post-processing methods to remove the noise from captured images. Two families of filtering algorithms are applied to the images: a wavelet-based family and a rank-order-based family.
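One simple member of the rank-order family is a conditional median filter that replaces only outlier pixels, which suits the salt-like appearance of direct hits; the 3x3 window and MAD-based threshold below are illustrative choices, not necessarily the filters studied here.

```python
# Hypothetical sketch: despike bright "direct hit" pixels by comparing each
# pixel with its local median and replacing only strong positive outliers.
import numpy as np
from scipy.ndimage import median_filter

def remove_direct_hits(image: np.ndarray, k: float = 5.0) -> np.ndarray:
    med = median_filter(image, size=3)
    residual = image.astype(float) - med
    sigma = 1.4826 * np.median(np.abs(residual))   # robust noise scale (MAD)
    if sigma == 0:
        return image.copy()
    hits = residual > k * sigma                    # hits are bright outliers
    out = image.copy()
    out[hits] = med[hits]
    return out
```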
Wavelet-based image denoising using generalized cross validation
Author(s):
Maarten Jansen;
Adhemar Bultheel
De-noising algorithms based on wavelet thresholding replace small wavelet coefficients by zero and keep or shrink the coefficients with absolute value above the threshold. The optimal threshold minimizes the error of the result as compared to the unknown, exact data. To estimate this optimal threshold, we use generalized cross validation. This procedure does not require an estimation for the noise energy. Originally, this method assumes uncorrelated noise, and an orthogonal wavelet transform. In this paper we investigate the possibilities of this method for less restrictive conditions.
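A sketch of the idea, assuming the usual GCV objective GCV(t) = (||y - y_t||^2 / N) / (N_0 / N)^2, where y_t is the soft-thresholded coefficient vector and N_0 the number of zeroed coefficients; the grid search and PyWavelets call are illustrative choices.

```python
# Hypothetical sketch: pick a soft threshold by minimizing GCV over a grid;
# note that no noise-level estimate is needed.
import numpy as np
import pywt

def gcv(coeffs: np.ndarray, t: float) -> float:
    shrunk = pywt.threshold(coeffs, t, mode="soft")
    n = coeffs.size
    n_zero = int(np.count_nonzero(shrunk == 0))
    if n_zero == 0:
        return np.inf
    return (np.sum((coeffs - shrunk) ** 2) / n) / (n_zero / n) ** 2

def gcv_threshold(coeffs: np.ndarray) -> float:
    grid = np.linspace(0.0, np.max(np.abs(coeffs)), 200)[1:]
    return min(grid, key=lambda t: gcv(coeffs, t))   # grid-search minimizer
```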
Optimal interpolation for medical images using MAP method
Author(s):
Cliff X. Wang;
Wesley E. Snyder;
Griff L. Bilbro;
Peter Santago II;
David M. Honea
The problem of high resolution image reconstruction is approached in this project as an optimization problem. Assuming an ideal image is blurred, noise corrupted, and sub-sampled to produce the measured image, we pose the estimation of the enlarged image as a maximum-a-posteriori (MAP) restoration process, and the mean field annealing optimization technique is used to solve the multi-modal objective function. The iterative interpolation process incorporates two terms into its objective function. The first term is the 'noise' term, which models the blurring and subsampling of the acquisition system. By using the system point spread function and the noise characteristics, the measured pixels on the sub-sampled grid are mapped into the grid of the original image. A second term, the a-priori term, is formulated to enforce prior constraints such as noise smoothing and edge preservation in the interpolation process. The resulting image is a noise-reduced, deblurred, and enlarged image. The proposed algorithm is used to zoom several medical images, along with existing techniques such as pixel replication, linear interpolation, and spectrum extrapolation. The resulting images indicate that the proposed algorithm can smooth noise extensively while keeping the image features. The images zoomed by the other methods suffer from noise and look less favorable in comparison.
Segmentation of 3D objects using live wire
Author(s):
Alexandre Xavier Falcao;
Jayaram K. Udupa
We have been developing user-steered image segmentation methods for situations which require considerable user assistance in object definition. In such situations, our segmentation methods aim (1) to provide effective control to the user on the segmentation process while it is being executed and (2) to minimize the total user's time required in the process. In the past, we have presented two paradigms, referred to as live wire and live lane, for segmenting 3D/4D object boundaries in a slice-by-slice fashion. In this paper, we introduce a 3D extension of the live wire approach which can further reduce the time spent by the user in the segmentation process. In 2D live wire, given a slice, for two specified points (pixel vertices) on the boundary of the object, the best boundary segment (as a set of oriented pixel edges) is the minimum-cost path between the two points. This segment is found via dynamic programming in real time as the user anchors the first point and moves the cursor to indicate the second point. A complete 2D boundary in this slice is identified as a set of consecutive boundary segments forming a 'closed,' 'connected,' 'oriented' contour. The strategy of the 3D extension is that, first, users specify contours via live-wiring on a few orthogonal slices. If these slices are selected strategically, then we have a sufficient number of points on the 3D boundary of the object to do live-wiring automatically on all axial slices of the 3D scene. Based on several validation studies involving segmentation of the bones of the foot in MR images, we found that the 3D extension of live wire is statistically significantly (p less than 0.0001) more repeatable and 2 - 6 times faster (p less than 0.01) than the 2D live wire method and 3 - 15 times faster than manual tracing.
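At its core, 2D live wire is a minimum-cost path search between anchor points. A simplified sketch using Dijkstra's algorithm on a pixel grid with a single precomputed cost image (real live wire operates on oriented pixel edges with several combined cost terms):

```python
# Simplified live-wire core: Dijkstra shortest path on a 2D cost image,
# 8-connected; cost should be low along strong boundaries.
import heapq
import numpy as np

def live_wire_path(cost: np.ndarray, start, end):
    """start, end: (row, col) tuples; returns the pixel path as a list."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if (y, x) == end:
            break
        if d > dist[y, x]:
            continue                       # stale heap entry
        for dy, dx in ((1,0),(-1,0),(0,1),(0,-1),(1,1),(1,-1),(-1,1),(-1,-1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and d + cost[ny, nx] < dist[ny, nx]:
                dist[ny, nx] = d + cost[ny, nx]
                prev[(ny, nx)] = (y, x)
                heapq.heappush(heap, (dist[ny, nx], (ny, nx)))
    path, node = [], end
    while node != start:                   # walk predecessors back to start
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```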
Shape-based interactive three-dimensional medical image segmentation
Author(s):
Kevin P. Hinshaw;
James F. Brinkley M.D.
Accurate image segmentation continues to be one of the biggest challenges in medical image analysis. Simple, low-level vision techniques have had limited success in this domain because of the visual complexity of medical images. This paper presents a 3-D shape model that uses prior knowledge of an object's structure to guide the search for its boundaries. The shape model has been incorporated into scanner, an interactive software package for image segmentation. We describe a graphical user interface that was developed for finding the surface of the brain and explain how the 3-D model assists with the segmentation process. Preliminary experiments show that with this shape-based approach, a low-resolution boundary for a surface can be found with two-thirds less work for the user than with a comparable manual method.
Digital radiography segmentation of a scoliotic vertebral body using deformable models
Author(s):
Claude Kauffmann;
Jacques A. de Guise
A computer segmentation method based on the active contour model (gsnake) and using a priori knowledge is adapted and used to automatically detect the contour lines of the vertebral body in digital radiographs of the scoliotic spine. These contour lines are used to identify corresponding anatomical landmarks for the 3D reconstruction of the scoliotic spine using a bi-planar technique. Automated digitization of the landmarks should drastically reduce the time and the variability of the current manual digitization method and improve precision in 3D reconstruction. Our procedure is applied to ten different radiographs of the scoliotic spine and the results are presented and discussed. A variability study is also performed and variations in vertebral landmark locations are evaluated. In comparison with the manual landmark identification method, we show that the automated procedure can be used in a supervised environment for the precise extraction of anatomical landmarks.
Segmentation of cortical surface and interior brain structures using active-surface/active-volume templates
Author(s):
David J. Schlesinger;
John W. Snell;
Lois E. Mansfield;
James R. Brookeman;
J. Hunter Downs III;
James M. Ortega;
Neal F. Kassell M.D.
Advanced applications such as neurosurgical planning and simulation require both surface and interior anatomical information in order to be truly effective. We are developing a segmentation scheme based on collections of active surface templates embedded within an active volume. This composite system encodes high-level anatomical knowledge of both cortical surface and interior brain structures in a self-assembling model of a reference, or atlas brain. Following initialization of the surface templates in the test brain volume, the cortical surface templates deform to achieve a segmentation of the surface of the brain. The displacements of the cortical surface templates cause an increase in the potential elastic energy of the active volume, and a subsequent minimization of this elastic energy is used to define a volumetric warp between the reference brain and the test data. This warp is used to deform the active surface models of the deep structure in the brain to their approximate final configurations, after which a further energy minimization step achieves a final segmentation of the deep structures. The method uses the results of the surface segmentation step as a-priori information regarding the likely deformation of the deep surface models. Initial tests illustrate the potential of the system in regard to the segmentation of cortical surface and deep brain anatomy. Results are analyzed in terms of improvements that will increase the efficacy of the system.
Lymph node segmentation using active contours
Author(s):
David M. Honea;
Yaorong Ge;
Wesley E. Snyder;
Paul F. Hemler;
David J. Vining
Node volume analysis is very important medically. An automatic method of segmenting the node in spiral CT x-ray images is needed to produce accurate, consistent, and efficient volume measurements. The method of active contours (snakes) is proposed here as a good solution to the node segmentation problem. Optimum parameterization and search strategies for using a two-dimensional snake to find node cross-sections are described, and an energy normalization scheme which preserves important spatial variations in energy is introduced. Three-dimensional segmentation is achieved without additional operator interaction by propagating the 2D results to adjacent slices. The method gives promising segmentation results on both simulated and real node images.
Robust partial-volume tissue classification of cerebral MRI scans
Author(s):
Lucien Nocera;
James C. Gee
In magnetic resonance images (MRI), a voxel may contain multiple tissue types (the partial volume effect). We concentrate on the classification of these voxels using an adaptive Bayesian approach and pay particular attention to practical implementation problems induced by the modeling of partial volume voxels. Moreover, we show that this algorithm is suitable for performing tissue classification of brain MRI scans, which in turn can be used for visualization or quantitative analysis, or for further purposes such as brain image registration. Results are presented showing the efficacy of the method compared to a binary classification process.
Segmentation of the aorta using a temporal active contour model with regularization scheduling
Author(s):
Horace H. S. Ip;
Rudolf Hanka;
Lilian Hongying Tang
This paper describes a method for automated assessment of the movement of the aorta from CT images using a new formulation of the active contour model to segment and track the aortic boundary and its movements through the temporal image sequence. The active contour model is capable of exploiting prior knowledge and posterior information of the image content, e.g. the anatomical shape of the aorta and the image gradient intensity of the aortic boundaries, as well as physical constraints such as continuity of motion along the time dimension.
Automatic heart localization from a 4D MRI data set
Author(s):
Wolfgang Sorgel;
Vincent Vaerman
The purpose of the presented work is the automatic localization of the heart from 4D multi-slice magnetic resonance images (MRI). Well known active contour extraction techniques such as 'snakes' or 'balloons' require precise initialization, which in existing systems is mostly done interactively by the user. A new method for the automatic initialization of such models is presented here for application to 4D MRI datasets acquired from the human heart. The method consists of two main steps: a global localization of the heart and a coarse initialization of the contours. Furthermore, it is shown how this initialization can be used for an automatic fine segmentation by an active contour model. The temporal analysis of the heart beat cycle is well suited for localization purposes. A 'temporal variance image' is thus first computed at each spatial slice location. These variance images consistently highlight the heart due to its wall movement and the heavy blood flow. By thresholding the variance images, projecting them into a single image, thresholding again and selecting the largest resulting object, a 'binary confidence mask' is computed for the heart region. This mask allows us to extract one binary image of the heart for each spatial slice location, regardless of temporal location. In the initialization stage, an 'initial contour' is matched to each of these masked images by affine transform, adapting size, location, aspect ratio and orientation. Initial contours may be acquired from a predefined model. In the absence of such a model, ellipses were successfully applied as generic initial contours. For this stage, 2D contours were used; however, extensions to 3D are straightforward. The affine-adapted contours are then considered as initialization for a multi-step active contour model for the accurate extraction of the heart walls: the contours are deformed according to the masked binary images, further refined on the temporal mean images for each spatial slice location, and finally the outer heart walls are tracked over time on the actual images at each spatial slice location. This approach, making use of existing active contour models, yields an efficient and robust method for an exact extraction of the heart contours.
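The localization stage lends itself to a very short sketch: compute the temporal variance image for one slice location, threshold it, and keep the largest object as the confidence mask. The mean-plus-two-sigma threshold below is an assumption.

```python
# Hypothetical sketch of the 'temporal variance image' heart localizer.
import numpy as np
from scipy import ndimage

def heart_confidence_mask(slices_over_time: np.ndarray) -> np.ndarray:
    """slices_over_time: (n_phases, H, W) images of one slice location."""
    variance = slices_over_time.astype(float).var(axis=0)  # temporal variance
    binary = variance > variance.mean() + 2.0 * variance.std()
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))           # largest object
```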
3D tomographic reconstruction using geometrical models
Author(s):
Xavier L. Battle;
Gregory S. Cunningham;
Kenneth M. Hanson
We address the issue of reconstructing an object of constant interior density in the context of 3D tomography where there is prior knowledge about the unknown shape. We explore the direct estimation of the parameters of a chosen geometrical model from a set of radiographic measurements, rather than performing operations (segmentation for example) on a reconstructed volume. The inverse problem is posed in the Bayesian framework. A triangulated surface describes the unknown shape and the reconstruction is computed with a maximum a posteriori (MAP) estimate. The adjoint differentiation technique computes the derivatives needed for the optimization of the model parameters. We demonstrate the usefulness of the approach and emphasize the techniques of designing forward and adjoint codes. We use the system response of the University of Arizona Fast SPECT imager to illustrate this method by reconstructing the shape of a heart phantom.
Dynamic reconstruction of 3D coronary arterial trees based on a sequence of biplane angiograms
Author(s):
Shiuh-Yung James Chen;
John D. Carroll
To facilitate the evaluation of coronary artery and regional cardiac wall motion in three dimensions (3D), a method has been developed for 3D reconstruction of the coronary arterial tree from two sequences of routine angiograms throughout the cardiac cycle acquired at arbitrary angles without using calibration objects. The proposed method consists of four major steps: (1) segmentation of vessel centerlines and feature extraction including bifurcation points, vessel diameters, vessel directional vectors, and vessel hierarchy in the two sequences of coronary angiograms, (2) determination of the optimal transformation in terms of a rotation matrix R and a translation vector t for every pair of angiograms acquired at two different views based on the identified bifurcation points and vessel directional vectors, (3) calculation of 3D coronary arterial trees throughout the cardiac cycle based on the calculated transformations, identified vessel centerlines, and diameters, and (4) motion analysis and dynamic rendering of the reconstructed 3D coronary arterial trees throughout the cardiac cycle.
Model-based image reconstruction from time-resolved diffusion data
Author(s):
Suhail S. Saquib;
Kenneth M. Hanson;
Gregory S. Cunningham
This paper addresses the issue of reconstructing the unknown field of absorption and scattering coefficients from time-resolved measurements of diffused light in a computationally efficient manner. The intended application is optical tomography, which has generated considerable interest in recent times. The inverse problem is posed in the Bayesian framework. The maximum a posteriori (MAP) estimate is used to compute the reconstruction. We use an edge-preserving generalized Gaussian Markov random field to model the unknown image. The diffusion model used for the measurements is solved forward in time using a finite-difference approach known as the alternating-directions implicit method. This method requires the inversion of a tridiagonal matrix at each time step and is therefore of O(N) complexity, where N is the dimensionality of the image. Adjoint differentiation is used to compute the sensitivity of the measurements with respect to the unknown image. The novelty of our method lies in the computation of the sensitivity, since we can achieve it in O(N) time as opposed to the O(N²) time required by the perturbation approach. We present results using simulated data to show that the proposed method yields superior quality reconstructions with substantial savings in computation.
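The O(N) cost per time step comes from the tridiagonal solves in the ADI scheme; each one is the classic Thomas algorithm, sketched here for illustration (not the authors' code):

```python
# Thomas algorithm: O(N) solve of a tridiagonal system. a, b, c are the
# sub-, main-, and super-diagonals (a[0] and c[-1] unused), d the RHS.
import numpy as np

def thomas(a, b, c, d):
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```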
PET image reconstruction incorporating anatomical information using segmented regression
Author(s):
Ching-Han Lance Hsu;
Richard M. Leahy
We describe a Bayesian PET reconstruction method that incorporates anatomical information extracted from a partial volume segmentation of a co-registered magnetic resonance (MR) image. For the purposes of this paper we concentrate on imaging the brain, which we assume can be partitioned into four tissue classes: gray matter, white matter, cerebral spinal fluid, and partial volume. The PET image is then modeled as a piece-wise smooth function through a Gibbs prior. Within homogeneous tissue regions the image intensity is assumed to be governed by a thin plate energy function. Rather than use the anatomical information to guide the formation of a binary process representing region boundaries, we use the segmented anatomical image as a template to customize the Gibbs energy in such a way that we apply thin-plate smoothing within homogeneous tissue regions while enforcing zeroth-order continuity as we transition from homogeneous to partial volume regions. Discontinuities in intensity are allowed only at transitions between two different homogeneous regions. We refer to this model as segmented thin-plate regression with controlled continuities. We present the results of a detailed computer simulated phantom study in which partial volume effects are explicitly modeled. Results indicate that we obtain superior region of interest quantitation using this approach in comparison to a 2D partial volume correction method that has previously been proposed for quantitation using filtered backprojection images.
Adaptive edge enhancement based on image segmentation
Author(s):
Jiang Hsieh
Many clinical applications, e.g., internal auditory canal (IAC) studies, demand that the CT scanner provide high in-plane spatial resolution. Currently, these studies are performed by reconstructing the images with a high resolution reconstruction kernel. The cutoff frequency of the kernel is set to the limit of the Nyquist frequency, assuming perfect double sampling per detector cell can be achieved. Because of the fan-beam geometry, patient motion, and the inherent limitations of third-generation CT sampling, the Nyquist criteria are not always strictly observed. As a result, many clinical images are degraded by aliasing artifacts. In many cases, the fine structure of the anatomy and important pathologies are marred by aliasing streaks, which render the image unusable. In this paper, we analyze the root cause of the aliasing artifact and present an adaptive edge enhancement algorithm that enhances the fine structures and suppresses aliasing artifacts and noise in the IAC images. In the proposed scheme, a high resolution CT image is first reconstructed with a modified reconstruction kernel, H1(f), which has a frequency response and a cutoff frequency just below the point where significant aliasing artifact can be observed. The reconstructed image is then segmented into two classes (E: enhancement and S: suppression) based on CT numbers as well as texture. Adaptive edge enhancement is performed on the E class and adaptive noise suppression is performed on the S class. Various phantom and clinical studies were conducted. For each case, three images were generated: CT images reconstructed with the conventional high resolution kernel, images reconstructed with the modified H1 kernel, and images produced by the adaptive enhancement algorithm. The results were reviewed by experts. The conclusion has been fairly consistent that the adaptive edge enhanced images are as sharp as the conventional high resolution CT images, with much reduced noise and aliasing artifacts. Since the segmentation relies on CT numbers as well as the texture in the image, the method is quite robust.
Autoradiographic-based phantoms for emission tomography
Author(s):
Gene R. Gindi;
Doug Dougherty;
Ing-Tsung Hsiao;
Anand Rangarajan
In the development of reconstruction algorithms in emission computed tomography (ECT), digital phantoms designed to mimic the presumed spatial distribution of radionuclide activity in a human are extensively used. Given the low spatial resolution in ECT, it is usually presumed that a crude phantom, usually with a constant activity level within an anatomically derived region, is sufficiently realistic for testing. Here, we propose that phantoms may be improved by assigning biologically realistic patterns of activity in more precisely delineated regions. Animal autoradiography is proposed as a source of realistic activity and anatomy. We discuss the basics of radiopharmaceutical autoradiography and aspects of using such data for a brain phantom. A few crude simulations with brain phantoms derived from animal data are shown.
Modeling the population covariance matrices of block-iterative expectation-maximization reconstructed images
Author(s):
Edward J. Soares;
Charles L. Byrne;
Tin-Su Pan;
Stephen J. Glick;
Michael A. King
We have analytically derived expressions which, for high signal-to-noise ratio (SNR), approximate the population mean images and covariance matrices of both ordered-subset expectation-maximization (OS-EM) and rescaled block-iterative expectation-maximization (RBI-EM) reconstructed images, using a theoretical-formulation strategy similar to that previously outlined for maximum-likelihood expectation-maximization (ML-EM). The approximate population mean images and approximate population covariance matrices were calculated at various iteration numbers for the two reconstruction methods. The theoretical formulations were verified by calculating the sample mean images and sample covariance matrices for the two reconstruction methods, at the same iteration numbers, using over 8000 noisy images per method. Subsequently, we compared the approximate population and sample mean images, the approximate population and sample variance images, as well as the approximate population and sample local covariance images for a pixel near the center of a uniformly emitting disk object, for each iteration number and reconstruction method, respectively. The results demonstrated that for each method and iteration number, the image produced by reconstructing from noise-free data would be equal to the population mean image to a very close approximation. In addition, the theoretically calculated variance and local covariance images closely matched their respective sample counterparts. Thus the theoretical formulation is an accurate way to predict the population first- and second-order statistics of both OS-EM and RBI-EM reconstructed images, for high SNR.
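For orientation, one full OS-EM pass over the data can be sketched as below, with a dense system matrix for clarity; the subset scheme and the numerical guards are assumptions, and RBI-EM differs in how each block update is rescaled.

```python
# Hypothetical sketch of one OS-EM pass: the multiplicative ML-EM update
# applied subset by subset. A: (n_proj, n_pix) system matrix, y: counts.
import numpy as np

def os_em_iteration(x, A, y, subsets):
    for rows in subsets:                       # e.g. interleaved projections
        As, ys = A[rows], y[rows]
        sens = As.sum(axis=0)                  # subset sensitivity image
        proj = As @ x                          # forward projection
        ratio = ys / np.maximum(proj, 1e-12)   # guard against divide-by-zero
        x = x * (As.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```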
Adaptive feature analysis of false positives for computerized detection of lung nodules in digital chest images
Author(s):
Xin-Wei Xu;
Heber MacMahon;
Maryellen Lissak Giger;
Kunio Doi
To assist radiologists in diagnosing early lung cancer, we have developed a computer-aided diagnosis (CAD) scheme for automated detection of lung nodules in digital chest images. The database used for this study consisted of two hundred PA chest radiographs, including 100 normals and 100 abnormals. Our CAD scheme has four basic steps, namely, (1) preprocessing, (2) identification of initial nodule candidates (rule-based test #1), (3) grouping of initial nodule candidates into six groups, and (4) elimination of false positives (rule-based tests #2-#5 and an artificial neural network). Our CAD scheme achieves, on average, a sensitivity of 70% with 1.7 false positives per chest image. We believe that this CAD scheme with its current performance is ready for clinical evaluation.
Semisupervised segmentation of MRI stroke studies
Author(s):
Hamid Soltanian-Zadeh;
Joe P. Windham;
Linda Robbins
Fast, accurate, and reproducible image segmentation is vital to the diagnosis, treatment, and evaluation of many medical situations. We present the development and application of a semi-supervised method for segmenting normal and abnormal brain tissues from magnetic resonance images (MRI) of stroke patients. The method does not require manual drawing of the tissue boundaries. It is therefore faster and more reproducible than conventional methods. The steps of the new method are as follows: (1) T2- and T1-weighted MR images are co-registered using a head-and-hat approach. (2) Intracranial brain volume is segmented from the skull, scalp, and background using a multi-resolution edge tracking algorithm. (3) Additive noise is suppressed (the image is restored) using a non-linear edge-preserving filter which preserves partial volume information on average. (4) Image nonuniformities are corrected using a modified lowpass filtering approach. (5) The resulting images are segmented using a self-organizing data analysis technique which is similar in principle to K-means clustering but includes a set of additional heuristic merging and splitting procedures to generate a meaningful segmentation. (6) Segmented regions are labeled white matter, gray matter, CSF, partial volumes of normal tissues, zones of stroke, or partial volumes between stroke and normal tissues. (7) Previous steps are repeated for each slice of the brain and the volume of each tissue type is estimated from the results. Details and significance of each step are explained. Experimental results using a simulation, a phantom, and selected clinical cases are presented.
Fractional dimension filtering for multiscale lung nodule detection
Author(s):
Fei Mao;
Wei Qian;
Laurence P. Clarke
Lung nodule (LN) detection using computer-assisted diagnosis (CAD) methodology in chest radiographs is generally composed of two steps, i.e., suspicious area (SA) location and differentiation of 'true' nodules from 'false' nodules among located SAs. The first step relies on computer image processing techniques, such as image enhancement and segmentation methods. The second step uses pattern classification techniques, such as statistical classifiers and artificial neural networks (ANN). This paper addresses only the first step of CAD lung nodule detection. We have designed a novel fractional dimension filtering (FDF) algorithm for the extraction of lung nodule patterns, which generally appear as circular bright areas in the chest radiograph. The FDF provides improved performance in discriminating circular patterns from other patterns in the presence of overlapping structures. A multiscale analysis has also been introduced to locate nodules at multiple scales and eliminate false positives. A computed ROC analysis has been performed to show the improvement in the discriminating performance of the FDF using simulated patterns. A computed FROC analysis has also been conducted to analyze the performance of the proposed location scheme with and without the multiscale analysis.
Channelized detection filters for detecting tumors in nuclear medical images
Author(s):
Derek A. Hutton;
Robin N. Strickland
The problem of detecting known objects of known location in the presence of stationary noise is well understood, the solution being the prewhitening matched filter. Detecting tumors in nuclear medical images presents a more challenging problem: the object is a mass whose shape and location are not known exactly, and the background anatomy is nonstationary. This paper addresses the latter problem by using simulated images to train a channelized detection algorithm. We show that the detector converges to the prewhitening matched filter provided the signal is known and the noise is stationary. We report on the manner in which the detector departs from the matched filter under more realistic conditions. Using the detectability index d_a as the performance measure, this method is tested on simulated tumors in simulated anatomical backgrounds. Results show that for certain channel configurations, the channelized detection filters perform equivalently to the prewhitening matched filter. High detectability is maintained using a limited number of channels and in cases in which the tumor size varies.
Finite-sample effects and resampling plans: applications to linear classifiers in computer-aided diagnosis
Author(s):
Robert F. Wagner;
Heang-Ping Chan;
Berkman Sahiner;
Nicholas Petrick;
Joseph T. Mossoba
Show Abstract
This work provides an application and extension of the analysis of the effect of finite-sample training and test sets on the bias and variance of the classical discriminants as given by Fukunaga. The extension includes new results for the area under the ROC curve, Az. An upper bound on Az is provided by the so-called resubstitution method in which the classifier is trained and tested on the same patients; a lower bound is provided by the hold-out method in which the patient pool is partitioned into trainers and testers. Both methods exhibit a bias in Az with a linear dependence on the inverse of the number of patients Nt used to train the classifier; this leads to the possibility of obtaining an unbiased estimate of the infinite-population performance by a simple regression procedure. We examine the uncertainties in the resulting estimates. Whereas the bias of classifier performance is determined by the finite size of the training sample, the variance is dominated by the finite size of the test sample. This variance is approximately given by the simple result for an equivalent binomial process. A number of applications to the linear classifier are presented in this paper. More general applications, including the quadratic classifier and some elementary neural-network classifiers, are presented in a companion paper.
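The regression procedure suggested above can be sketched in a few lines: fit hold-out Az values against 1/Nt and read off the intercept as the infinite-training-sample estimate. The numbers below are invented for illustration only.

    import numpy as np

    n_train = np.array([50, 100, 200, 400])       # training sizes (illustrative)
    az_hold = np.array([0.78, 0.82, 0.84, 0.85])  # hold-out Az at each size

    # Az is modeled as linear in 1/Nt; the intercept estimates the
    # infinite-population performance.
    slope, intercept = np.polyfit(1.0 / n_train, az_hold, 1)
    print("estimated infinite-population Az:", intercept)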
Mammographic mass detection by stochastic modeling and a multimodular neural network
Author(s):
Huai Li;
Shih-Chung Benedict Lo;
Yue Joseph Wang;
Matthew T. Freedman M.D.;
Seong Ki Mun
Show Abstract
In this paper, we have developed a combined method utilizing morphological operations, finite generalized Gaussian mixture (FGGM) modeling, and a contextual Bayesian relaxation labeling technique (CBRL) to enhance and extract suspicious masses. A feature space is constructed based on multiple features extracted from the regions of interest (ROIs). Finally, a multi-modular neural network (MMNN) is employed to distinguish true masses from non-masses. We have applied these methods to our mammogram database, in which the true masses were identified by a radiologist with biopsy reports. The results demonstrated that all the areas of suspicious masses in the mammograms were extracted in the prescan step using the proposed segmentation procedure. We found that 6 - 15 suspected masses per mammogram were detected and required further evaluation. We also found that the MMNN can reduce the number of suspicious masses, with a sensitivity of 84% at 1 - 2 false positives (FPs) per mammogram on the database containing 46 mammograms (23 of them with biopsy-proven masses). In conclusion, the experimental results indicate that morphological filtering combined with FGGM model-based segmentation is an effective way to extract mammographic suspicious mass patterns. Compared with conventional neural networks, the probabilistic MMNN can lead to a more efficient learning algorithm and can provide more understanding in the analysis of the distribution patterns of multiple features extracted from the suspicious masses.
Characterization of masses on mammograms: significance of using the rubber band straightening transform
Author(s):
Berkman Sahiner;
Heang-Ping Chan;
Nicholas Petrick;
Mitchell M. Goodsitt;
Mark A. Helvie M.D.
Show Abstract
The rubber-band straightening transform (RBST) was developed for the characterization of mammographic masses as malignant or benign. The RBST maps a region surrounding a segmented mass on a mammogram onto the Cartesian plane. In this study, the effectiveness of texture features extracted from the RBST images was compared with the effectiveness of those extracted from the original images. Texture features were extracted from (1) a region of interest (ROI) centered at the mass; (2) a 40-pixel-wide gray-scale region surrounding the perimeter of the mass; and (3) the RBST image. Two types of texture features were extracted: spatial gray level dependence (SGLD) features and run-length statistics (RLS) features. Linear discriminant analysis and leave-one-case-out methods were used for classification in the individual or combined feature spaces. The classification accuracy was evaluated by receiver operating characteristic (ROC) analysis and the area Az under the ROC curve. CLABROC analysis was used to estimate the statistical significance of the difference between features extracted using the three different approaches. On a database of 255 ROIs containing biopsy-proven masses, the Az value was 0.92 when combined SGLD and RLS features extracted from RBST images were used for classification. In comparison, the combined texture features extracted from the entire ROIs and the mass perimeter regions resulted in Az values of 0.83 and 0.85, respectively. The improvement in Az obtained by using RBST images was statistically significant (p less than 0.05). Similar levels of significance were observed when the classification was performed in the SGLD feature space alone or the RLS feature space alone.
Image feature analysis for classification of microcalcifications in digital mammography: neural networks and genetic algorithms
Author(s):
Chris Yuzheng Wu;
Osamu Tsujii;
Matthew T. Freedman M.D.;
Seong Ki Mun
Show Abstract
We have developed an image-feature-based algorithm to classify microcalcifications associated with benign and malignant processes in digital mammograms for the diagnosis of breast cancer. The feature-based algorithm is an alternative to image-based methods for the classification of microcalcifications in digital mammograms. Microcalcifications can be characterized by a number of quantitative variables describing the underlying key features of a suspicious region, such as the size, shape, and number of microcalcifications in a cluster. These features are calculated by an automated extraction scheme for each of the selected regions. The features are then used as input to a backpropagation neural network to make a decision regarding the probability of malignancy of a selected region. The initial selection of the image feature set is a rough estimate that may include redundant and non-discriminant features. A genetic algorithm is employed to select an optimal image feature set from the initial set and an optimized structure of the neural network for the optimal input features. The performance of the neural network is compared with that of radiologists in classifying clusters of microcalcifications. Two sets of mammogram cases are used in this study: the first from the digital mammography database of the Mammographic Image Analysis Society (MIAS), the second from cases collected at Georgetown University Medical Center (GUMC). The diagnostic truth of the cases has been verified by biopsy. The performance of the neural network system is evaluated by ROC analysis. The combination of neural network and genetic algorithms improves on the performance of our previous TRBF neural network; the neural network was able to classify benign and malignant microcalcifications at a level that compares favorably with experienced radiologists. Genetic algorithms are an effective tool for selecting the optimal input features and structure of a backpropagation neural network, and the resulting system can help radiologists reduce the number of benign biopsies in clinical applications.
Statistical modeling of lines and structures in mammograms
Author(s):
Reyer Zwiggelaar;
Tim C. Parr;
Caroline R. M. Boggis;
Susan M. Astley;
Christopher J. Taylor
Show Abstract
Computer-aided prompting systems require the reliable detection of a variety of mammographic signs of cancer. The emphasis of the work described in this paper is the correct classification of linear structures in mammograms, especially those associated with spiculated lesions. The detection of spiculated lesions can be based on the detection of the radiating pattern of linear structures associated with these lesions, and the accuracy of automated stellate-lesion detection algorithms can be improved by differentiating between the linear structures associated with lesions and those occurring in normal tissue. Statistical modeling based on principal component analysis (PCA) has been developed to describe the cross-sectional profiles of linear structures, the motivation being that the shapes of intensity profiles may be characteristic of the type of structure. For the detection of spiculated lesions the main interest is to classify the linear structures into two classes, spicules and non-spicules. PCA models have been applied to whole mammograms to determine the probability that a particular type of linear structure (e.g. a spicule) is present at any given location in the image.
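A minimal sketch of the profile modeling step, assuming the cross-sectional intensity profiles have already been sampled into equal-length rows; the SVD-based PCA here is generic, not the authors' exact model.

    import numpy as np

    def profile_pca(profiles, n_modes=5):
        # profiles: (n_samples, profile_length) cross-sections of linear
        # structures; returns the mean profile, the leading shape modes,
        # and per-profile coefficients usable as classifier features.
        mean = profiles.mean(axis=0)
        u, s, vt = np.linalg.svd(profiles - mean, full_matrices=False)
        modes = vt[:n_modes]                    # principal profile shapes
        coeffs = (profiles - mean) @ modes.T    # shape coefficients
        return mean, modes, coeffs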
Unitary ranking in the automated detection of mammographic masses
Author(s):
Nicholas Petrick;
Heang-Ping Chan;
Berkman Sahiner;
Mark A. Helvie M.D.;
Mitchell M. Goodsitt
Show Abstract
We are investigating the utility of a unitary ranking method for the classification of masses and false-positives (FPs) in an automated detection algorithm. In unitary ranking, the scores within individual images are ordered from maximum to minimum. A threshold is then applied to this ordering, or ranking, to determine a final set of potential mass regions. A more commonly used approach is to rank the scores from all the images together and then apply a single threshold to the entire group; this method will be referred to as group ranking. In this study, we compared the free-response receiver operating characteristic (FROC) performance of unitary ranking with group ranking. The results are based on the classification of mammographic regions automatically extracted from 255 digitized mammograms. They indicate that unitary ranking reduces the number of false positive (FP) detections over the group ranking method. In particular, at a 95% true positive detection fraction, unitary ranking reduced the FPs by 1% (from 10.1 FPs per image to 10.0), 11% (from 6.3 to 5.6) and 26% (from 3.1 to 2.3) for sets of regions having FP to true positive (TP) ratios of 24:1, 16:1 and 8:1, respectively. This preliminary study indicates that unitary ranking may be a useful scoring technique in the classification of regions on digital mammograms.
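The two ranking schemes can be sketched directly; scores_per_image, max_rank, and threshold are illustrative names, and each function returns the indices of the regions retained in each image.

    import numpy as np

    def unitary_rank_filter(scores_per_image, max_rank):
        # Keep, in each image, only the max_rank highest-scoring regions.
        kept = []
        for scores in scores_per_image:       # one score array per image
            order = np.argsort(scores)[::-1]  # rank within this image
            kept.append(set(order[:max_rank]))
        return kept

    def group_rank_filter(scores_per_image, threshold):
        # Apply a single score threshold across all images pooled together.
        return [set(np.flatnonzero(s >= threshold)) for s in scores_per_image]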
Proposed alternative to the Talairach localization system
Author(s):
Tamas Sandor;
Vladimir I. Valtchinov;
Marilyn Albert;
Ferenc A. Jolesz
Show Abstract
It was shown quantitatively that different head shapes can affect the location of selected anatomic structures of the brain in the Talairach proportional system. Using the MR scans of 20 subjects the variability of localization accuracy across the image plane was assessed. A new proportional system using polar coordinates is proposed that eliminates those shortcomings of the Talairach atlas which are associated with the variance of brain shape across patients.
New 3D Bolton standards: coregistration of biplane x rays and 3D CT
Author(s):
David Dean;
Krishna Subramanyan;
Eun-Kyung Kim
Show Abstract
The Bolton Standards 'normative' cohort (16 males, 16 females) has been invited back to the Bolton-Brush Growth Study Center for new biorthogonal plain-film head x-rays and 3D (three-dimensional) head CT scans. A set of 29 3D landmarks was identified on both the biplane head films and the 3D CT images, and the current 3D CT image is superimposed onto the landmarks collected from the current biplane head films. Three post-doctoral fellows have collected 37 3D landmarks from the Bolton Standards' 40- to 70-year-old biplane head films, which were captured annually during the growing period (ages 3 - 18). Using 29 of these landmarks, the current 3D CT image is next warped (via thin-plate spline) to the landmarks taken from each participant's 18th-year biplane head films, a process that is successively reiterated back to age 3. This process is demonstrated here for one of the Bolton Standards. The outer skull surfaces will be extracted from each warped 3D CT image and an average will be generated for each age/sex group. The resulting longitudinal series of average 'normative' bony skull surface images may be useful for craniofacial patient diagnosis, treatment planning, stereotactic procedures, and outcomes assessment.
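A plausible sketch of the thin-plate-spline warping step using SciPy's RBFInterpolator (available in recent SciPy releases), fitted on corresponding 3D landmarks; this illustrates the class of transform, not the authors' software.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def tps_warp(src_landmarks, dst_landmarks, points):
        # Fit a thin-plate-spline map on corresponding (n, 3) landmark sets,
        # then apply it to arbitrary points, e.g. CT voxel coordinates.
        tps = RBFInterpolator(src_landmarks, dst_landmarks,
                              kernel='thin_plate_spline')
        return tps(points)

    # Usage sketch with random stand-in landmarks:
    # src = np.random.rand(29, 3); dst = src + 0.05 * np.random.rand(29, 3)
    # warped = tps_warp(src, dst, np.random.rand(1000, 3))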
Effect of spatial normalization on analysis of functional data
Author(s):
James C. Gee;
David C. Alsop;
Geoffrey K. Aguirre
Show Abstract
Conventional analysis of functional data often involves a normalization step in which the data are spatially aligned so that measurements can be made across or between studies. Whether to enhance the signal-to-noise ratio or to detect significant deviations in activation from normal, the method used to register the underlying anatomies clearly impacts the viability of the analysis. Nevertheless, it is common practice to infer only homogeneous transformations, in which all parts of the image volume undergo the same mapping. To detect subtle effects or to extend the analysis to anatomies that exhibit considerable morphological variation, higher-dimensional mappings that allow more accurate alignment will be crucial. We describe a Bayesian volumetric warping approach to the normalization problem, which matches local image features between MRI brain volumes, compare its performance with a standard method (SPM'96), and contrast its effect on the analysis of a set of functional MRI studies with that obtained with a 9-parameter affine registration.
Registration of head volume images using implantable fiducial markers
Author(s):
Calvin R. Maurer Jr.;
J. Michael Fitzpatrick;
Matthew Yang Wang;
Robert L. Galloway Jr.;
Robert J. Maciunas;
George S. Allen
Show Abstract
In this paper, we describe an extrinsic point-based, interactive image-guided neurosurgical system designed at Vanderbilt University as part of a collaborative effort among the departments of neurological surgery, computer science, and biomedical engineering. Multimodal image-to- image and image-to-physical registration is accomplished using implantable markers. Physical space tracking is accomplished with optical triangulation. We investigate the theoretical accuracy of point-based registration using numerical simulations, the experimental accuracy of our system using data obtained with a phantom, and the clinical accuracy of our system using data acquired in a prospective clinical trial by six neurosurgeons at four medical centers from 158 patients undergoing craniotomies to resect cerebral lesions. We can determine the position of our markers with an error of approximately 0.4 mm in x-ray computed tomography (CT) and magnetic resonance (MR) images and 0.3 mm in physical space. The theoretical registration error using four such markers distributed around the head in a configuration that is clinically practical is approximately 0.5 - 0.6 mm. The mean CT-physical registration error for the phantom experiments is 0.5 mm and for the clinical data obtained with rigid head fixation during scanning is 0.7 mm. The mean CT-MR registration error for the clinical data obtained without rigid head fixation during scanning is 1.4 mm, which is the highest mean error that we observed. These theoretical and experimental findings indicate that this system is an accurate navigational aid that can provide real-time feedback to the surgeon about anatomical structures encountered in the surgical field.
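Point-based rigid registration of corresponding fiducial positions is classically solved in closed form with an SVD; the sketch below is that generic solution (with a guard against reflections), not the Vanderbilt system's code, and fre denotes the root-mean-square fiducial registration error.

    import numpy as np

    def rigid_register(fixed, moving):
        # Least-squares rigid (rotation + translation) fit between two
        # corresponding (n, 3) fiducial point sets.
        cf, cm = fixed.mean(axis=0), moving.mean(axis=0)
        H = (moving - cm).T @ (fixed - cf)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:          # reject an improper (reflected) fit
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = cf - R @ cm
        resid = fixed - (moving @ R.T + t)
        fre = np.sqrt(np.mean(np.sum(resid**2, axis=1)))
        return R, t, fre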
Partial-volume effect on marker localization in medical density images
Author(s):
Matthew Yang Wang;
Calvin R. Maurer Jr.;
J. Michael Fitzpatrick
Show Abstract
Registration of medical images to each other and to physical space for the purposes of surgical planning and surgical navigation can be accomplished using externally attached fiducial markers. The accuracy of fiducial localization, that is, the accuracy of estimating the position of the marker's centroid, is extremely important because marker-based registration accuracy is proportional to localization accuracy. The traditional method of calculating the marker centroid using intensity weighting contains a serious logic flaw. This paper introduces a novel and efficient method for correcting this flaw. Theoretical analysis, computer simulation, and analysis of clinical images demonstrate the importance of this correction.
Analysis of 3D motion of in-vivo pacemaker leads
Author(s):
Kenneth R. Hoffmann;
Benjamin B. Williams;
Jacqueline Esthappan;
Shiuh-Yung James Chen;
Martin Fiebich;
John D. Carroll;
Hajime Harauchi;
Vince Doerr;
G. Neal Kay;
Allen Eberhardt;
Mary Overland
Show Abstract
In vivo analyses of pacemaker lead motion during the cardiac cycle have become important due to incidences of failure of some of the components. For the calculation and evaluation of in vivo stresses in pacemaker leads, the 3D motion of the lead must be determined. To accomplish this, we have developed a technique for calculation of the overall and relative 3D position, and thereby the 3D motion, of in vivo pacemaker leads through the cardiac cycle. Biplane image sequences of patients with pacemakers were acquired for at least two cardiac cycles. After the patient acquisitions, biplane images of a calibration phantom were obtained, and the biplane imaging geometries were calculated from these phantom images. Points on the electrodes and the lead centerlines were indicated manually in all acquired images, and the indicated points along the leads were then fit using a cubic spline. In each projection, the cumulative arclength along the centerlines in two temporally adjacent images was used to identify corresponding points along the centerlines. To overcome the non-synchronicity of the biplane image acquisition, temporal interpolation was performed using these corresponding points based on a linear scheme. For each time point, corresponding points along the lead centerlines in the pairs of biplane images were identified using epipolar lines. The 3D lead centerlines were calculated from the calculated imaging geometries and the corresponding image points along the lead centerlines. From these data, the 3D lead motion and the variations of the lead position with time were calculated and evaluated throughout the cardiac cycle. The reproducibility of the indicated lead centerlines was approximately 0.3 mm. The precision of the calculated rotation matrix and translation vector defining the imaging geometry was approximately 2 mm. 3D positions were reproducible to within 2 mm, and relative positional errors were less than 0.3 mm. Lead motion correlated strongly with the phases of the cardiac cycle. Our results indicate that complex motions of in vivo pacemaker leads can be precisely determined. Thus, we believe that this technique will provide precise 3D motion and shapes on which to base subsequent stress analysis of pacemaker lead components.
3D registration of surfaces for change detection in medical images
Author(s):
Elizabeth Fisher;
Paul F. van der Stelt;
Stanley M. Dunn
Show Abstract
Spatial registration of data sets is essential for quantifying changes that take place over time in cases where the position of a patient with respect to the sensor has been altered. Changes within the region of interest can be problematic for automatic methods of registration. This research addresses the problem of automatic 3D registration of surfaces derived from serial, single-modality images for the purpose of quantifying changes over time. The registration algorithm utilizes motion-invariant, curvature-based geometric properties to derive an approximation to an initial rigid transformation to align two image sets. Following the initial registration, changed portions of the surface are detected and excluded before refining the transformation parameters. The performance of the algorithm was tested using simulation experiments. To quantitatively assess the registration, random noise at various levels, known rigid motion transformations, and analytically defined volume changes were applied to the initial surface data acquired from models of teeth. These simulation experiments demonstrated that the calculated transformation parameters were accurate to within 1.2 percent of the total applied rotation and 2.9 percent of the total applied translation, even at the highest applied noise levels and simulated wear values.
Contour-model-guided nonlinear deformation model for intersubject image registration
Author(s):
Wen-Shiang Vincent Shih;
Wei-Chung Lin;
Chin-Tu Chen
Show Abstract
An automated method is proposed for anatomic standardization that can elastically map one subject's MRI image to a standard reference MRI image to enable inter-subject and cross-group studies. In this method, linear transformations based on bicommissural stereotaxy are first applied to grossly align the input image to the reference image. Then, a generalized Hough transform is applied to find the candidate corresponding regions in the input image based on the contour information from the pre-segmented reference image. Next, an active contour model initialized with the result from the generalized Hough transform is employed to refine the contour description of the input image. Based on the contour correspondence established in the previous steps, a non-linear transformation is determined using the proposed weighted local reference coordinate systems to warp the input image. In this method, the geometric correspondence established by contour matching is used to control the warping, and the actual image values at registered coordinates need not be similar. We tested this algorithm on various synthetic and real images for inter-subject registration of MR images.
Registration of functional magnetic resonance imagery using mutual information
Author(s):
Delia P. McGarry;
Theodore R. Jackson;
Michelle B. Plantec;
Neal F. Kassell M.D.;
J. Hunter Downs III
Show Abstract
Accurate statistical correlation of brain activation in functional magnetic resonance imaging (fMRI) studies depends on the reduction of artifacts induced by patient motion. We have addressed this problem in two ways. First, we have eliminated gross movement by developing an immobilization mask. Second, we have implemented an image registration procedure based on mutual information, which is used to correct the remaining misalignments due to patient motion. We have chosen maximization of mutual information because it is applicable to a broad range of image registration problems: it requires no segmentation, feature extraction, a priori information, or operator-assisted extractions. Initial results, as applied to fMRI data, are also presented. Our results indicate that we have reduced the motion artifacts present in our original data sets with sub-voxel accuracy.
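A minimal histogram-based sketch of the mutual information measure being maximized, for two equally shaped volumes; the bin count is an illustrative choice.

    import numpy as np

    def mutual_information(a, b, bins=32):
        # Histogram estimate of the mutual information between the
        # intensities of two images/volumes of identical shape.
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pab = hist / hist.sum()                  # joint probability
        pa = pab.sum(axis=1, keepdims=True)      # marginals
        pb = pab.sum(axis=0, keepdims=True)
        nz = pab > 0                             # avoid log(0)
        return np.sum(pab[nz] * np.log(pab[nz] / (pa @ pb)[nz]))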
Stenosis parameter assessment from contrast medium tracking in cineangiography with an optical flow method
Author(s):
Bernard Imbert;
Jean Meunier;
Rosaire Mongrain;
Gilles Hudon;
Michel J. Bertrand
Show Abstract
In this paper, we present an optical flow method to infer the blood flow in arteries by tracking the contrast medium in angiography. In our approach, the velocity field is constrained to be parabolic to take into account this particular property of laminar blood flows. With this method, we obtain several parameters, both hemodynamic and geometric: the artery radius, the maximum velocity, the blood flow, the centerline position of the artery, and other related quantities. Tests of the algorithm were conducted on simulated cineangiographic images of straight and stenotic vessels and show errors ranging from 1 percent for straight vessels up to 10 percent for short stenoses. Preliminary results with femoral arteries are also very encouraging.
Core atoms and the spectra of scale
Author(s):
George D. Stetten;
Roxanne N. Landesman;
Stephen M. Pizer
Show Abstract
Our purpose is to characterize figures in medical images as a first step toward finding and measuring anatomical structures. For clinical use, we require complete automation and reasonably short computation times. We do not require that a sharp boundary be determined, only that the structure be identified and measurements taken of its size and shape. Our method involves the detection and linking of locations within an image that possess high 'medialness', i.e. locations that are equidistant from two opposing boundaries. The method produces populations of core atoms, each core atom consisting of a center point and the two associated boundary points. We can cluster core atoms by the proximity of their centers and by the similarity of their size, and we generate statistical signatures of clusters to identify the underlying figure. In particular, we compute three spectra vs. scale for a cluster: (1) magnitude, the number of core atoms; (2) eccentricity, their aggregate directional asymmetry; and (3) orientation, their aggregate direction. We illustrate the production of these spectra for various graphical test images, demonstrating translational, rotational, and scale invariance of the spectra, as well as specificity between targets. We observe the effects of image noise on the spectra and show how clustering reduces these effects. Early results suggest that the scale spectra of core atoms provide an efficient and robust method for identifying figures, suitable for practical application in medical image analysis.
Circumferential traversal techniques for characterizing shapes in digital images
Author(s):
Anthony John Maeder
Show Abstract
Characterization of region shapes in digital images is a common requirement in medical image processing. This paper describes an approach based on successive traversals around the region boundary, enabling a sequence of related shape information at different scales to be constructed. The approach is useful in that it allows several different shape characteristics to be determined using the same set of data. The approach and its implementation are described, and an example of its application to a problem in biomedical cell discrimination is considered and compared with results from more conventional shape characterization techniques.
Automatic exploration and morphometry/morphology assessment of medical image databases
Author(s):
Alexandre Guimond;
Gerard Subsol;
Jean Meunier;
Jean-Philippe Thirion
Show Abstract
The design of representative models of the human body is of great interest to medical doctors. Qualitative information about the characteristics of the brain is widely available, but due to the volume of information that needs to be analyzed and the complexity of its structure, it is rarely quantified according to a standard model. To address this problem, we propose in this paper an automatic method to retrieve corresponding structures from a database of medical images. Being local and fast, this procedure permits navigation through large databases in a practical amount of time. As example applications, we present the building of an average volume of interest and preliminary results of classification according to morphology.
Characterization of radiographic trabecular bone structure with Gabor wavelets
Author(s):
Antero Jarvi;
Erkki Tammisalo;
Olli Nevalainen
Show Abstract
We propose a new method for non-invasive measurement of the structural properties of the radiographic trabecular pattern for a quantitative characterization of trabecular bone architecture. The method actively searches for radiographic shadows that originate from individual trabecular elements; these shadows are used for measuring structural properties of trabecular bone. Filtering with Gabor wavelets is used to produce orientation- and scale-specific channels for the extraction of trabecular structures, and we give a design of the Gabor wavelet for this application. The performance of our method is visualized with radiographs from various anatomical sites, showing the outlines of the detected trabecular elements superimposed on the radiograph.
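A rough sketch of one channel of such a filter bank, using a real even-symmetric Gabor kernel; the frequencies, orientations, and sigma rule below are illustrative choices, not the design given in the paper.

    import numpy as np
    from scipy.ndimage import convolve

    def gabor_kernel(freq, theta, sigma, size=31):
        # Real (even-symmetric) Gabor kernel tuned to spatial frequency
        # `freq` (cycles/pixel) and orientation `theta` (radians).
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
        return envelope * np.cos(2.0 * np.pi * freq * xr)

    # A small orientation/scale bank applied to a stand-in radiograph:
    img = np.random.rand(128, 128)
    bank = [gabor_kernel(f, t, sigma=1.5 / f)
            for f in (0.1, 0.2)
            for t in np.deg2rad([0, 45, 90, 135])]
    responses = [convolve(img, k) for k in bank]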
Object-oriented framework for rapid development of image analysis applications
Author(s):
Weidong Liang;
Xiangmin Zhang;
Milan Sonka
Show Abstract
Image analysis applications are usually composed of a set of graphic objects, a set of image processing algorithms, and a graphic user interface (GUI). Typically, developing an image analysis application is time-consuming and the developed programs are hard to maintain. We have developed a framework called IMANAL that aims at reducing development costs by improving system maintainability, design change flexibility, component reusability, and human-computer interaction. IMANAL decomposes an image analysis application into three models: a data model, a process model, and a GUI model. The three models, as well as the collaboration among them, are standardized into a unified system architecture, and a new application can be developed rapidly by customizing task-specific building blocks within that architecture. IMANAL maintains a class library of more than 100,000 lines of C/C++ code that are highly reusable for creating the three above-mentioned models. Software components from other sources, such as Khoros, can also be easily included in applications. IMANAL was used for the development of image analysis applications utilizing a variety of medical images, such as x-ray coronary angiography; intracardiac, intravascular, and brachial ultrasound; and pulmonary CT. In all of the above applications, the development overhead is removed and the developer is able to focus fully on the image analysis algorithms. IMANAL has proven to be a useful tool for image analysis research as well as a prototype development tool for commercial image analysis applications.
Multidimensional approach to medical image processing
Author(s):
Xavier L. Battle;
Yves Bizais
Show Abstract
This paper addresses the issue of writing image processing algorithms and programs that are independent of the dimension of the dataset. Such an approach aims at writing libraries and tool-boxes that are smaller as well as easier to debug. The data to be processed are stored in a multi-dimensional, self-documented format describing not only the content of the image, but also its context and the conditions of its acquisition. The work presented in this paper is based on the image kernel of the MIMOSA standard. We propose a recursive programming scheme that allows one to write general algorithms for such multi-dimensional images. Thanks to the recursion, the design of such algorithms is easy and intuitive. Moreover, the computational cost remains comparable to that of dimension-specific algorithms: the cost of the recursion is negligible compared to the cost of non-trivial processing. We present an implementation of a reduced version of the MIMOSA image kernel, show how elementary processing such as convolution and filtering can be easily implemented, and finally propose an algorithm for the n-D fast Fourier transform operating on real data.
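The recursive scheme can be illustrated as follows: a 1-D operation is lifted to n dimensions by recursing over the leading axes, so the same code runs unchanged on 1-D signals, 2-D images, or 3-D volumes. This is a generic sketch of the idea, not the MIMOSA kernel itself.

    import numpy as np

    def apply_1d(op, image):
        # Apply a 1-D operation along the last axis of an n-D image by
        # recursing over the leading axes; `op` maps a 1-D array to a
        # 1-D array of the same length.
        if image.ndim == 1:
            return op(image)
        return np.stack([apply_1d(op, sub) for sub in image])

    def smooth(v):
        # 3-point moving average of a 1-D signal.
        return np.convolve(v, np.ones(3) / 3, mode='same')

    volume = np.random.rand(4, 5, 6)
    out = apply_1d(smooth, volume)   # works for any number of dimensions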
Effective chromatic texture coding for robust skin disease minimal descriptor quantification
Author(s):
Rodolfo A. Fiorini;
M. Crivellini;
G. Codagnone;
Gianfranco F. Dacquino;
M. Libertini;
A. Morresi
Show Abstract
Among the various skin diseases, skin tumors are the most serious, and melanoma is particularly dangerous: its malignant evolution lasts about 5 or 6 years and ends with the death of the patient. Early diagnosis is a powerful means of preventing this evolution, allowing prompt intervention which increases the probability of recovery and survival. The purpose of this paper is to present an active support system (ASS) able to reveal and quantify the stage of disease evolution. The work focuses on the problems encountered in encoding chromatic information and quantifying morphological aspects. A new method is proposed which permits robust and reliable quantification of image data obtained via a digital epiluminescence dermatoscopy (DELM) apparatus, designed and built with interesting new features and taking advantage of polarized light guided by optical fibers. The image information extraction is based on a minimal descriptor set of parameters in order to classify chromatic texture and morphological features. Several dermatoscopic features have been proposed by different research groups to discriminate between malignant and benign melanocytic lesions, and many attempts have been made to reach a reliable and objective classification procedure; we adopt, as reference, the approach used by Stanganelli and Kenet. Through a bioengineering analysis we organize reference grids that offer the possibility of extracting the maximum information content from dermatological data. The classification takes into account the Spread and Intrinsic Descriptors and corresponds to the best operative description, making these grids suitable tools for applications which require an ASS for diagnosis and permitting quantitative evaluations as well. We propose a method based on geometrical synthetic descriptors. The results obtained allow for a disease classification procedure with determination of reference grids for pathological cases, and ultimately permit effective early diagnosis of melanotic disease and its follow-up in time. The first results and the ongoing work point toward the realization of an automatic support system for general dermatological applications.
Mean scatterer-spacing estimation using the complex cepstrum
Author(s):
Rashidus S. Mia;
Murray H. Loew;
Keith A. Wear;
Robert F. Wagner
Show Abstract
This paper presents a new method of estimating the distance between regularly spaced coherent scatterers within soft tissue from backscattered radio-frequency (RF) signals. This scatterer spacing has been used successfully to classify tissue type, diagnose diffuse liver and kidney disease, and diagnose Hodgkin's disease involvement in the spleen. The new method makes use of the complex cepstrum to identify periodic structure in the backscattered ultrasound signals. Periodic components in the time-domain RF signal manifest themselves as peaks in the quefrency (cepstral) domain, so the task of estimating the scatterer spacing is reduced to identifying peaks in the cepstrum. Using simulation data, we show that peaks in the quefrency domain (corresponding to known periodic components in the RF signal) are easier to detect when the cepstrum is computed using the complex cepstrum rather than the commonly used power cepstrum. Similar improvements are seen using phantom and in vivo liver data.
Keywords: ultrasound, scatterer spacing, power cepstrum, complex cepstrum, periodicity
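A minimal sketch of the complex cepstrum applied to a toy RF line with echoes every 40 samples; real RF data needs more careful phase handling (e.g. removing the linear phase trend before unwrapping), which is omitted here.

    import numpy as np

    def complex_cepstrum(x):
        # Complex cepstrum: inverse FFT of the log spectrum, keeping the
        # unwrapped phase (the power cepstrum would discard the phase).
        X = np.fft.fft(x)
        log_X = np.log(np.abs(X) + 1e-12) + 1j * np.unwrap(np.angle(X))
        return np.fft.ifft(log_X).real

    rf = np.random.randn(1024)
    rf[::40] += 5.0                   # coherent scatterers every 40 samples
    cep = complex_cepstrum(rf)
    print(10 + np.argmax(np.abs(cep[10:200])))   # peak expected near 40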
Edge detection in image sequence processing
Author(s):
Klaus Haarbeck;
Johannes Bernarding;
Brian Lofy;
Jack Sklansky;
Thomas Tolxdorff
Show Abstract
Images with a low signal-to-noise ratio (SNR) were processed with different algorithms based on anisotropic diffusion (AD), which reduces noise while preserving or enhancing edges. Since sequences provide more image information, we developed an extension of the AD: in the modified AD, the diffusion coefficients are used to vary the contrast normalization of successive frames. The Chamfer distance was used to measure the displacement of edges between the original and the processed images. Phantom images with varying gray levels and SNR, with fluctuating borders, and with gross distortions were tested, as were clinical ultrasound images of the abdomen. The feed-forward anisotropic diffusion (FFAD) scheme showed improved edge-preserving capability for the phantom images as compared to the AD. Transferring image information between frames stabilized the edge detection even in cases where gross distortions or fluctuating contrast due to overall signal intensity changes led to geometric shifts in the AD. Applying the FFAD to ultrasound images, the differences were less pronounced, partly because of the different noise behavior.
Keywords: Noise reduction, image-sequence processing, anisotropic diffusion, edge detection
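For orientation, the sketch below is the classic single-image Perona-Malik diffusion that the AD schemes above build on; the frame-to-frame coefficient transfer of the FFAD is not shown, and the border handling (periodic, via np.roll) is a simplification.

    import numpy as np

    def anisotropic_diffusion(img, n_iter=20, kappa=20.0, lam=0.2):
        # Perona-Malik scheme: diffuse where gradients are small, stop
        # diffusing across strong edges.
        def g(d):
            return np.exp(-(d / kappa) ** 2)   # edge-stopping function
        u = img.astype(float).copy()
        for _ in range(n_iter):
            dn = np.roll(u, -1, axis=0) - u    # 4 nearest-neighbour gradients
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
        return u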
Toward retinal vessel parameterization
Author(s):
Xiaohong Gao;
Anil A. Bharath;
Alun J. Hughes;
Alice Stanton;
Neil Chapman;
Simon Thom
Show Abstract
The reliable measurement of retinal vessel geometry in ordinary red-free fundus photographs is a challenging issue. Quite apart from refractive effects which can distort absolute measures, the existence of the central light reflex poses some interesting problems in vessel modelling and segmentation. One aim of our research is to obtain subpixel vessel width precision from digitised retinal photographs. For this purpose, we require a reasonable physical and mathematical model for the vessel intensity profile. In this paper, we review the physical principles behind the image formation, leading to a twin Gaussian model, and test four non-linear parametric models based on Gaussian forms. Some applications of this model-based approach are introduced, including the parametric determination of the vessel centre and the accurate determination of vessel width in typical noisy clinical images.
Keywords: retinal imaging, non-linear parametric models, parametric image models, sub-pixel resolution
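One plausible reading of a twin Gaussian profile is a broad dark vessel Gaussian plus a narrower bright Gaussian for the central light reflex; the functional form and the initial guesses below are assumptions for illustration, fitted with scipy.optimize.curve_fit.

    import numpy as np
    from scipy.optimize import curve_fit

    def twin_gaussian(x, base, a1, a2, mu, s1, s2):
        # Assumed form: background minus a vessel Gaussian, plus a
        # narrower central-reflex Gaussian at the same centre mu.
        return (base - a1 * np.exp(-(x - mu) ** 2 / (2 * s1 ** 2))
                     + a2 * np.exp(-(x - mu) ** 2 / (2 * s2 ** 2)))

    # profile: 1-D intensity samples across a vessel; p0 is a rough guess.
    # x = np.arange(len(profile))
    # popt, _ = curve_fit(twin_gaussian, x, profile,
    #                     p0=(profile.max(), 50, 20, len(profile)/2, 4.0, 1.5))
    # Sub-pixel vessel centre = popt[3]; a width estimate follows from popt[4].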
CAD system for full-digital mammography and its evaluation
Author(s):
Hidefumi Kobatake;
Kenichi Okuno;
Masayuki Murakami;
Masamitsu Ishida;
Hideya Takeo;
Shigeru Nawano
Show Abstract
The purpose of this study is to develop a clinical intelligent workstation for computer-aided diagnosis (CAD) of breast cancer using full-digital mammography. It consists of a clinical workstation and the Fuji Computed Radiography 9000 System. New image processing methods to extract tumor masses and clustered microcalcifications have been developed and implemented in the CAD system. A new filter, called the Iris Filter, has been developed to detect tumor candidates; it provides reliable detection of tumor candidates regardless of their sizes and their contrast against the background on mammograms. A new method based on mathematical morphology has been developed to detect microcalcifications and is adaptive to the imaging conditions of the mammograms. One thousand two hundred and twelve CR images, which include 240 malignant tumors, were used to test the performance of the system. The sensitivity for malignant tumors was 90.5% and the average number of false positives per image was only 1.3. The true-positive detection rate for clustered microcalcifications was 89.2% and the average number of false positives per image was 0.36. The high true-positive rates and low false-positive rates characterize the proposed full-digital CAD system and show the possibility of practical application of computer-aided diagnosis of breast cancer.
Keywords: breast cancer, mammography, CAD, malignant tumors, microcalcifications, iris filter, morphological processing
New approach in knowledge-based automatic interpretation of CT skull images
Author(s):
Vincente Grau Colomer;
Mariano Luis Alcaniz-Raya;
Christian Juan Knoll;
Salvador Estela Albalat;
M. Carmen Juan
Show Abstract
In this paper we present an automatic system for the segmentation and recognition of different tissues in maxillofacial CT images. The system is designed as a low-level segmentation (LLS) module and a brain module performing high-level segmentation (HLS) to dynamically validate anatomic information about these structures. Our procedure differs from previous attempts in its use of advanced low-level segmentation operators and specific knowledge bases that embody knowledge about tissue characteristics, not about specific anatomical structures or organs. In order to increase the confidence and accuracy in the segmentation of teeth and implants, which are very similar to adjacent cortical (and spongiosa) tissues, a spatial matched filter approach is applied, which allows shape-sensitive target detection and an initial approximation of trained object forms. System results, tested on CT images from five patients and running on PC-based hardware, are very promising in both accuracy and processing time. The developed system has applications in dental implantology, allowing the optimisation of 3D surgical planning on low-cost PC-based workstations.
Keywords: expert systems, image segmentation, maxillofacial surgery planning, spatial matched filters
Automatic algorithm for registering digital images of multiple skin lesions
Author(s):
Bruce McGregor
Show Abstract
Malignant melanoma rates are rising rapidly in white populations. Early detection, primarily through observation of new or changing pigmented skin lesions, is crucial for patient survival. Unfortunately these changes are difficult to detect in high-risk patients who have a large number of lesions. Photographic records could be invaluable in these cases, but images must first be "registered" to remove distortions introduced by varying camera angles and subject postures. This paper describes a computer algorithm for registering pairs of digital images of multiple skin lesions automatically. The algorithm comprises four steps: (1) locate and label the lesions within each image; (2) identify matching lesions based on local spatial relationships; (3) test and refine lesion matches based on global spatial relationships; and (4) transform one image. Results are presented of the algorithm's performance on images of both simulated and actual skin lesions. The findings demonstrate that registration is successful under realistic imaging conditions.
Keywords: image registration, skin lesions, melanoma, matching, point pattern
Skeletal muscle sonography with texture analysis
Author(s):
Regina Pohle;
Ludwig von Rohden;
Dagmar Fisher
Show Abstract
In this paper a computer system is presented which is designed to support the physician in the evaluation of muscle ultrasound images. For this purpose a multitude of texture features are calculated for each region of interest (ROI), from which one optimal subset is selected for each of the different diagnostic problems. The results achieved so far are presented and possibilities for improvement are discussed.
Keywords: texture analysis, tissue characterization, feature extraction, ultrasound, neuromuscular diseases
Wavelet-based feature extraction for mammographic lesion recognition
Author(s):
Lori Mann Bruce;
Reza R. Adhami
Show Abstract
In this paper, multiresolution analysis, specifically the discrete wavelet transform modulus-maxima method, is utilized for the extraction of mammographic lesion shape features. These shape features are used in a classification system to classify lesions as cysts, fibroadenomas, or carcinomas. The multiresolution shape features are compared with traditional uniresolution shape features for their class-discriminating abilities. The study involves 60 digitized mammographic images; the lesions are segmented prior to introduction to the classification system. The uniresolution and multiresolution shape features are calculated using the radial distance measure of the lesion boundaries, and the discriminating power of the shape features is analyzed via linear discriminant analysis. The classification system utilizes a simple Euclidean distance measure to determine class membership and is tested using the apparent and leave-one-out test methods. When the multiresolution shape features are used, the classification rates are 83% and 80% for the apparent and leave-one-out test methods, respectively; with only the uniresolution shape features, the corresponding rates are 72% and 68%.
Keywords: wavelet transform, multiresolution analysis, feature extraction, classification, shape, mammography, image processing
Hybrid adaptive wavelet-based CAD method for mass detection
Author(s):
Wei Qian;
Lihua Li;
Fei Mao;
Laurence P. Clarke
Show Abstract
The theoretical basis of adaptive multiresolution and multiorientation wavelet transform methods for image preprocessing is described, aimed at improved CAD performance in breast cancer screening using digital mammography. The method is an extension of an earlier reported method that used fixed parameters for the multiorientation wavelet transforms. Simulation results are described to demonstrate the importance of using higher-order transforms. The computed FROC results for the previously reported non-adaptive methods are summarized; they compare well with those reported in the literature and demonstrate the potential improvement if adaptive methods are used.
Automated segmentation of anatomic regions in chest radiographs using an adaptive-sized hybrid neural network
Author(s):
Osamu Tsujii;
Matthew T. Freedman M.D.;
Seong Ki Mun
Show Abstract
The purposes of this research are to investigate the effectiveness of our novel image features for segmentation of anatomic regions, such as the lungs and the mediastinum, in chest radiographs and to develop an automatic computerized method for image processing. A total of 85 screening chest radiographs from Johns Hopkins University Hospital were digitized to 2K by 2.5K pixels with 12-bit gray scale. To reduce the amount of information, the images were smoothed and subsampled to 256 by 310 pixels with 8-bit gray scale. The approach classifies each pixel into two anatomic classes (lung and others) on the basis of several image features: (1) relative pixel address (Rx, Ry) based on lung edges extracted through profile-based image processing, (2) density normalized from lung and mediastinum density, and (3) histogram-equalized entropy. The combinations of image features were evaluated using an adaptive-sized hybrid neural network (ASH-NN) consisting of an input, a hidden, and an output layer. Fourteen images were used for training the neural network and the remaining 71 images for testing. Using the four features of relative address (Rx, Ry), normalized density, and histogram-equalized entropy, the neural network classified the lungs with 92% accuracy on the test images, following the same rules as for the training images.
Keywords: chest radiograph, anatomic region, segmentation, image feature, neural network, self-organization
Anatomic region-based dynamic range compression for chest radiographs using warping transformation of correlated distribution
Author(s):
Osamu Tsujii;
Matthew T. Freedman M.D.;
Seong Ki Mun
Show Abstract
The purpose of this research is to investigate the effectiveness of our novel dynamic range compression for chest radiographs. Dynamic range compression preserves detail information, making diagnosis easier when using narrow-dynamic-range viewing systems such as monitors. First, an automated segmentation method was used to detect the lung region, and the combined region of mediastinum, heart, and subdiaphragm was defined based on the lung region. The correlated distributions between a pixel value and its neighboring averaged pixel value were calculated for the lung region and the combined region. According to the overlap of the two distributions, the warping function was decided. After the pixel values were warped, the pixel-value range of the lung region was compressed while the detail information was preserved. The performance was evaluated with our criterion function, the contrast divided by the moment. For seventy-one screening chest images from Johns Hopkins University Hospital, this method improved our criterion function by 11.7% on average. The warping transformation algorithm based on the correlated distribution was effective in compressing the dynamic range while simultaneously preserving the detail information.
Keywords: dynamic range compression, chest radiograph, anatomic information, lung region, image processing, correlated distribution, warping transformation
Development of an automatic follicle isolation tool for ovarian ultrasonographic images
Author(s):
Gordon E. Sarty;
Milan Sonka;
Weidong Liang;
Roger A. Pierson
Show Abstract
PURPOSE: To develop an automatic computer algorithm for isolating the follicles in ovarian ultrasonographic images. METHODS: A semi-automatic algorithm has been developed as the first step toward a totally automatic follicle isolation tool for ovarian ultrasonographic imaging. The algorithm is knowledge-based and depends upon the use of a priori information about the structure of the follicle. Graph searching techniques are used along with a method of assigning graph node costs that combines edge information with the a priori knowledge. Interactive identification of the follicle of interest is followed by, in some cases, manual editing of an automatically defined interior boundary. After the interior boundary has been defined, the outer follicle wall border is found without further human intervention. RESULTS: Based on a test with 31 ultrasonographic images of women's ovaries made in vivo, the algorithm is able to locate the outer follicle wall to an rms accuracy of 0.59 mm +/- 0.28 mm in comparison with manual boundary tracing by a human expert. BREAKTHROUGHS: The isolation of ovarian follicles in ultrasonographic imaging has heretofore only been accomplished by manual tracing. Our semi-automated follicle border finding algorithm is, to the best of our knowledge, the first computerized method capable of finding the outer wall boundary of the follicle. CONCLUSIONS: The success of our semi-automatic follicle isolation algorithm clearly demonstrates the feasibility of a totally automatic tool that should have wide application in ultrasonographic studies of ovarian follicular dynamics.
Three-dimensional reconstruction of the coronary arterial tree from several sets of biplane angiograms with simultaneous estimation of imaging geometry
Author(s):
Naozo Sugimoto;
Chikao Uyama;
Shinobu Mizuta;
Hiroshi Watabe;
Shin-ichi Urayama
Show Abstract
A new method is proposed for reconstructing the 3D structure of the coronary arterial tree from angiograms. Instead of identifying corresponding points on the images, several sets of biplane angiograms are used, and the parameters of the imaging geometries are estimated simultaneously. Several sets of biplane angiograms are usually obtained during one angiographic examination, yet only one set is typically used for 3D reconstruction of the coronary arterial tree. If only one set of biplane angiograms is used, it is necessary to identify corresponding points on both images, which is very difficult and often impossible. To overcome this difficulty, we use several sets of biplane angiograms for the 3D reconstruction. If the precise parameters of the imaging geometries are known, the 3D structure of the coronary arterial tree can be obtained by back-projecting each angiogram; usually, however, only approximate parameters are known. We therefore developed a method for 3D reconstruction of the coronary arterial tree with simultaneous estimation of the imaging geometry. In this paper, we present the algorithm for our method and demonstrate its application to clinical data.
Simultaneous process of automated 3D registration and segmentation on medical images
Author(s):
Shinobu Mizuta;
Shin-ichi Urayama;
Hiroshi Watabe;
Naozo Sugimoto;
Chikao Uyama
Show Abstract
We propose a novel method that performs registration among images and segmentation simultaneously, using multiple 3D head images of different modalities from the same subject. In this method, image segmentation is performed using the result of vector quantization (VQ) of a multi-dimensional feature-space distribution that describes the relation of the voxel values. Registration is carried out by optimizing the translation and rotation parameters to minimize the VQ distortion. First, we examined the characteristics of the feature-space histogram using simplified head images. Next, we showed the usefulness of the proposed method for these images; here, we defined the measure of VQ distortion and an automated method to extract the VQ centroids. Finally, we show an example applying this method to a T1-weighted MR image and a CBF PET image.
Adaptive segmentation of an x-ray CT image using vector quantization
Author(s):
Lihua Li;
Wei Qian;
Laurence P. Clarke
Show Abstract
This paper is part of a feasibility study of using an image segmentation method to automatically identify tumor or target boundaries in each axial slice, or to assist an expert physician in manually drawing these boundaries. A two-stage segmentation method is proposed. In the first stage, the outlying bone structure is removed from the raw CT data and the brain parenchymal area is extracted. A VQ-based method is then applied for the segmentation of the soft tissue inside the brain area. Representative results for two sets of x-ray CT axial slice images from two patients are presented, and problems and further modifications are discussed.
Automatic shape analysis and classification of mammographic calcifications
Author(s):
Marios A. Gavrielides;
Maria Kallergi;
Laurence P. Clarke
Show Abstract
The morphology and distribution of mammographic calcifications and the way these elements vary within a cluster are valuable in distinguishing between benign and malignant calcifications. The specific aims of this study were (a) the development of an automatic tool that differentiates between benign and malignant clustered calcifications based on their morphological properties, and (b) the determination of the effects of image spatial resolution on the classification process. The long-term aim of the project is to use this tool to categorize detected clusters into the various types described in the Breast Imaging Reporting and Data System of the American College of Radiology and to assist radiologists in their diagnosis.
Quantitative core-based shape comparison
Author(s):
Kevin O. Lepard;
Richard A. Robb
Show Abstract
Comparison of shapes is at best a difficult problem. Although many methods of measuring shapes are available, such as circularity, Fourier descriptors, and invariant moments, these methods generally suffer from one or more of the following drawbacks: (1) requiring previous segmentation of the shape, (2) inability to relate the metric intuitively to the shape, and (3) inability to describe local features of object shape. We describe two new metrics based on cores: the average chamfer distance and the average fractional difference. These metrics do not require prior segmentation of objects, can be used to describe local features of object shape, and are intuitively related to the degree of shape similarity or dissimilarity. Furthermore, we demonstrate that these metrics are well-behaved, producing output that varies in a predictable fashion, increasing in value as shapes become increasingly different and decreasing in value as shapes become increasingly similar.
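For comparison, a generic average chamfer distance between two binary shapes can be sketched as below; this is the standard boundary-to-boundary form, not necessarily the authors' core-based formulation.

    import numpy as np
    from scipy.ndimage import binary_erosion, distance_transform_edt

    def average_chamfer_distance(mask_a, mask_b):
        # Mean distance from each boundary pixel of shape A to the nearest
        # boundary pixel of shape B (masks are boolean arrays).
        def boundary(m):
            return m & ~binary_erosion(m)
        # Distance map: distance of every pixel to the boundary of B.
        dist_to_b = distance_transform_edt(~boundary(mask_b))
        return dist_to_b[boundary(mask_a)].mean()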
Unsupervised sputum color image segmentation for lung cancer diagnosis based on a Hopfield neural network
Author(s):
Rachid Sammouda;
Noboru Niki;
Hiroshi Nishitani;
S. Nakamura;
Shinichiro Mori
Show Abstract
This paper presents a method for the automatic segmentation of sputum cells in color images, toward an efficient algorithm for lung cancer diagnosis based on a Hopfield neural network. We formulate the segmentation problem as the minimization of an energy function constructed with two terms: a cost term defined as a sum of squared errors, and a second term of temporary noise added to the network as an excitation to escape certain local minima, with the result of being closer to the global minimum. To increase the accuracy in segmenting the regions of interest, a preclassification technique is used to extract the sputum cell regions within the color image and remove those of the debris cells. The result is then given, together with the raw image, to the input of the Hopfield neural network to make a crisp segmentation by assigning each pixel a label such as background, cytoplasm, or nucleus. The proposed technique yielded correct segmentation of the complex scenes of sputum samples prepared by an ordinary manual staining method in most of the images tested, selected from our database containing thousands of sputum color images.
Medical image segmentation using 3D seeded region growing
Author(s):
R. Kyle Justice;
Ernest M. Stokely;
John S. Strobel;
Raymond E. Ideker M.D.;
William M. Smith
Show Abstract
A flexible framework for medical image segmentation has been developed. The semi-automatic method effectively segments imaging data volumes through the use of 3D region growing guided by initial seed points. Seed voxels may be specified interactively with a mouse or through the selection of intensity thresholds. Segmentation proceeds automatically following seed selection on only a few slices in the volume due to the 3D nature of the region growth. Computational efficiency is realized by utilizing fast data structures. The 3D region growing algorithm has been used for a variety of segmentation tasks. Magnetic resonance (MR) brain volumes acquired at all three imaging orientations have been accurately segmented. The method also was applied to clinical short-axis cardiac data sets for the extraction of the endocardial blood pool. Additionally, preliminary results indicate that myocardial infarcts from high resolution MR images of formalin-fixed hearts may be segmented using our region growing approach. The algorithm is not confined to a particular imaging modality or orientation. It makes use of information in the third dimension, resulting in increased accuracy. Moreover, the entire method can be implemented in a short amount of time due to its simplicity.
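A minimal sketch of 3D seeded region growing in the spirit described above, assuming a 6-connected neighborhood and a fixed intensity interval as the growth criterion (the paper's actual criteria and data structures may differ); a deque stands in for the "fast data structures".

```python
import numpy as np
from collections import deque

def region_grow_3d(vol, seeds, lo, hi):
    """Grow from seed voxels, accepting 6-neighbours with intensity in [lo, hi]."""
    grown = np.zeros(vol.shape, dtype=bool)
    queue = deque(seeds)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    while queue:
        z, y, x = queue.popleft()
        if grown[z, y, x] or not (lo <= vol[z, y, x] <= hi):
            continue
        grown[z, y, x] = True
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < vol.shape[0] and 0 <= ny < vol.shape[1]
                    and 0 <= nx < vol.shape[2] and not grown[nz, ny, nx]):
                queue.append((nz, ny, nx))
    return grown

# Toy volume: a bright cube embedded in noise, one interactive seed.
vol = np.random.rand(32, 32, 32)
vol[8:24, 8:24, 8:24] += 2.0
mask = region_grow_3d(vol, seeds=[(16, 16, 16)], lo=1.5, hi=4.0)
print(mask.sum(), "voxels grown")
```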
Mammographic feature generator for evaluation of image analysis algorithms
Author(s):
Janne J. Naeppi;
Peter B. Dean
Show Abstract
We introduce a mammographic feature generator which can be used to evaluate image analysis algorithms for digital mammography. Several types of calcifications and focal lesions may be generated, synthetic noise may be added, and the features may be embedded into a digital mammogram. As an application, we demonstrate the use of the generator in comparing the performance of two feature extraction algorithms.
3D nonlinear registration algorithm for brain SPECT imaging within the Talairach reference system
Author(s):
Jean Meunier;
Bernard Imbert;
Christian Janicki;
Alexandre Guimond;
Jean-Paul Soucy
Show Abstract
The comparison of SPECT volumes requires their accurate registration. For this purpose, we present a non-linear registration method for SPECT brain studies that is independent of any other imaging modality or head fixation device. The registration of two brain SPECT volumes is carried out in two main steps. First, the SPECT volumes must be aligned along a unique coordinate system; in this study, the standard Talairach and Tournoux reference system is adopted. Second, a 'fine tuning' registration is needed to better take into account individual brain sizes and morphologies. In order to validate the algorithm quantitatively, a set of 64 × 64 SPECT Monte Carlo simulations was produced with numerical brain phantoms as input, obtained from segmented MRI brain images. The intercommissural line, which defines the main axis of the Talairach and Tournoux reference system, was detected accurately in both position and angulation, with errors typically less than 0.15 pixels and 2.5 degrees, respectively. As for the second step, results show that an optical flow approach can be used successfully to match two brains precisely, and that a difference map can be computed at the same time for statistical inference, with RMS differences essentially due to noise or activity differences peculiar to each brain.
Artificial neural networks in chest radiographs: detection and characterization of interstitial lung disease
Author(s):
Takayuki Ishida;
Shigehiko Katsuragawa;
Kazuto Ashizawa;
Heber MacMahon;
Kunio Doi
Show Abstract
We have developed a computerized scheme for the detection of interstitial lung disease using artificial neural networks (ANNs) applied to quantitative analysis of digital image data. Three separate ANNs were applied in the scheme. The first ANN was trained with horizontal profiles in ROIs selected from digital chest radiographs. The second ANN was trained with the vertical output patterns obtained from the first ANN in each ROI. The output from the second ANN was used to distinguish between normal and abnormal ROIs. In order to improve performance, we attempted a density correction and rib edge removal. The Az value was improved from 0.906 to 0.934 by incorporating the density correction. For the classification of each chest image, we employed a rule-based method and a rule-based method combined with a third ANN. A high Az value was obtained with the rule-based plus ANN method. The ANNs can learn certain statistical properties associated with patterns of interstitial infiltrates in chest radiographs.
Morphological filtering and multiresolution fusion for mammographic microcalcification detection
Author(s):
Lulin Chen;
Chang Wen Chen;
Kevin J. Parker
Show Abstract
Mammographic images are often of relatively low contrast and poor sharpness, with a non-stationary background or clutter, and are usually corrupted by noise. In this paper, we propose a new method for microcalcification detection using gray-scale morphological filtering followed by multiresolution fusion, and present a unified general filtering form, called the local operating transformation, for whitening filtering and adaptive thresholding. The gray-scale morphological filters are used to remove all large areas that are considered non-stationary background or clutter variations, i.e., to prewhiten the images. The multiresolution fusion decision is based on matched filter theory. In addition to the normal matched filter, the Laplacian matched filter, which is directly related through the wavelet transform to multiresolution analysis, is exploited for microcalcification feature detection. At the multiresolution fusion stage, region growing techniques are used at each resolution level. The parent-child relations between resolution levels are used to make the final detection decision. FROC curves are computed from tests on the Nijmegen database.
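The prewhitening step can be sketched with a gray-scale white top-hat (the residue of a morphological opening), which suppresses slowly varying background while preserving small bright structures; the threshold rule below is an illustrative stand-in for the paper's matched-filter and fusion stages, and all sizes are assumptions.

```python
import numpy as np
from scipy.ndimage import white_tophat

rng = np.random.default_rng(8)
background = np.outer(np.linspace(50, 150, 256), np.ones(256))  # smooth trend
img = background + rng.normal(0.0, 3.0, (256, 256))
img[100:103, 100:103] += 40.0        # a small bright "microcalcification"

residual = white_tophat(img, size=9) # removes structures wider than ~9 px
candidates = residual > 5.0 * residual.std()
print(int(candidates.sum()), "candidate pixels")
```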
Computer-assisted analysis of the extracellular matrix of connective tissue
Author(s):
Slawomir Krucinski;
Izabella Krucinska;
Srinivasan Veeravanallur;
Krzysztof Slot
Show Abstract
A new computerized imaging technique based on circularly polarized light microscopy was developed to measure the orientation of collagen fibers in images of serial sections of connective tissue. The system consists of a modified Olympus BX50 polarized microscope, a Sony AVC-D7 video camera, and a Silicon Graphics Indy computer. Both measurement methods required an initial segmentation of the fibers and used binary images. Segments of fiber midlines were traced with vertical and horizontal scanlines, or alternately the whole midlines were identified recursively from the Euclidean distance map of the image using a novel definition of the medial axis transform. The latter technique produced connected midlines of the fibers and handled sinuous fibers well. The fiber midlines produced by this technique were traversed by a midline traversal algorithm, and the orientation distribution was obtained by least-squares line fitting. The accuracy of the developed techniques was evaluated against synthetic images composed of straight lines and sinuous curves. Kuiper's statistic was used to evaluate the consistency of the fiber orientation calculations. Statistical analysis of the results showed that the proposed medial axis transform with Hilditch's connectivity-preserving skeletonization produced the most accurate results. The developed method was used to measure collagen fiber orientation in microscopy images of canine meniscus, porcine aortic valve leaflet, bovine pericardium, and biotextiles.
Shape analysis of pulmonary nodules based on thin-section CT images
Author(s):
Yoshiki Kawata;
Noboru Niki;
Hironobu Ohmatsu;
Kenji Eguchi;
Noriyuki Moriyama
Show Abstract
Shape characterization of small pulmonary nodules plays a significant role in the differential diagnosis that discriminates malignant from benign nodules at early stages of pulmonary lesion development. This paper presents a method to characterize small pulmonary nodules based on the morphology of developing lung lesions in thin-section CT images. The feature extraction process focuses on the difference between malignant and benign surface characteristics. Experiments demonstrating the method's potential to improve diagnostic accuracy are also presented, applying the algorithm to eighteen cases comprising twelve malignant and six benign nodules.
Computer aided diagnosis system for lung cancer based on helical CT images
Author(s):
Shunsuke Toshioka;
Keizo Kanazawa;
Noboru Niki;
Hitoshi Satoh;
Hironobu Ohmatsu;
Kenji Eguchi;
Noriyuki Moriyama
Show Abstract
Lung cancer is known as one of the most difficult cancers to cure. Detecting lung cancer in its early stage can help medical treatment limit the danger. A conventional technique that assists detection uses helical CT, which provides 3D cross-sectional images of the lung. We expect that the proposed technique will increase diagnostic confidence. However, mass screening based on helical CT images produces a considerable number of images for diagnosis, and this time-consuming fact makes it difficult to use in the clinic. To increase the efficiency of the mass screening process, we developed an algorithm for the automatic detection of lung cancer candidates in helical CT images. Our algorithm consists of analysis and diagnosis procedures. In the analysis procedure, we extract the lung regions and the pulmonary blood vessel regions and analyze the features of these regions using image processing techniques. In the diagnosis procedure, we define diagnosis rules based on these features and detect tumor candidates using these rules. The diagnostic algorithm was applied to the helical CT images of 450 cases which had been diagnosed by three radiologists. Our system detected all tumors which were suspected to be lung cancer by the experts. Currently, we are planning to carry out a field test using our algorithm to evaluate its efficiency for visual diagnosis.
Automatic recognition of image contents using textural information and a synergetic classifier
Author(s):
Frank Weiler;
Frank Vogelsang;
Markus W. Kilbinger;
Berthold B. Wein;
Rolf W. Guenther
Show Abstract
We describe a method to automatically determine what kind of x-ray examination a given image represents. A synergetic classifier is applied to a feature space that represents the textural information of the image using second-order statistics. The synergetic classifier is particularly suited to high-dimensional feature spaces and has the advantage of nearly automatic parameterization. The practicability of the method is demonstrated on a test suite consisting of numerous example images.
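A small illustration of second-order texture statistics: the sketch below builds a horizontal gray-level co-occurrence matrix and derives two classical features (contrast and energy). The paper's feature space and synergetic classifier are more elaborate; this only shows the kind of input involved, and all parameter values are assumptions.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Horizontal co-occurrence matrix plus two classical texture features."""
    scaled = img - img.min()
    q = np.clip((scaled / (scaled.max() + 1e-9) * levels).astype(int),
                0, levels - 1)
    left, right = q[:, :-1], q[:, 1:]          # pixel pairs at offset (0, 1)
    mat = np.zeros((levels, levels))
    np.add.at(mat, (left.ravel(), right.ravel()), 1)
    mat /= mat.sum()
    i, j = np.indices(mat.shape)
    contrast = ((i - j) ** 2 * mat).sum()      # high for rough texture
    energy = (mat ** 2).sum()                  # high for uniform texture
    return contrast, energy

rng = np.random.default_rng(2)
smooth = rng.normal(0, 1, (64, 64)).cumsum(axis=1)   # horizontally correlated
rough = rng.normal(0, 1, (64, 64))                   # spatially uncorrelated
print("smooth:", glcm_features(smooth))
print("rough: ", glcm_features(rough))
```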
Evaluation of fuzzy-neighborhood filters in medical imaging
Author(s):
Sevald Berg;
Bjoern Olstad
Show Abstract
In this paper we adapt the framework of a new class of nonlinear digital filters based on fuzzy connected neighborhoods to medical imaging and compare it with other families of digital image filters. An introduction to filters based on fuzzy neighborhoods is given. The framework is adapted to different medical imaging modalities. We discuss the underlying definition of the distance metric in the given applications and appropriate hard constraints in the extraction of pixel neighborhoods. The study demonstrates that fuzzy neighborhood filters are very useful in medical imaging. The fuzzy neighborhood approach offers more flexibility in the design of the filter and in the stability of the filtering process. The fuzzy connected neighborhood filters can perform strong filtering and still maintain fine, thin structures. The filter will not blur edges and this fact is used to define spatio-temporal filters for ultrasonic video sequences.
Neural net classification of liver ultrasonogram for quantitative evaluation of diffuse liver disease
Author(s):
Dong Hyuk Lee;
JongHyo Kim;
Hee Chan Kim;
Yong Woo Lee;
Byong Goo Min
Show Abstract
There have been a number of studies on the quantitative evaluation of diffuse liver disease using texture analysis techniques. However, previous studies have focused on classification between only normal and abnormal patterns based on textural properties, resulting in a lack of clinically useful information about the progressive status of liver disease. Based on our collaborative research experience with clinical experts, we judged that not only texture information but also several shape properties are necessary to successfully classify the various states of disease in liver ultrasonograms. Nine image parameters were selected experimentally. One of these was a texture parameter and the others were shape parameters measured as length, area, and curvature. We have developed a neural-net algorithm that classifies liver ultrasonograms into nine categories of liver disease: three main categories with three sub-steps for each. The nine parameters were collected semi-automatically from the user through a graphical user interface tool, and then processed to give a grade for each parameter. The classifying algorithm consists of two steps. In the first step, each parameter is graded into pre-defined levels using a neural network. In the next step, a neural network classifier determines the disease status using the nine graded parameters. We implemented a PC-based computer-assisted diagnosis workstation and installed it in the radiology department of Seoul National University Hospital. Using this workstation we collected 662 cases during 6 months. Some of these were used for training and the others were used for evaluating the accuracy of the developed algorithm. In conclusion, a liver ultrasonogram classifying algorithm was developed using both texture and shape parameters and a neural network classifier. Preliminary results indicate that the proposed algorithm is useful for the evaluation of diffuse liver disease.
New approach for the pelvic registration and fusion of CT and MR images
Author(s):
Yuh-Hwan Liu;
Yung-Nien Sun;
Jong-Iuan Chiou
Show Abstract
This paper presents a two-pass registration method to register pelvic CT and MR images. The geometrical relationship between the CT and MR images is determined by uniquely selected internal landmarks which are all located on the coxal bone; it can thus be assumed to be a rigid transformation. In the first registration pass, relative feature vectors, such as the normal vectors of the acetabular rim planes and the vector connecting the two centroids of the rims, are extracted and used for registration. The relative feature vectors determined from the edges of the acetabular rims are among the few features that can be observed on both CT and MR images. Registration results based on the relative feature vectors are less influenced by variations in the accuracy of the detection of absolute feature points. In the second registration pass, the corner points of the sacrum are used to eliminate distortion in the z-direction. A least-squares error approximation is used to obtain the transformation matrix in both registration passes. In addition, a complete system is developed to provide clinicians with all image processing operations and visualization. For the visualization of fused data, 2D overlapping display and 3D transparent display are used to illustrate the correspondence of different structures, including bones and soft tissues. The fused images clearly demonstrate the information from the two complementary modalities and are highly appreciated in clinical applications.
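The least-squares rigid transform used in both passes can be sketched with the standard SVD (Procrustes) solution for point correspondences; the landmark coordinates below are synthetic stand-ins for the acetabular-rim and sacrum features.

```python
import numpy as np

def rigid_fit(src, dst):
    """Rotation R and translation t minimizing ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(0)
pts = rng.normal(size=(6, 3))                    # synthetic landmarks
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
moved = pts @ R_true.T + np.array([5.0, -2.0, 1.0])
R, t = rigid_fit(pts, moved)
print(np.allclose(R, R_true), np.round(t, 3))    # expected: True [ 5. -2.  1.]
```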
Using scout images to reduce metal artifacts in CT with application to revision total hip surgery
Author(s):
Alan David Kalvin;
Bill Williamson
Show Abstract
X-ray CT images of objects containing metal are often corrupted by 'blooming' and streaking artifacts that radiate from the regions of the image where the metal is present. The best strategy for reducing these artifacts is to modify the noisy projection data before reconstructing the image. Unfortunately this approach does not lead to a very practical solution, since it assumes that one has access to the unencoded projection data. In reality, these data are virtually impossible to obtain, even for isolated research studies, let alone for routine clinical use. Our goal is to produce a practical, clinically useful system for reducing metal artifacts in real patient CT scans on a regular basis, so we need a method that is not dependent on sinogram data. However, it is extremely difficult, if not impossible, to reduce metal artifacts using just the information that exists in the noisy CT image itself; as we show, crucial image details may be completely erased by the artifacts. Therefore we have developed a new approach that suppresses these reconstruction artifacts by using scout images. We have applied this method successfully to real CT data.
Automatically determining the dimensions of digitally stored images
Author(s):
Michael G. Evanoff;
William J. Dallas;
Melissa J. Bjelland
Show Abstract
We frequently encounter digitally stored images whose formatting information, the number of lines and pixels, has been lost. Without the formatting information the image becomes practically inaccessible. Formatting loss can be caused by: (1) legacy images that come from somewhere in the dim past; (2) image acquisition devices that do not store the dimensions; (3) the proliferation of storage standards using headers that require software for each new format. In order to use the image, the dimensions must be recovered. We developed a robust method that determines the width and height of images stored in lexicographic order. We construct an approximately periodic function from the contiguous image data. As with a periodic function, the autocorrelation function of the contiguous data exhibits peaks spaced with a period that is equal to the width of the image. The height is determined by dividing the file's size by the width. We tested the algorithm on 42 medical images and one aerial photo. We created a larger test base by cropping regions of different sizes from the images and sub-sampling the images to several sizes. The algorithm found the correct dimensions in all cases except one, in which the region consisted of periodic data. In this case, the autocorrelation function has peaks due to the periodicity of the data that cannot be discerned from the periodicity of the line lengths, since all the peaks of the autocorrelation function are equal; the algorithm cannot discern the correct width among the ambiguous peaks. In practice, such periodicity will not occur in real medical images.
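The core of the method can be sketched in a few lines: autocorrelate the headerless pixel stream and read the width off the dominant peak lag, after which the height follows from the file size divided by the width. The simple peak-picking rule below is an assumption standing in for the authors' more robust procedure.

```python
import numpy as np

def guess_width(pixel_stream, max_width=4096):
    """Estimate image width from a headerless pixel stream via autocorrelation."""
    x = pixel_stream.astype(float) - pixel_stream.mean()
    n = 1 << (2 * len(x) - 1).bit_length()        # zero-pad: linear, not circular
    acf = np.fft.irfft(np.abs(np.fft.rfft(x, n)) ** 2)[:max_width]
    acf[0] = 0.0                                  # suppress the zero-lag peak
    return int(np.argmax(acf))                    # lag of the dominant peak

# A synthetic 200 x 313 "image" with per-column structure, flattened to a raw
# stream the way a headerless file would be stored.
rng = np.random.default_rng(3)
column_pattern = rng.normal(size=313)
img = column_pattern[None, :] + 0.5 * rng.normal(size=(200, 313))
width = guess_width(img.ravel())
print(width, "->", img.size // width, "lines")    # expected: 313 -> 200
```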
MRI image segmentation using multiscale autoregressive model and 3D Markov random fields
Author(s):
Pierre Martin Tardif;
Andre Zaccarin
Show Abstract
Texture segmentation applied to magnetic resonance images (MRI) is investigated using a multiscale autoregressive (M-AR) model. Since M-AR models need large regions for good parameter estimation, a mixture model using M-AR and constant gray-level components is developed. Region uniformity is obtained using a 3D Markov random field. The segmentation is given by its maximum a posteriori estimate, computed using iterated conditional modes. Two choices of initial segmentation are studied: MLE segmentation with multiresolution segmentation, and a human atlas. The human atlas initial segmentation proves to be closer to the desired segmentation, even though the image from the atlas is not precise.
Motion-compensated temporal filtering of B-mode echocardiograms
Author(s):
Matthias Kroemer;
Robert A. Close;
Julius M. Gardin;
Dietrich Meyer-Ebrecht;
Jack Sklansky
Show Abstract
The purpose of this work is to evaluate the potential of motion-compensated temporal filtering (MCTF) for the reduction of noise in B-mode echocardiograms. MCTF is expected to reduce noise with less blurring of the signal than would be obtained from spatial filtering or direct averaging of sequential frames. We perform motion estimation using the assumption of constant brightness for moving features in the image sequences. We transform the images using the estimated motion to obtain a motion-compensated sequence, and then perform direct averaging of a variable number of frames. Filtered images are obtained using direct averaging, MCTF using estimated motion, MCTF using known motion, and a biased 2D least-mean-squares filter (TDLMS). We compare signal-to-noise ratios (SNR) for each of these methods. Degradation of signal accuracy by blurring is evaluated independently by computing the correlation coefficient between the original and filtered signals. The signal-to-noise ratios and signal accuracy obtained from MCTF are consistently better than those obtained from direct averaging of the images. The application of the biased TDLMS to the same frames yields even higher SNR in many cases but may also increase the signal blurring.
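The principle can be illustrated with a toy sketch under known, global, integer motion: aligning frames before averaging preserves the edge that direct averaging smears. Real B-mode motion is local and subpixel, so this is only schematic, and all values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
clean = np.zeros((64, 64)); clean[24:40, 24:40] = 1.0
shifts = [0, 2, 4, 6]                     # assumed known per-frame motion (px)
frames = [np.roll(clean, s, axis=1) + rng.normal(0, 0.3, clean.shape)
          for s in shifts]

naive = np.mean(frames, axis=0)           # direct averaging: edge is smeared
aligned = [np.roll(f, -s, axis=1) for f, s in zip(frames, shifts)]
mctf = np.mean(aligned, axis=0)           # compensate motion, then average
print("edge contrast, naive vs MCTF:",
      round(float(np.ptp(naive[30, 22:26])), 2),
      round(float(np.ptp(mctf[30, 22:26])), 2))
```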
Color measurement of the mucous membrane using sequential endoscopic images
Author(s):
Shin-ichiroh Kitoh;
Takashi Obi;
Masahiro Yamaguchi;
Nagaaki Ohyama
Show Abstract
In order to give medical doctors objective information about the internal surface of the organs, which will lead to an improvement in the quality of endoscopic diagnosis, it is necessary to develop a method to quantitatively measure colors from the digital images of a CCD endoscope. However, when the internal surface of the human digestive organs is illuminated by the light of the endoscope, images captured by the endoscope become reddish compared to the original object due to reflections from the surrounding red mucosal surface. This phenomenon makes it difficult to measure the object color correctly. We propose a method to compensate for the color shift of the illumination and to estimate the color of the internal mucosal surface from CCD endoscopic images. In this method, the spectral reflectance of the internal surface and the spectral distribution of the illumination are represented by a weighted sum of functions given by statistical analysis of the surface reflectances. Each weighting factor is estimated from sequential images captured by the CCD endoscope. The effectiveness of the proposed method is confirmed by a basic experiment using a CCD endoscope and color charts.
Spatial distribution of residual error in 3D image coregistration: an experimental study
Author(s):
Edward J. Kuchinski;
Henry Rusinek;
Wai-Hon Tsui;
Mony J. de Leon
Show Abstract
This paper analyzes the spatial distribution of the residual error resulting from 3D image coregistration algorithms which use rigid-body transformations. A large number of applications in diagnostic and surgical imaging increasingly use coregistration to follow subtle changes in the size and shape of anatomical structures and lesions. This adoption of coregistration techniques requires a better understanding of their physical properties. Our study involved applying a known 3D transformation to digital phantoms with specially identifiable markers on a lattice. The resulting transformed image was then processed to uniquely identify the special markers in both the original image and the transformed image so that the error distribution could be computed. The identification of landmarks was made possible by reformatting the 3D image without performing gray-level interpolation. Since the special markers could each be uniquely identified in 3D space, error distances could be measured throughout the image and expressed as a function of the distance from the center of the image. The results provide empirical evidence that errors within the interior of a 3D image volume are on average smaller than errors measured at the surface. The empirical results clarify the properties of the spatial distribution of registration errors and can hopefully be used to guide future studies that evaluate the accuracy of image registration techniques.
Heart range determination for SPECT myocardial perfusion studies
Author(s):
Jin-Shin Chou;
Beilei Xu;
JianZhong Qian;
Raymond P. DeVito
Show Abstract
We have developed a new algorithm for determining the cross-sectional range of the heart from a sequence of SPECT projection images. The new algorithm provides an accurate estimate of the heart range for a fully automatic myocardial perfusion SPECT processing system. The limits of the heart range are used for reconstructing transverse images for subsequent analysis. The basis of the approach is a 1D pseudo-motion analysis with three major components: spatial-feature-to-position mapping, knowledge-driven analysis of the heart region, and heart range determination. The main advantage of the algorithm is that the processing is fully automatic, requiring no user intervention, and is less sensitive to the image intensity distribution compared to other existing methods.
Automatic mosaic and display from a sequence of peripheral angiographic images
Author(s):
Jin-Shin Chou;
JianZhong Qian;
Zhenyu Wu;
Helmut F. Schramm
Show Abstract
We have developed a new algorithm that is capable of automatically combining a sequence of peripheral angiographic images into a long-leg display. In peripheral angiography, the field-of-view of the scanner cannot cover the entire peripheral region in one image. Instead, the peripheral region is divided into subregions, and the scanner steps to each location and acquires an image. Adjacent stepping images overlap partially with each other during image acquisition. Therefore, the problem of reconstructing a long-leg image display is transformed into finding the best image match in each adjacent image pair. The new algorithm solves this matching problem by maximizing the 'overlapping ratio' for global matching in each designated image pair. The 'overlapping ratio' is defined as the degree of feature agreement in the overlapping area, calculated from anatomical information of bone and vessels, between two adjacent images. The experimental results indicate that the new approach is robust and generates accurate and reliable image matching. In addition, each leg is fine-tuned to determine individual matching parameters that compensate for possible leg motion. Based on the matching parameters, we generate the long-leg image display in both reduced and full resolutions so that cross-referencing of regions of interest can be done interactively.
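The pairwise matching step can be sketched by scoring every candidate overlap between adjacent images and keeping the maximizer; plain normalized correlation below is an assumed stand-in for the authors' anatomical 'overlapping ratio', and the overlap bounds are arbitrary.

```python
import numpy as np

def best_overlap(top_img, bottom_img, min_ov=10, max_ov=120):
    """Find the row overlap maximizing agreement between adjacent strips."""
    best, best_score = min_ov, -np.inf
    for ov in range(min_ov, max_ov + 1):
        a = top_img[-ov:].ravel().astype(float)   # bottom rows of upper image
        b = bottom_img[:ov].ravel().astype(float) # top rows of lower image
        a, b = a - a.mean(), b - b.mean()
        score = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        if score > best_score:
            best, best_score = ov, score
    return best, best_score

rng = np.random.default_rng(3)
scene = rng.normal(size=(400, 64))               # one long synthetic "leg"
img1, img2 = scene[:240], scene[190:]            # true overlap: 50 rows
print(best_overlap(img1, img2))                  # expected: (50, ~1.0)
```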
Detection and measurement of tubulitis in renal allograft rejection
Author(s):
John B. Hiller;
Qi Chen;
Jesse S. Jin;
Yung Wang;
James L. C. Yong
Show Abstract
Tubulitis is one of the most reliable signs of acute renal allograft rejection. It occurs when mononuclear cells are localized between the lining tubular epithelial cells, with or without disruption of the tubular basement membrane. It has been found that tubulitis takes place predominantly in the regions of the distal convoluted tubules and the cortical collecting system. The image processing tasks are to find the tubule boundaries and to find the locations of the lymphocytes and epithelial cells relative to the tubule boundaries. The requirement for accuracy applies to determining the relative locations of the lymphocytes and the tubule boundaries. This paper shows how the different sizes and gray values of the lymphocytes and epithelial cells simplify their identification and location. Difficulties in finding the tubule boundaries by image processing are illustrated. It is shown how the proximity of epithelial cells to the tubule boundary leads to distortion in the calculated boundary. In tubulitis, however, the lymphocytes and the tubule boundaries are proximate; in these cases the tubule boundary is adequately resolved and the image processing is satisfactory for determining relative location. An adaptive non-linear anisotropic diffusion process is presented for image filtering and segmentation. Multi-layer analysis is used to extract lymphocytes and tubulitis from the images. The paper also discusses grading of tissue using the Banff system; the ability to use computer processing is argued to obviate problems of reproducibility of values for this classification. Finally, alternative approaches to image processing are discussed, with an assessment of their capability for improving the identification of the tubule boundaries.
Algorithm to reduce the complexity of local statistics computation for PET images
Author(s):
Chung-Chieh Jack Huang;
Xiaoli Yu;
J. Zeng;
James R. Bading;
Peter S. Conti
Show Abstract
The evaluation of the local statistical noise in a region of interest (ROI) of reconstructed positron emission tomography (PET) images is necessary for quantitative activity studies. Huesman provided an exact but highly complicated way to calculate the covariances of ROIs in PET images. To reduce the computational complexity of Huesman's method, various approximate formulae for covariance estimation have been developed, but these techniques have limited accuracy. We propose a method which accelerates the covariance calculation while preserving accuracy. This method exploits the circulant property of the coefficient vector of the convolution filter used in filtered backprojection (FBP). The covariance calculation is significantly accelerated by using a table look-up followed by multiplications with the corrected projection data. Results show that, for equal-weighted linear interpolation FBP, the number of computations required for the new covariance computation is about half that of Huesman's method.
Effects of sample size on classifier design: quadratic and neural network classifiers
Author(s):
Heang-Ping Chan;
Berkman Sahiner;
Robert F. Wagner;
Nicholas Petrick;
Joseph T. Mossoba
Show Abstract
Classifier design is one of the important steps in the development of computer-aided diagnosis (CAD) programs. In this study, we performed simulation studies to evaluate the dependence of classifier performance on the design sample size, feature space dimensionality, and classifier complexity. The performance of a classifier is quantified by the area (Az) under the receiver operating characteristic (ROC) curve. Two types of non-linear classifiers, quadratic discriminants and backpropagation neural networks, were examined, and their performances were compared to those of linear discriminant classifiers under similar input conditions. A feature space with multivariate normal distributions for the two classes of feature vectors was assumed. A finite sample (Nt) of the normal and abnormal classes was randomly drawn from the populations. A modified cross-validation resampling scheme was used to design the classifiers. By randomly partitioning the available sample set into a training and a test set, a classifier was trained with the design samples and its performance was evaluated by the resubstitution technique and also by testing with the independent test set. For a finite design sample size, it was found that the classifier performance was biased optimistically by resubstitution and pessimistically by testing with the independent set. When the design sample set is sufficiently large, the Az-versus-1/Nt relationship is approximately linear. The range of Nt in which the linear approximation holds depends on the classifier, the dimensionality of the feature space, and the feature distributions. We analyzed the Az-versus-1/Nt relationship under a variety of input conditions. The study provides useful information for the design of classifiers in the development of CAD algorithms and other classification problems.
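The resubstitution-versus-independent-test bias is easy to reproduce in simulation. The sketch below trains a Fisher linear discriminant on a small sample from two multivariate normal classes and compares the apparent Az with the Az on a large independent set, computing Az as the Mann-Whitney statistic; all sample sizes and class means are arbitrary assumptions.

```python
import numpy as np

def auc(pos, neg):
    """Az as the Mann-Whitney statistic over decision scores."""
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def fisher_lda(x0, x1):
    """Linear discriminant direction estimated from two training samples."""
    sw = np.cov(x0.T) + np.cov(x1.T)
    return np.linalg.solve(sw, x1.mean(0) - x0.mean(0))

rng = np.random.default_rng(7)
dim, n_train = 5, 20
mu = 0.8 * np.ones(dim)
x0 = rng.normal(0.0, 1.0, (n_train, dim))        # "normal" design sample
x1 = rng.normal(mu, 1.0, (n_train, dim))         # "abnormal" design sample
w = fisher_lda(x0, x1)
t0 = rng.normal(0.0, 1.0, (2000, dim))           # large independent test set
t1 = rng.normal(mu, 1.0, (2000, dim))
print("resubstitution Az: ", round(auc(x1 @ w, x0 @ w), 3))   # optimistic
print("independent-test Az:", round(auc(t1 @ w, t0 @ w), 3))  # pessimistic
```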
Detection of skin line in computed radiographs for improved tone scale
Author(s):
Robert A. Senn;
Lori L. Barski
Show Abstract
An algorithm for the detection of skin-line transitions in computed radiographic imagery is presented. Knowledge of the gray-level values associated with the skin-line transition can be utilized by a tone-scaling algorithm to prevent the loss of visibility of the skin line. Features associated with the line profiles of the significant transitions are used to identify the skin-line transition in digital radiographs. A Gaussian maximum-likelihood classifier is used to separate the skin-line transitions from those associated with background-foreground, background-hardware, and other significant transitions. Results are presented for a set of operational computed radiography exams.
Fast fuzzy segmentation of magnetic resonance images: a prerequisite for real-time rendering
Author(s):
Norman R. Smith;
Richard I. Kitney
Show Abstract
In order to obtain a meaningful 3D rendered image from magnetic resonance imaging (MRI) data, it is first necessary to classify each voxel in the data set according to its corresponding tissue type. Existing techniques require long processing times and often need expert interaction. This paper describes a new method for automatic and real-time fuzzy segmentation. A histogram of reduced-resolution gray-scale data is first generated and used as input to a simplified version of the fuzzy c-means (FCM) algorithm. A new color blending scheme is proposed to allow the classified data to be displayed. When processing a 3D MRI data set, the original FCM algorithm took over 5 hours, whereas the new method took less than one second; furthermore, the resulting images from the two methods were indistinguishable. Assessment of the results by an expert radiologist showed that the segmented structures corresponded very accurately with the actual anatomy. In addition, the color-blended display enabled poorly defined boundaries and structures to be clearly identified.
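The speed trick can be sketched directly: run fuzzy c-means over the 256 histogram bins, weighting each bin by its count, so the cost no longer scales with the number of voxels. The class count, fuzzifier, and initialization below are illustrative assumptions, not the authors' simplified FCM.

```python
import numpy as np

def fcm_histogram(levels, counts, c=3, m=2.0, n_iter=100):
    """Fuzzy c-means over histogram bins, weighted by bin counts."""
    centers = np.quantile(np.repeat(levels, counts.astype(int)),
                          np.linspace(0.2, 0.8, c))
    for _ in range(n_iter):
        d = np.abs(levels[:, None] - centers[None, :]) + 1e-9
        u = d ** (-2.0 / (m - 1.0))
        u /= u.sum(axis=1, keepdims=True)       # memberships per gray level
        w = (u ** m) * counts[:, None]          # count-weighted memberships
        centers = (w * levels[:, None]).sum(0) / w.sum(0)
    return centers, u

rng = np.random.default_rng(5)
voxels = np.concatenate([rng.normal(mu, 8, 20000) for mu in (40, 100, 160)])
counts, edges = np.histogram(voxels, bins=256, range=(0, 255))
levels = 0.5 * (edges[:-1] + edges[1:])
centers, u = fcm_histogram(levels, counts.astype(float))
print(np.sort(centers))                         # expected: near 40, 100, 160
```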
Correction method for shift-variant characteristics of the SPECT measurement system
Author(s):
Masahiro Mimura;
Takashi Obi;
Masahiro Yamaguchi;
Nagaaki Ohyama
Show Abstract
The SPECT imaging system has shift-variant characteristics due to nonuniform attenuation of gamma rays, collimator design, scattered photons, etc. In order to produce quantitatively accurate SPECT images, these shift-variant characteristics should be compensated for in reconstruction. This paper presents a method to correct the shift-variant characteristics based on a continuous-discrete mapping model. In the proposed method, the projection data are modified using sensitivity functions so that the filtered backprojection (FBP) method can be applied. Since the projection data are assumed to be acquired by narrow ray-sum beams in the FBP method, the narrow ray-sum beams are approximated by a weighted sum of sensitivity functions of the measurement system, and the actual projection data are then corrected by the weighting factors. Finally, the FBP method is applied to the corrected projection data and a SPECT image is reconstructed. Since the proposed method requires the inversion of smaller matrices than conventional algebraic methods, the amounts of calculation and memory space are smaller, and the stability of the calculation is greatly improved as well. Results of numerical simulations are also demonstrated.
Uncertainties in tomographic reconstructions based on deformable models
Author(s):
Kenneth M. Hanson;
Gregory S. Cunningham;
Robert J. McKee
Show Abstract
Deformable geometric models fit very naturally into the context of Bayesian analysis. The prior probability of boundary shapes is taken to be proportional to the negative exponential of the deformation energy used to control the boundary. This probabilistic interpretation is demonstrated using a Markov-chain Monte Carlo (MCMC) technique, which permits one to generate configurations that populate the prior. One of many uses for deformable models is to solve ill-posed tomographic reconstruction problems, which we demonstrate by reconstructing a two-dimensional object from two orthogonal noisy projections. We show how MCMC samples drawn from the posterior can be used to estimate uncertainties in the location of the edge of the reconstructed object.
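The probabilistic interpretation can be illustrated with a toy Metropolis sampler that draws boundary shapes from p(shape) ∝ exp(−E_deform), here parameterizing the boundary by radial offsets around a circle with a second-difference bending energy; all constants and the energy form are illustrative assumptions.

```python
import numpy as np

def deformation_energy(r, alpha=50.0):
    """Discrete bending energy of radial offsets around a closed contour."""
    second_diff = np.roll(r, -1) - 2.0 * r + np.roll(r, 1)
    return alpha * (second_diff ** 2).sum()

rng = np.random.default_rng(6)
n, n_steps, sigma = 64, 20000, 0.05
r = np.ones(n)                                   # start from a unit circle
energy = deformation_energy(r)
samples = []
for step in range(n_steps):
    i = rng.integers(n)
    proposal = r.copy()
    proposal[i] += rng.normal(0.0, sigma)        # perturb one boundary point
    e_new = deformation_energy(proposal)
    if rng.random() < np.exp(min(0.0, energy - e_new)):   # Metropolis rule
        r, energy = proposal, e_new
    if (step + 1) % 2000 == 0:
        samples.append(r.copy())                 # configurations from the prior
print(len(samples), "prior samples; last energy:", round(float(energy), 3))
```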
Parameter estimation in deformable models using Markov chain Monte Carlo
Author(s):
Vikram Chalana;
David R. Haynor;
Paul D. Sampson;
Yongmin Kim
Show Abstract
Deformable models have gained much popularity recently for many applications in medical imaging, such as image segmentation, image reconstruction, and image registration. Such models are very powerful because various kinds of information can be integrated together in an elegant statistical framework. Each such piece of information is typically associated with a user-defined parameter. The values of these parameters can have a significant effect on the results generated using these models. Despite the popularity of deformable models for various applications, not much attention has been paid to the estimation of these parameters. In this paper we describe systematic methods for the automatic estimation of these deformable model parameters. These methods are derived by posing the deformable models as a Bayesian inference problem. Our parameter estimation methods use Markov chain Monte Carlo methods for generating samples from highly complex probability distributions.
Automatic brain segmentation and validation: image-based versus atlas-based deformable models
Author(s):
Georges B. Aboutanos;
Benoit M. Dawant
Show Abstract
Due to the complexity of the brain surface, there is at present no segmentation method that works automatically and consistently on arbitrary 3-D magnetic resonance (MR) images of the head, and there is a definite lack of validation studies related to automatic brain extraction. In this work we present an image-based automatic method for brain segmentation and use its results as input to a deformable model method, which we call an image-based deformable model. Combining image-based methods with a deformable model can lead to a robust segmentation method without requiring registration of the image volumes into a standardized space, the automation of which remains challenging for pathological cases. We validate our segmentation results on 3-D MP-RAGE (magnetization-prepared rapid gradient-echo) volumes for the image model prior to and after deformation, and compare them to an atlas model prior to and after deformation. Our validation is based on comparison of volume measurements with manually segmented data. Our analysis shows that the improvements afforded by the deformable model methods are statistically significant; however, there are no significant differences between the image-based and atlas-based deformable model methods.