Perception and representation of medical images
Author(s):
Harold L. Kundel
A model for image perception is described in which the percept is first constructed from visual primitives and then deconstructed into diagnostic features that are suitable for symbolic manipulation. The relationship of the model to the psychophysics of medical imaging is discussed in terms of the relationship between image appearance and the observer response. Some methods for modifying the image to aid visual perception are discussed. They include image enhancement, computer image analysis, and feedback assisted visual search. Some of the perceptual aspects of the search component of the detection-location task are described. Finally, the implications of the model are presented for those who would assist image perception or utilize visual feature extraction.
Fast surface-fitting algorithm for 3D image registration
Author(s):
Wai-Hon Tsui;
Henry Rusinek;
Peter Van Gelder;
Sergey Lebedev
The surface-fitting method is one of the most promising techniques for 3-D image registration. It accurately registers two tomographic scans by matching the surface of a common object appearing on both scans. However, the goodness-of-fit measure used in the original algorithm is difficult to compute. We have developed a more efficient method that measures distance between surfaces in 2-D along the tomographic planes. Through a series of simulation studies, we demonstrate that the new method does not compromise the accuracy of the original algorithm.
Feature-based image registration for digital subtraction angiography
Author(s):
Ping Hua;
Isaac Fram
Digital subtraction is a powerful technique for enhancing vascular features in digital angiography. However, it requires accurate estimation of the relative motion between a contrast and a mask image. We have developed an algorithm that uses the edge features in the contrast and mask images to estimate the relative displacement. The edge detection method and the use of deterministic sign change (DSC) as a similarity measure are discussed. We use a synthesized image and clinical images to demonstrate that the edge-based method is more robust and accurate than those using the native images. The algorithm is efficient in computation and can be easily implemented in hardware for real-time applications.
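The deterministic sign change (DSC) measure mentioned above can be illustrated with a small sketch (the pattern amplitude and the horizontal-only counting are my illustrative choices, not the paper's exact formulation): a fixed alternating pattern is added to the difference image, and the number of sign changes along rows is counted, which peaks when the two images are well aligned.

```python
import numpy as np

def dsc_similarity(contrast, mask, amplitude=4.0):
    # Add a deterministic +/- checkerboard pattern to the subtraction so
    # that, at good alignment, the difference image oscillates around zero
    # and produces many sign changes along each row.
    pattern = amplitude * ((-1.0) ** np.indices(contrast.shape).sum(axis=0))
    diff = contrast.astype(float) - mask.astype(float) + pattern
    signs = np.sign(diff)
    # Count horizontal sign changes between adjacent pixels.
    return int(np.sum(signs[:, 1:] * signs[:, :-1] < 0))
```

At exact alignment the subtraction leaves only the pattern, so every adjacent pair changes sign; residual structure from misalignment suppresses the count.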
Surface-based registration of 3D medical images
Author(s):
Andre M. F. Collignon;
Dirk Vandermeulen;
Paul Suetens;
Guy Marchal
This paper gives an overview of existing surface-based registration methods. Properties of commonly used classification schemes for registration algorithms are divided into two groups: external attributes distinguish surface-based from non-surface-based registration algorithms, and internal attributes distinguish surface-based registration algorithms from each other. An overall comparison is performed based on a quality-constrained cost analysis, and the limits of the algorithms' applicability to neurosurgical therapy planning systems are investigated.
Estimation of accuracy in localizing externally attached markers in multimodal volume head images
Author(s):
Calvin R. Maurer Jr.;
Jennifer J. McCrory;
J. Michael Fitzpatrick
We recently examined the registration of multimodal volume head images using extrinsic markers. We decomposed the problem into three subproblems: finding the positions of at least three non-collinear markers in the two images, matching corresponding markers in the two image spaces, and estimating the translation and rotation parameters of the rigid-body transformation that maps one space into the other. We call the calculation of the geometric centers fiducial localization. Knowing the fiducial localization error is important because it is a major determinant of target registration error and because it provides a useful way to compare localization algorithms. In the present work we present a technique for experimentally estimating the error associated with localizing a fiducial in a tomographic image. Our method involves acquiring a volume image of a large number of markers with precisely known positions in the physical space of a phantom containing the markers, localizing the fiducials in the image, and registering the localized fiducial positions in image space to the precisely known positions in physical space. Fiducial localization error is estimated from the fiducial registration error, i.e., the distance between corresponding fiducial positions after registration.
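The rigid-body registration step and the fiducial registration error (FRE) it yields can be sketched with the standard SVD (Procrustes) solution; the function names are mine, not the paper's.

```python
import numpy as np

def rigid_register(src, dst):
    # Least-squares rigid (rotation + translation) fit mapping src -> dst,
    # via the SVD solution on the centered point sets.
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

def fiducial_registration_error(src, dst, R, t):
    # RMS distance between corresponding fiducials after registration.
    residual = dst - (src @ R.T + t)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

With perfectly localized markers the FRE is zero; localization error in the marker centroids inflates it, which is the relationship the abstract exploits.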
Multiresolution registration for volume reconstruction in microscopical applications
Author(s):
Franco Fontana;
Andrea Crovetto;
Mario Bergognoni;
Anna Maria Casali
This paper describes a software method for the analysis of 3D histological volumes from microscopical series of consecutive sections. Image segmentation and geometrical registration are the two processes required by the present approach. The former is a necessary prerequisite for finding correspondences among the various slices of a single object; the latter deals with the actual determination of the geometrical parameters that enable one to correctly stack the sections and reconstruct the original 3D volume. The same spatial sequence is acquired at different resolutions to better detect any internal structure. Once the sections have been registered, it is possible to follow the topological evolution of the microstructures inside the space of the tissue sample.
A novel method for three-dimensional reconstruction of coronary arterial trees from biplane angiograms
Author(s):
Temel Kayikcioglu;
Sunanda Mitra
A novel method of visualization of three-dimensional (3-D) arterial trees from the arterial cross-sections estimated from biplane angiograms is described. For computer-generated data, the cross-sectional areas can be precisely computed using an elliptical model. However, two views such as represented in biplane angiograms result in two ellipses of different eccentricities having the same area. Therefore the 3-D reconstructed coronary arteries can be visualized in two somewhat different shapes. At least three views are needed for unique 3-D reconstruction of the coronary artery when an elliptical model is used. A parametric model for the observed intensity distribution of an arterial cross-section is developed and used to estimate not only the cross-sectional areas but also the parameters of the ellipse, which are essential in 3-D reconstruction of an arterial segment. This model takes into account background intensity, noise and blurring. The performance of this model has been tested on computer-generated and actual data and compared to those of other methods for estimating arterial dimensions. Furthermore, 3-D reconstruction of arterial segments using an elliptical model from computer-generated biplane angiogram data has shown excellent results. The accuracy of the computed cross-sectional areas of coronary arterial trees and their 3-D visualization should provide better information for diagnostic decisions and the management of cardiac patients.
Surface construction and contour generation from volume data
Author(s):
Takanori Nagae;
Takeshi Agui;
Hiroshi Nagahashi
New surface construction algorithms are presented. The idea is based on the Marching Cubes algorithm, and the aim of the modification proposed in the present article is to provide topologically appropriate solutions that are guaranteed to produce triangulated closed surfaces as the equiscalar surface within a volume. There are roughly three variations of the modification: 1-connective, √2-connective, and Adaptive Marching Cubes. Surface construction algorithms are also applicable to slice-by-slice solid modeling, such as laser stereolithography. An algorithm for obtaining contours on an intermediate slice interpolated between two slices is also shown.
Estimation of surface shape from endoscopic image sequence
Author(s):
Norihiko Oda;
T. Nonami;
Masahiro Yamaguchi;
Nagaaki Ohyama;
Toshio Honda
In this paper, an approach is presented for 3-D surface shape estimation from monocular endoscopic images. The algorithm uses an image sequence acquired by shifting the camera. In the first stage, the camera movements are estimated and the surface shape is roughly estimated by a method similar to stereo vision. In the second stage, the surface shape is refined by a multiview method. This multiview system selects several images from the image sequence and determines the corresponding view positions. Since each view contains noise, the algorithm solves for the depth distribution by minimizing an energy function such that all regions of the several images are matched as closely as possible in a least-squares sense while a smoothness constraint is applied to the solution. The contribution ratio of each view to the estimation is controlled by a weighting function determined by the geometric relationship between the surface point and each viewpoint. The principle of the method and experimental results are presented.
Optical flow interpolation of serial slice images
Author(s):
Winston L. Williams;
William A. Barrett
Optical flow has been used for matching or tracking of individual image objects through a time sequence of images and is applied to the problem of image interpolation by treating the serial slice images as a spatial sequence. Calculation of optical flow between two images results in a 'velocity vector map' indicating the relative displacement between similar structures in both images. Thus, individual vectors are used to match points in adjoining images and interpolate between their corresponding intensities. The interpolated intensity is then redistributed and normalized by unit area in the interpolated slice. The accuracy of optical flow interpolation is evaluated by generating an interpolated slice, i', from original slices i-1 and i+1 and then comparing i' with the original middle scanned slice, i. Optical flow interpolation compares favorably to linear interpolation both visually and quantitatively. Quantitative comparison, i-i', shows a 253% improvement over linear interpolation for synthetic binary images and a 13% improvement over linear interpolation for grayscale CT images.
Shape-based interpolation of gray-scale serial slice images
Author(s):
Russell R. Stringham;
William A. Barrett
A new algorithm for interpolation between grayscale serial slice images is presented. The new algorithm (SBIG) applies shape-based interpolation to CT and MRI images and outputs a dense image volume. Unlike algorithms, such as linear interpolation, which rely only on pixel position, SBIG makes essential use of image content to interpolate between pixels and structures of similar shape and intensity which may differ in size and position from slice to slice. A quantitative comparison shows that SBIG significantly outperforms linear interpolation, particularly as the distance between slices increases. More importantly, while linear interpolation demonstrates characteristic low-pass smearing of object edges and detail, high frequency image features such as object edges and anatomical structures are preserved and well approximated with SBIG. As a result, reconstructed coronal slices from the dense image volume using SBIG demonstrate significantly smoother representation of anatomical structures and less 'staircasing' than those created using linear interpolation.
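SBIG itself extends shape-based interpolation to grayscale data; as an illustrative sketch of the binary idea it builds on, here is interpolation via averaged signed distance maps. The brute-force distance computation is suitable only for tiny images, and both masks are assumed to contain foreground and background pixels; the helper names are hypothetical.

```python
import numpy as np

def signed_distance(mask):
    # Brute-force signed distance: positive inside the shape, negative
    # outside.  Assumes `mask` contains both True and False pixels.
    ys, xs = np.indices(mask.shape)
    pts = np.stack([ys, xs], axis=-1).astype(float)
    def dist_to(set_pts):
        d = np.linalg.norm(pts[:, :, None, :] - set_pts[None, None, :, :], axis=-1)
        return d.min(axis=2)
    d_out = dist_to(np.argwhere(~mask))  # distance to nearest background pixel
    d_in = dist_to(np.argwhere(mask))    # distance to nearest object pixel
    return np.where(mask, d_out, -d_in)

def shape_interpolate(mask_a, mask_b):
    # The interpolated slice is the positive set of the averaged distances,
    # so shapes that differ in size/position morph smoothly between slices.
    return (signed_distance(mask_a) + signed_distance(mask_b)) / 2.0 > 0
```

Interpolating between a small and a large square yields an intermediate-size square, unlike linear intensity averaging, which would produce a ghosted double image.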
Biomagnetic imaging of three-dimensional current distributions
Author(s):
Ceon Ramon;
Michael G. Meyer;
Lee L. Huntsman
Biomagnetic reconstruction of a current distribution located in a three-dimensional space from the sampled magnetic field was performed. Simulations were performed on a circular current distribution. The magnetic field was sampled in a planar surface using the Biot-Savart Law. A cubic volume was selected as the reconstruction space and divided into cubic voxels. Reconstruction was performed to compute the current density in each voxel by applying a pseudo-inverse technique. The initial reconstruction showed a rough image of the current buried in noise. The image was enhanced by thresholding the background noise. The final image showed a good resemblance to the original shape.
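A minimal sketch of the pseudo-inverse reconstruction step follows. The lead-field matrix here is random, standing in for one a real system would compute from the Biot-Savart law, and the problem sizes, noise level, and threshold are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_sensors, n_voxels = 60, 25

# Hypothetical lead-field matrix relating voxel currents to sensor readings
# (a real system would fill this from sensor/voxel geometry via Biot-Savart).
L = rng.normal(size=(n_sensors, n_voxels))

j_true = np.zeros(n_voxels)
j_true[[6, 7, 11, 12]] = 1.0                         # a small "loop" of current
b = L @ j_true + 0.01 * rng.normal(size=n_sensors)   # noisy planar field samples

# Minimum-norm / least-squares reconstruction via the pseudo-inverse.
j_hat = np.linalg.pinv(L) @ b

# Enhance the rough image by thresholding away background noise.
j_img = np.where(np.abs(j_hat) > 0.5 * np.abs(j_hat).max(), j_hat, 0.0)
```

With more sensors than voxels the pseudo-inverse gives the least-squares solution, so the thresholded image recovers the active voxels despite the measurement noise.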
Robust registration in case of different scaling
Author(s):
Georgi J. Gluhchev;
Shlomo Shalev
The problem of robust registration in the case of anisotropic scaling has been investigated. Registration of two images using corresponding sets of fiducial points is sensitive to inaccuracies in point placement due to poor image quality or non-rigid distortions, including possible out-of-plane rotations. An approach aimed at the detection of the most unreliable points has been developed. It is based on the a priori knowledge of the sequential ordering of rotation and scaling. A measure of guilt derived from the anomalous geometric relationships is introduced. A heuristic decision rule allowing for deletion of the most guilty points is proposed. The approach allows for more precise evaluation of the translation vector. It has been tested on phantom images with known parameters and has shown satisfactory results.
Pseudo-correlation: a fast, robust, absolute, gray-level image alignment algorithm
Author(s):
Thomas J. Radcliffe;
Rasika Rajapakshe;
Shlomo Shalev
A new image alignment algorithm--pseudo-correlation--has been developed based on the application of Monte Carlo techniques to the calculation of a cross-correlation integral for grey-scale images. It has many advantages over cross-correlation: it is at least a factor of ten faster than FFT-based cross-correlation, and requires eight times less memory. Its high speed allows for the search space of geometric transformations between images to include magnification and rotation as well as translations without the search time becoming too long. It allows noise to be taken into account, making calculation of a robust, absolute probability of good alignment possible. It is relatively insensitive to differences in quality between images. This paper describes the pseudo-correlation algorithm and presents the results of tests of the effects of contrast enhancement and noise on the algorithm's performance. These tests show that the algorithm is well-suited to the task of automated alignment of very low contrast images from video electronic portal imaging devices (VEPIDs).
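The core Monte Carlo idea can be sketched as follows: score each candidate shift by the product of intensities at a random subset of pixel pairs rather than over the whole image, so each evaluation costs only a few hundred multiplies. Function names, sample counts, and the search range are my own illustration, not the authors' implementation.

```python
import numpy as np

def pseudo_correlation(a, b, shift, n_samples=500, rng=None):
    # Monte Carlo estimate of the cross-correlation between image `a` and
    # image `b` displaced by `shift`, from a random subset of valid pixels.
    rng = np.random.default_rng(rng)
    dy, dx = shift
    h, w = a.shape
    ys = rng.integers(max(0, -dy), min(h, h - dy), size=n_samples)
    xs = rng.integers(max(0, -dx), min(w, w - dx), size=n_samples)
    return float(np.mean(a[ys, xs] * b[ys + dy, xs + dx]))

def best_shift(a, b, search=3, n_samples=500, seed=0):
    # Exhaustive search over small translations; reusing the same seed for
    # every shift (common random numbers) keeps the scores comparable.
    shifts = [(dy, dx) for dy in range(-search, search + 1)
                       for dx in range(-search, search + 1)]
    scores = [pseudo_correlation(a, b, s, n_samples, seed) for s in shifts]
    return shifts[int(np.argmax(scores))]
```

The cheap per-shift cost is what makes it feasible to widen the search space to rotations and magnifications, as the abstract notes.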
Computer-aided techniques for aligning interleaved sets of nonidentical medical images
Author(s):
Evan D. Morris;
Gary James Muswick;
Edward S. Ellert;
Robert N. Steagall;
Peter F. Goyer;
William E. Semple
We have developed an X-window based, interactive manual technique for aligning medical images of the brain. Our methods were designed to allow easy correction of artifacts that resulted from motion during the acquisition of interleaved sets of MR images. Real-time feedback about the alignment of the data volume proved helpful to the user in obtaining a satisfactory correction. This feedback was possible by focusing on a limited number of slices at one time. Contrary to intuition, the observed motion artifact was found to occur primarily in one direction. Eliminating this artifact, however, required sub-pixel translation of the images. We also did some preliminary work on automated extensions of our manual alignment technique. These automated algorithms utilized mathematical morphology for segmenting the brain and a 2-dimensional implementation of the Principal Axes technique for re-alignment of the segmented images.
Pattern classification approach to segmentation of chest radiographs
Author(s):
Michael F. McNitt-Gray;
James W. Sayre;
H. K. Huang;
Mahmood Razavi M.D.
In digital chest radiography, the goal of segmentation is to automatically and reliably identify anatomic regions such as the heart and lungs. Aids to diagnosis such as automated anatomic measurements, methods that enhance display of specific regions, and methods that search for disease processes all depend on a reliable segmentation method. The goal of this research is to develop a segmentation method based on a pattern classification approach. A set of 17 chest images was used to train each of the classifiers. The trained classifiers were then tested on a different set of 16 chest images. The linear discriminant correctly classified greater than 70% of the pixels, the k-nearest-neighbor classifier greater than 70%, and the neural network greater than 76% of the pixels from the test images. Preliminary results are favorable for this approach. Local features do provide much information, but further improvement is expected when additional information, such as location, can be incorporated.
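A minimal k-nearest-neighbor classifier of the kind evaluated above might look like this (brute-force distances; the feature vectors and k are illustrative, since the paper's exact local features are not specified here):

```python
import numpy as np

def knn_classify(train_feats, train_labels, test_feats, k=3):
    # Brute-force k-nearest-neighbour majority vote in feature space.
    # Distances from every test sample to every training sample:
    d = np.linalg.norm(test_feats[:, None, :] - train_feats[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]      # indices of k closest samples
    votes = train_labels[nearest]               # their class labels
    return np.array([np.bincount(v).argmax() for v in votes])
```

In the segmentation setting, each pixel would contribute a feature vector (e.g., intensity plus local texture measures) and the vote assigns it to an anatomic region such as lung or mediastinum.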
Interactive segmentation of brain tumors in MR images using 3D region growing
Author(s):
James E. Cabral Jr.;
Keith S. White M.D.;
Yongmin Kim;
Eric L. Effmann
We have developed an interactive software package designed to assist the radiologist in tissue segmentation for volumetric determination. Our algorithms treat a group of stacked images as a single volume, extending the concepts of traditional region growing and thresholding into three dimensions. Preliminary results from phantom studies indicate that, relative to conventional manual tracing techniques, the benefits of 3D region growing and thresholding include reduced time expenditure and improved determination of contiguous regions. However, heterogeneous signal and poor contrast may limit the usefulness of the automated segmentation algorithm alone. Therefore, several manual editing tools are included. When the automated segmentation tools are coupled with simple editing tools, the accuracy of tissue segmentation in the phantom is equal to or better than that of manual tracing alone, with significant time savings.
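The 3D extension of region growing described above can be sketched as a 6-connected flood fill with an intensity window; this is a simplified stand-in for the package's algorithm, with the seed and thresholds standing for the user's interactive input.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, low, high):
    # Grow a 6-connected region from `seed`, accepting voxels whose
    # intensity lies within [low, high] across the whole stacked volume.
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask
    mask[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```

Growing across slices in this way is what gives the improved determination of contiguous regions relative to slice-by-slice tracing; tumor volume then follows from the voxel count times the voxel volume.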
Direct segmentation in 3D and its application to medical images
Author(s):
Xiaohan Yu;
Juha Yla-Jaaski;
Outi Sipila;
Toivo E. Katila
In this paper a new algorithm for direct 3D segmentation is presented. The algorithm combines region growing, edge detection and a new edge preserving smoothing method. The combined method helps to avoid characteristic segmentation errors which occur when using region growing or edge detection separately. Boundary modification is also introduced to overcome noise in the boundary location and ensure more realistic region boundaries. The algorithm has been validated through application to medical image visualization.
Directional adaptive deformable models for segmentation with application to 2D and 3D medical images
Author(s):
Nicolas F. Rougon;
Francoise J. Preteux
In this paper, we address the problem of adapting the functions controlling the material properties of 2D snakes, and show how introducing oriented smoothness constraints results in a novel class of active contour models for segmentation which extends standard isotropic inhomogeneous membrane/thin-plate stabilizers. These constraints, expressed as adaptive L2 matrix norms, are defined by two 2nd-order symmetric and positive definite tensors which are invariant with respect to rigid motions in the image plane. These tensors, equivalent to directional adaptive stretching and bending densities, are quadratic with respect to 1st- and 2nd-order derivatives of the image intensity, respectively. A representation theorem specifying their canonical form is established and a geometrical interpretation of their effects is developed. Within this framework, it is shown that, by achieving a directional control of regularization, such non-isotropic constraints consistently relate the differential properties (metric and curvature) of the deformable model with those of the underlying intensity surface, yielding a satisfying preservation of image contour characteristics.
Model-based segmentation of the brain from 3D MRI using active surfaces
Author(s):
John W. Snell;
Michael B. Merickel;
John C. Goble;
James R. Brookeman;
Neal F. Kassell M.D.
Traditional, bottom-up segmentation approaches have proven inadequate when faced with the anatomical complexity and variability exhibited by biological structures such as the brain. A 3-D extension to the 'snakes' algorithm has been implemented and used to segment the skin and brain surfaces from MRI image volumes of the head in an effort to investigate model-based, top-down segmentation strategies. These active surfaces allow closed surfaces of complex objects to be recovered using a priori knowledge in the form of initial conditions and applied external 'forces'. Preliminary results suggest that active surfaces may be initialized according to a preconceived model and adaptively deformed by image data to recover the desired object surface.
Segmentation of dual-echo MR images using neural networks
Author(s):
Jin-Shin Chou;
Chin-Tu Chen;
Wei-Chung Lin
We have integrated Kohonen's self-organizing feature maps with the idea of fuzzy sets and applied this model to the problem of dual-echo MR image segmentation. In the proposed method, a Kohonen network provides the basic structure and update rule, whereas fuzzy membership values control the learning rate. The calculation of the learning rate is based on a fuzzy clustering algorithm. In the experiments, spatially registered T2-weighted and proton density MR data are used as input images. Every input image is first converted to a 1-D vector, and two such vectors from the two images are then combined to form a 2-D matrix. The initial weights are then fed into the model to start the iterative process. The process terminates when the stopping criterion is met. The major strength of the proposed approach is its stability and unsupervised nature. The experimental results show that the speed of convergence is faster than that of the fuzzy clustering method and the conventional region-based segmentation methods.
Cardiac MR image segmentation using deformable models
Author(s):
Ajit Singh;
Lorenz von Kurowski;
Ming-Yee Chiu
We describe a deformable model based technique for cardiac MRI segmentation. The technique assumes that the data is available in the form of 2-D slices of the heart. An initial approximation of the boundary of the object of interest, say, the left ventricle, is specified in one of the slices via a user interface. The initial contour deforms to a contour with minimum energy, which is defined to be the correct ventricular boundary. This contour is then propagated to other slices, both in space and in time, to get the segmented volume at various instants in the cardiac cycle. This work is a part of our ongoing effort on cardiac MR analysis. The segmentation algorithm discussed here is intended to be a preprocessing stage for our work on volume computation and cardiac wall motion analysis. We have tested the segmentation algorithm extensively on over 500 images, and our clinical collaborators have found the results to be acceptable, both qualitatively and quantitatively. Our system is being installed for use in routine clinical practice.
Multiseed interactive segmentation for 3D visualization using different perspective angles for histological structures
Author(s):
Franco Fontana;
Paolo Virgili;
Gianni Vottero;
Anna Maria Casali
This paper presents an interactive algorithm able to segment several singly-connected objects and to yield various alternative results, thus enabling the user to select the best one in accordance with his knowledge and requirements. Moreover, we also describe a way to reconstruct and visualize the 3D volume of a segmented structure using different perspective angles. Once the object has been reconstructed by superimposing the contours (extracted in the segmentation phase), different views of the object can be obtained by rotating the original plane and creating a perspective effect that allows one to better interpret the evolution in space of the object of interest. Results are reported from applying the system to microscopical images of histological structures.
Segmentation schemes for knowledge-based construction of individual atlases from slice-type medical images
Author(s):
Jeffrey Stanier;
Isabelle Bloch;
Morris Goldberg
To produce an individual atlas from a set of slice-type medical images it is necessary to segment and label the structures contained in the images. Although this problem can be solved directly in a top-down fashion the volume of data makes a direct top-down approach difficult. Instead, a data-driven segmentation can be used to produce an intermediate data structure which is more easily searched using a knowledge-driven technique. Of the many methods used for 2-D and 3-D image segmentation only some produce an output which can be directly utilized by a top-down search technique. The segmentation should allow for data abstraction so decisions can be made quickly when comparing regions. The data structure of the segmentation should also allow for easy merging and splitting of the volumes of interest (VOIs) as the search for the best match to the model is performed. Lastly, the data structure should allow for display of the 3-D structure of the atlas and the data so an efficient user interface can be built. Two segmentation schemes are presented: one which uses a region growing approach to generate VOIs and another which uses a gradient-based segmentation approach.
Segmentation and display of hepatic vessels and metastases
Author(s):
Kenneth R. Hoffmann;
Shiuh-Yung James Chen;
Martti Kormano M.D.;
Richard A. Coulden M.D.
In order to visualize the spatial relationships between three-dimensional structures within the liver, we have developed a method for segmentation of the liver and employed non-linear projection techniques to display the hepatic structures, in particular the hepatic vessels and metastatic tumors, from a number of angles. Ultrafast computed tomography was employed to obtain cross-sectional images of the liver as contrast material flowed through the hepatic vessels. The technique for segmentation of the liver is semi-automated, based on histogram analysis and region growing, and allows manual correction of the boundaries of the liver. The pixels composing the liver are defined to lie within the closed contour of the determined boundary. Projections of the vessels were obtained using a maximum-intensity-projection technique and an integration technique which employed a lower-bound threshold. Projections of metastases were obtained using a minimum-intensity-projection technique and an integration technique which employed an upper-bound threshold. Views of the structures were then generated at a number of angles relative to the sagittal plane so as to assist in visualization of the three-dimensional spatial relationships.
Computation of motion using generalized moment transformations
Author(s):
Robert A. Close;
Shinichi Tamura;
Hiroaki Naito;
Koushi Harada;
Takahiro Kozuka
The intensity of medical images often represents a quantity which is conserved during motion. Hence the motion which occurs between sequential images can be viewed as a coordinate transformation. If edge effects can be neglected, the form of the transformation can be determined from the generalized moments of the two images. The equations which transform arbitrary generalized moments from an initial image to a target image are expressed as a function of the displacement field. The apparent displacement field or optical flow is then computed by the method of convex projections, utilizing the functional derivatives of the linearized moment equations. Smoothness is ensured by using sinusoidal moments and building up the solution from low to high spatial frequencies. The technique is demonstrated using simple examples and actual medical images. It is expected that this method will be useful for analysis of heart motion and blood flow.
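One way to picture the sinusoidal generalized moments is the sketch below, where M[u, v] weights the image by products of sine basis functions; the exact basis, normalization, and boundary handling in the paper may differ, and the function name is mine.

```python
import numpy as np

def sinusoidal_moments(image, n_freq=3):
    # Generalized moments under a sinusoidal basis:
    #   M[u, v] = sum_{y,x} image[y, x] * sin(pi*(u+1)*y/H) * sin(pi*(v+1)*x/W)
    # Low (u, v) capture coarse structure; higher frequencies add detail,
    # which is why building the flow solution up from low to high spatial
    # frequencies keeps the recovered displacement field smooth.
    h, w = image.shape
    y = np.arange(h)[:, None] / h
    x = np.arange(w)[None, :] / w
    M = np.empty((n_freq, n_freq))
    for u in range(n_freq):
        for v in range(n_freq):
            M[u, v] = np.sum(image * np.sin(np.pi * (u + 1) * y)
                                   * np.sin(np.pi * (v + 1) * x))
    return M
```

Because motion changes these moments in a predictable way, comparing the moment sets of two sequential images constrains the coordinate transformation between them.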
Aspects of computer vision in surgical endoscopy
Author(s):
Vincent Rodin;
Alain Ayache;
N. Berreni
This work is related to a project of medical robotics applied to surgical endoscopy, led in collaboration with Doctor Berreni of the Saint Roch nursing home in Perpignan (France). Following Doctor Berreni's advice, two aspects of endoscopic color image processing have been addressed: (1) aiding diagnosis through automatic detection of diseased areas after a learning phase, and (2) 3D reconstruction of the examined cavity using a zoom.
Fast algorithm for radiation field edge detection
Author(s):
Georgi J. Gluhchev;
Shlomo Shalev
The efficacy of radiation treatment depends strongly on the accuracy of the treatment field setup. On-line portal imaging provides the possibility of detecting and correcting field placement errors before a significant radiation dose is delivered. A heuristic algorithm is described for fast field contour delineation based on the properties of the portal images, and its accuracy and reproducibility are tested on a number of artificial and real images. Its speed depends on the size of the field and varies between 0.1 s and 0.3 s for 480 X 512 pixel images for an implementation on a PC-386 computer. A correction is proposed for local distortions in the contour line based on an estimation of the oriented curvature. Experimental results with real portal images demonstrate the reliability of the approach and its suitability for clinical implementation.
Enhancement of x-ray fluoroscopy images
Author(s):
Ajit Singh;
David L. Wilson;
Richard Aufrichtig
We describe a recursive, intensity compensation technique to enhance x-ray image sequences by reducing noise while minimizing motion blur. Our method incorporates a Poisson noise model to account for quantum limited fluoroscopic imaging. Further, we recognize that motion in x-ray fluoroscopy results in both lateral movements of 'constant' pixel values when a catheter moves across the screen, and changes in gray-scale value at a given pixel location as a catheter moves across it. Our model of the time varying image sequences assumes a composition of two processes: (1) an underlying primary process in which the intensity is stationary, or slowly varying, and (2) a secondary process characterized by motion discontinuities. Unlike previous motion compensated filtering methods, our intensity compensation method does not require computation of flow fields or image warping. Hence the method is much less computationally demanding and much easier to implement in real time. We have applied the technique to enhance a wide variety of fluoroscopic medical image sequences from cardiac and general angiography. In a representative 60-frame image sequence, the method reduces noise variance by 44% after just six frames, with no motion blur.
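A much-simplified sketch of motion-adaptive recursive filtering in this spirit: the per-pixel gain jumps toward 1 where the frame difference suggests a motion discontinuity, so moving catheters are not blurred, and stays low elsewhere so stationary regions are averaged heavily. The gains and threshold are illustrative, and the paper's Poisson-model details are omitted.

```python
import numpy as np

def adaptive_recursive_filter(frames, k_min=0.2, k_max=1.0, thresh=20.0):
    # Recursive temporal filter: est <- est + k * (frame - est), where the
    # blending gain k is switched per pixel based on the frame difference.
    est = frames[0].astype(float)
    out = [est.copy()]
    for frame in frames[1:]:
        diff = np.abs(frame - est)
        # Large differences (likely motion) -> gain near 1 (trust new frame);
        # small differences (likely noise)  -> low gain (average it out).
        k = np.where(diff > thresh, k_max, k_min)
        est = est + k * (frame - est)
        out.append(est.copy())
    return out
```

Because no flow field or warping is computed, each frame costs only a subtraction, a comparison, and a blend per pixel, which is what makes real-time operation plausible.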
Adaptive image interpolation algorithm
Author(s):
A. Bob Mahmoodi
An adaptive image interpolation algorithm is presented. This method enables the interpolation kernel to be varied based on local image information. The criterion for this variation is a decision mechanism that discriminates text or high-contrast areas from graphic or soft-contrast areas. Based on this decision, the algorithm applies the appropriate interpolation kernel function to interpolate the data. The interpolation kernel for text areas utilizes a 2 X 2 convolution kernel using a fifth (5th) order interpolation polynomial. The graphic areas of the image file use a 4 X 4 kernel with a cubic spline function. The objective of the method is to reduce the aliasing or ringing effect associated with the interpolation of high-spatial-frequency (text) areas of the image file. In order to distinguish the two areas (text vs. graphic), a 4 X 4 pixel window is chosen, and the sample mean and deviation of this window are calculated. Further, the sample mean and deviation of the inner 2 X 2 block of the 4 X 4 window are also determined. The ratio of the sample deviation of the 2 X 2 block to the sample deviation of the 4 X 4 window is then compared to a preset threshold discriminator level.
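The text-versus-graphic decision described above can be sketched directly from the deviation ratio; the threshold value below is a hypothetical placeholder, since the abstract leaves the preset discriminator level unspecified.

```python
import numpy as np

def classify_block(window4):
    # `window4` is a 4x4 pixel window; its inner 2x2 block's deviation is
    # compared with the full window's.  A ratio near 1 means the contrast
    # is concentrated locally (an edge/text), selecting the sharp 2x2
    # polynomial kernel; a low ratio means smooth content (graphic),
    # selecting the 4x4 cubic-spline kernel.
    inner = window4[1:3, 1:3]
    s4 = window4.std()
    s2 = inner.std()
    ratio = s2 / s4 if s4 > 0 else 0.0
    threshold = 0.8          # hypothetical preset discriminator level
    return 'text' if ratio > threshold else 'graphic'
```

A hard vertical edge keeps the full deviation even in the inner block (ratio near 1), while a smooth gradient's inner block varies far less than the whole window.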
Retina vascular network recognition
Author(s):
Guido Tascini;
Giorgio Passerini;
Paolo Puliti;
Primo Zingaretti
Show Abstract
The analysis of morphological and structural modifications of the retina vascular network is an interesting investigation method in the study of diabetes and hypertension. Normally this analysis is carried out by qualitative evaluations, according to standardized criteria, though medical research attaches great importance to quantitative analysis of vessel color, shape and dimensions. The paper describes a system which automatically segments and recognizes the ocular fundus circulation and microcirculation network, and extracts a set of features related to morphometric aspects of vessels. For this class of images the classical segmentation methods seem weak. We propose a computer vision system in which the segmentation and recognition phases are strictly connected. The system is hierarchically organized in four modules. Firstly, the Image Enhancement Module (IEM) applies a set of custom image enhancements to remove blur and to prepare data for the subsequent segmentation and recognition processes. Secondly, the Papilla Border Analysis Module (PBAM) automatically recognizes the number, position and local diameter of blood vessels departing from the optical papilla. Then the Vessel Tracking Module (VTM) analyses vessels by comparing the results of body and edge tracking, and detects branches and crossings. Finally, the Feature Extraction Module evaluates PBAM and VTM output data and extracts some numerical indexes. The algorithms used appear robust and have been successfully tested on various ocular fundus images.
Advances in automated lung segmentation in CT studies
Author(s):
Francoise J. Preteux;
Philippe Grenier M.D.;
Pierre Vanier
Show Abstract
In this paper, we address the problem of lung segmentation in high resolution CT studies. Specifically, we present three segmentation methods based on (1) self-adaptive erosion, (2) connection cost and the topographical skeleton by influence zones, and (3) active contour modeling. These methods are shown to provide consistent theoretical and practical tools for solving the difficult problem at hand.
Highlighting the differences between the positive and pseudo-positive cyto-architectonics
Author(s):
Sing T. Bow;
Jian Zhang;
Xia-fang Wang
Show Abstract
A great deal of research has been carried out by biologists, pathologists, and biomedical physicists on the cyto-architectonics of various types of cells with symptoms of cancerous disease. The color and texture of the cell and their interrelationships are an important basis for cell analysis, and have proved successful in differentiating abnormal cells from normal ones. However, to classify the abnormal cells into two categories, namely cancerous cells and non-cancerous cells exhibiting a pseudo-positive phenomenon, more information is needed than can be obtained from observation of microscopic images of the smear alone, and therefore a further step, such as biopsy, has to be taken. In this paper, a color image processing technique is introduced to highlight the differences between the positive and the pseudo-positive cyto-architectonics so as to help increase the visualization and diagnostic capability of a human expert. It is hoped that this will prove an effective tool for screening pseudo-positive non-cancerous cells from cancerous ones even when they look alike under the microscope.
Computer aided morphometry of the neonatal fetal alcohol syndrome face
Author(s):
Lawrence Chik;
Robert J. Sokol;
Susan S. Martier
Show Abstract
Facial dysmorphology related to Fetal Alcohol Syndrome (FAS) has been studied from neonatal snapshots with computer-aided imaging tools by looking at facial landmarks and silhouettes. Statistical methods were used to characterize FAS-related midfacial hypoplasia by using standardized landmark coordinates of frontal and profile snapshots. Additional analyses were performed by tracing a segment of the facial silhouettes from the profile snapshots. In spite of inherent distortions due to the coordinate standardization procedure, controlling for race, three significant facial landmark coordinates accounted for 30.6% of the explained variance of FAS. Residualized for race, eight points along the silhouettes were shown to be significant in explaining 45.8% of the outcome variance. Combining the landmark coordinates and silhouette points, 57% of the outcome variance was explained. Finally, including birthweight with the landmark coordinates and silhouette points, 63% of the outcome variance was explained, with a jackknifed sensitivity of 95% (19/20) and a specificity of 92.9% (52/56).
Adaptive human-computer interface easing image processing in clinical environment
Author(s):
Virginie Chameroy;
Florent Aubry;
Alain Giron;
Andrew Todd-Pokropek;
Robert Di Paola
Show Abstract
The clinical use of image analysis requires on the one hand a knowledge of pathology, physiology and other medical fields, and on the other hand expertise in image processing techniques. Because of their increasing complexity, these techniques are not often employed by clinical users, who want to focus on interpreting results rather than on the choice of a specific package, the way it operates and the underlying processes. Thus it is of prime importance to clinical users to have at their disposal a system which can accept their clinical knowledge as input, then convert it into an appropriate mathematical language, process it, and finally return the results to them in a clinically intelligible fashion. The concept of an 'intelligent system' interface between medical users and applications has been developed, which we have termed an Interactive Quantitation Support System (IQSS). Such a system translates clinical knowledge into symbolic descriptions and transmits them to the software application. IQSS manages the dialogue between user and application, as well as error handling. A prototype of the user interface has been developed based on a client/server architecture and a data-oriented environment. Early in its development, this IQSS has been tested in prototype form to perform a three-dimensional registration of multimodality images.
Unsupervised classification of multiecho magnetic resonance images of the pediatric brain with implicit spatial and statistical hypotheses validation
Author(s):
James B. Perkins;
Ian R. Greenshields;
Francis DiMario M.D.;
Gale Ramsby M.D.
Show Abstract
We describe an image segmentation method applied to multi-echo MR images which is unsupervised in that the analyst need not specify prototypical tissue signatures to guide the segmentation. It is well known that different tissue types may be distinguished by their signatures in NMR parameter space (spin density and relaxation parameters T1 and T2). Also, normal tissue may be differentiated from abnormal by means of these signatures. Even though pixel intensity in real images is proportional to weighted mixtures of these parameters, several researchers feel there is potential for better segmentation results by processing dual-echo images. These images are inherently registered and require no additional time to acquire the image for the second echo. Our segmentation procedure is a multi-step process in which tissue class mean vectors and covariance matrices are first determined by a clustering technique. The goal here is to achieve an intermediate segmentation which may be subject to quantitative validation.
Optimal metric for factor analysis of medical image sequences
Author(s):
Habib Benali;
Frederique Frouin;
Irene Buvat;
Florent Aubry;
F. Coillet;
Jean Pierre Bazin;
Robert Di Paola
Show Abstract
A new statistical approach for Factor Analysis of Medical Image Sequences (FAMIS) is proposed. It leads to the optimal metric to be used in the orthogonal and oblique analysis steps of FAMIS. It is shown that this metric depends on the statistical model related to the image acquisition process and we derived its expression for nuclear medicine and magnetic resonance imaging. A scintigraphic dynamic study illustrates the method. We discuss the normalization induced by this optimal metric in comparison with other normalizations.
Magnetic resonance voxel labeling based on Bayesian Decision Theory
Author(s):
Rudi Verbeeck;
Dirk Vandermeulen;
Paul Suetens;
Guy Marchal
Show Abstract
In this paper, Bayesian decision theory is applied to the labelling of voxels in Magnetic Resonance (MR) images of the brain. The Bayes optimal decision rule defines a cost function that consists of a loss function weighted by the a posteriori probability of the labelling. Two options for the loss function are presented in this paper. A zero-one loss function gives rise to the maximum a posteriori (MAP) estimate, which requires a simulated annealing optimization process. The probability term of the cost function is the product of the a priori probability of the labelling (or an a priori model of the underlying scene) and the conditional probability of the data, given the labelling (or the model for the imaging modality). By modelling the label image as a Markov random field, the model for the underlying scene can be described by a Gibbs distribution. In the application discussed here, the Gibbs potentials reflect the compatibility of anatomical structures. The imaging model represents the expected voxel intensities and possible noise or image distortions.
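A minimal stand-in for MAP labelling under a Potts-type Gibbs prior, using iterated conditional modes (ICM) as a simpler deterministic optimizer in place of the simulated annealing the paper employs; the Gaussian likelihood and all parameter values are illustrative:

```python
import numpy as np

def icm_label(image, means, sigma=1.0, beta=1.0, iters=5):
    """Greedy minimization of a data term (Gaussian likelihood) plus a
    Potts smoothness term (penalty per disagreeing 4-neighbour)."""
    image = np.asarray(image, float)
    labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
    for _ in range(iters):
        energies = []
        for k, m in enumerate(means):
            data = (image - m) ** 2 / (2 * sigma ** 2)
            # Potts prior: count 4-neighbours whose current label differs from k.
            diff = np.zeros_like(image)
            diff[1:, :] += labels[:-1, :] != k
            diff[:-1, :] += labels[1:, :] != k
            diff[:, 1:] += labels[:, :-1] != k
            diff[:, :-1] += labels[:, 1:] != k
            energies.append(data + beta * diff)
        labels = np.argmin(energies, axis=0)
    return labels
```

ICM converges to a local minimum of the same cost function; simulated annealing trades speed for a better chance of reaching the global MAP estimate.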
Characterization of the mammographic appearance of microcalcifications: applications in computer-aided diagnosis
Author(s):
Robert M. Nishikawa;
Yulei Jiang;
Maryellen Lissak Giger;
Carl J. Vyborny;
Robert A. Schmidt;
Ulrich Bick
Show Abstract
This paper describes the application of the area and contrast of mammographic microcalcifications to computer-aided diagnostic schemes. Image contrast (measured in differences in optical density on the film) is converted to radiation contrast (in terms of log x-ray exposure) by correcting for the characteristic curve of the screen-film system and by correcting for the loss in contrast caused by the blurring by the screen and the film digitizer. From the radiation contrast, we estimate an effective thickness of a microcalcification that would have produced the corresponding radiation contrast. By examining the relationship between effective thickness and size of computer-detected signals (potential microcalcifications), the false-positive rate of our automated detection scheme can be reduced from 2.5 to 1.5 false clusters per image, while maintaining a sensitivity of 85%. We have also conducted two preliminary studies for which the extraction technique may be beneficial. The first was for classifying clusters as either benign or malignant. Four features were identified: the standard deviation in area, thickness, and effective volume of microcalcifications within a given cluster, and the mean effective volume of microcalcifications within the cluster. The second study was for developing a quantitative measure of the subtlety of appearance of microcalcifications in mammograms. We have found that the product of the area and image contrast summed over all microcalcifications within a cluster correlates well with human subjective impression of subtlety.
Classification of medical images using context dependent methods
Author(s):
Ted R. Jackson;
James R. Brookeman;
Michael B. Merickel
Show Abstract
We are developing a method to automatically classify multispectral medical images using context dependent methods. The model is built with the knowledge that clusters of tissue features will overlap in feature space. The goal is to reduce the classification error that results from this cluster overlap. Initialization of the probability of a pixel belonging to a tissue class can take advantage of a priori class distributions if such knowledge exists. Otherwise, the procedure can resort to modeling each class with a Gaussian distribution. These probabilities can then be iteratively updated using either a relaxation labeling algorithm or a Markov random fields algorithm. Once the model converges, iterations cease and each pixel is classified using the maximum probability over all classes.
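The Gaussian initialization followed by iterative probability updating can be sketched with a simple probabilistic relaxation rule; averaging the 4-neighbour probabilities is one simple compatibility rule among many, not the authors' specific coefficients:

```python
import numpy as np

def relax(image, means, sigma=1.0, iters=10):
    """Initialise per-pixel class probabilities from Gaussian
    likelihoods, then repeatedly reinforce each class by its average
    probability over the 4-neighbourhood and renormalise."""
    image = np.asarray(image, float)
    lik = np.stack([np.exp(-(image - m) ** 2 / (2 * sigma ** 2)) for m in means])
    p = lik / lik.sum(axis=0)
    for _ in range(iters):
        # Neighbourhood support (edge pixels reuse their own value via padding).
        pad = np.pad(p, ((0, 0), (1, 1), (1, 1)), mode='edge')
        support = (pad[:, :-2, 1:-1] + pad[:, 2:, 1:-1] +
                   pad[:, 1:-1, :-2] + pad[:, 1:-1, 2:]) / 4
        p = p * support
        p = p / p.sum(axis=0)
    # Final classification: maximum probability over all classes.
    return p.argmax(axis=0)
```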
Multispectral analysis and visualization of multiple sclerosis lesions in MR volumes of the brain
Author(s):
Ross Mitchell;
Stephen J. Karlik;
Donald H. Lee M.D.;
Aaron Fenster
Show Abstract
MRI is a valuable tool in the diagnosis of multiple sclerosis (MS). Standard MR protocols for imaging MS produce proton density (PD) and T2 weighted images of the same slice in the brain. While these image pairs provide valuable information about MS lesions, they are two dimensional (2-D) while lesions are three dimensional (3-D). Furthermore, the vast amount of data produced in an MR exam for MS makes routine analysis and comparison of the image pairs difficult. Therefore, we have developed a computerized system which employs multispectral analysis techniques to allow interactive 3-D analysis of MR data by radiologists and neurologists. We have used our system to classify and analyze four MR exams of a chronic-progressive MS patient taken over an 18 month period. Comparison of volume renderings of classified white matter, grey matter and MS lesions at each exam date provides information about the changes in individual lesions, and total lesion burden. Analysis of the intensity distributions of large MS lesions reveals that they have a wide range of PD/T2 weighted intensities, and some contain a higher PD/longer T2 'core' perhaps corresponding to edema.
Color image analysis for liver tissue images
Author(s):
Yung-Nien Sun;
Chung-Hsien Wu;
Xi-Zhang Lin;
Nan-Haw Chou
Show Abstract
An automatic tissue characterization system is in great demand among pathologists. However, the existing methods are either too simple to classify a complicated liver tissue image or depend on heavy human intervention and are very time consuming. In this paper, we have developed a highly parallel and effective system based on color image segmentation to analyze liver tissue images. To simplify the tissue classification problem, the system first utilizes the achromatic information (the intensity) to coarsely segment the tissue image, then makes use of the chromatic information to classify the segmented regions into four different tissue classes. Thus, the proposed method includes an unsupervised probabilistic relaxation segmentation process and a supervised Bayes classification process. Because the invariant grey level and color properties of the liver tissue image are fully utilized, the difficult classification problem can be accomplished at a reasonable computational cost. The proposed method also shows reliable liver tissue classification results on different test sample sets.
Automated analysis for microcalcifications in high-resolution digital mammograms
Author(s):
Laura N. Kegelmeyer;
John A. Moreno Hernandez;
Clinton M. Logan
Show Abstract
Digital mammography offers the promise of significant advances in early detection of breast cancer. Our overall goal is to design a digital system which improves upon every aspect of current mammography technology: the x-ray source, detector, visual presentation of the mammogram and computer-aided diagnosis capabilities. This paper will discuss one part of our whole-system approach--the development of a computer algorithm using gray-scale morphology to automatically analyze and flag microcalcifications in digital mammograms in hopes of reducing the current percentage of false-negative diagnoses, which is estimated at 20%. The mammograms used for developing this 'mammographer's assistant' are film mammograms which we have digitized at either 70 micrometers or 35 micrometers per pixel resolution with 4096 gray levels (12 bits) per pixel. For each potential microcalcification detected in these images, we compute a number of features in order to distinguish between the different kinds of objects detected.
Boundary estimation method for ultrasonic 3D imaging
Author(s):
Gosuke Ohashi;
Akihisa Ohya;
Michiya Natori;
Masato Nakajima
Show Abstract
The authors developed a new method for automatically and efficiently estimating the boundaries of soft tissue and amniotic fluid, in order to obtain a fine three dimensional image of the fetus from information given by ultrasonic echo images. The aim of this boundary estimation is to provide clear three dimensional images by shading the surface of the fetus and uterine wall using the Lambert shading method. Normally a random granular pattern called 'speckle' appears on an ultrasonic echo image. Therefore, it is difficult to estimate the soft tissue boundary satisfactorily via a simple method such as threshold processing. Accordingly, the authors devised a method for classifying voxels into three categories using a neural network: soft tissue, amniotic fluid and boundary. The judgment was based on the shape of the grey level histogram computed over the peripheral region of each voxel. Application to clinical data has shown a fine estimation of the boundary between the fetus or the uterine wall and the amniotic fluid, enabling the details of the three dimensional structure to be observed.
MRI feature extraction using a linear transformation
Author(s):
Hamid Soltanian-Zadeh;
Joe P. Windham;
Donald J. Peck
Show Abstract
We present development and application of a feature extraction method for magnetic resonance imaging (MRI), without explicit calculation of tissue parameters. We generate a three-dimensional (3-D) feature space representation of the data, in which normal tissues are clustered around pre-specified target positions and abnormalities are clustered somewhere else. This is accomplished by a linear minimum mean square error transformation of categorical data to target positions. From the 3-D histogram (cluster plot) of the transformed data, we identify clusters and define regions of interest (ROIs) for normal and abnormal tissues. These ROIs are used to estimate signature (feature) vectors for each tissue type which in turn are used to segment the MRI scene. The proposed feature space is compared to those generated by tissue-parameter-weighted images, principal component images, and angle images, demonstrating its superiority for feature extraction. The method and its performance are illustrated using a computer simulation and MRI images of an egg phantom and a human brain.
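The core operation, a linear minimum mean square error mapping of multispectral pixel vectors to pre-specified target positions, can be sketched with ordinary least squares; the toy tissue data and target positions below are invented purely for illustration:

```python
import numpy as np

def mmse_transform(samples, targets):
    """Least-squares estimate of the linear map W taking training
    pixel vectors (rows of `samples`) as close as possible, in mean
    square error, to their target positions (rows of `targets`)."""
    W, *_ = np.linalg.lstsq(samples, targets, rcond=None)
    return W

# Toy example: two 'tissues' in a 3-channel feature space.
rng = np.random.default_rng(0)
a = rng.normal([10, 2, 5], 0.5, (50, 3))      # tissue A signatures
b = rng.normal([3, 8, 1], 0.5, (50, 3))       # tissue B signatures
X = np.vstack([a, b])
T = np.vstack([np.tile([1.0, 0.0], (50, 1)),  # target position for A
               np.tile([0.0, 1.0], (50, 1))]) # target position for B
W = mmse_transform(X, T)
```

Applying `X @ W` clusters each tissue near its target, which is the property the method's cluster plot relies on.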
Morphological interpolation between contour lines
Author(s):
Eric N. Mortensen;
William A. Barrett
Show Abstract
A morphological algorithm for automated interpolation between contour lines is presented. Algorithms based on morphological transforms allow interpolation to be performed directly in image space without the need to explicitly extract or represent contour data using intermediate data structures. Image space operations also allow parallel generation of interpolated contour values while making essential use of neighboring contour morphology. Recursive morphological transforms allow all contour intervals to be processed in constant time regardless of width. For m intercontour intervals, the number of contours calculated in parallel grows as O(2^m) with each recursion. The algorithm is applied successfully to a variety of synthetic nested (nonoverlapping) contours as well as overlapping and/or displaced contours. It is also applied to naturally occurring contours extracted from medical scans. The special case of branching is handled automatically without algorithm modification.
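For intuition, interpolation between two nested binary contours can be sketched with signed distance maps; the paper achieves an analogous effect with recursive morphological transforms rather than explicit distance computation, and the brute-force distances here are purely for illustration on small images:

```python
import numpy as np

def dist_to(mask):
    """Brute-force Euclidean distance from every pixel to the nearest
    True pixel of `mask` (fine for small demo images only)."""
    yy, xx = np.nonzero(mask)
    pts = np.stack([yy, xx], axis=1)
    gy, gx = np.indices(mask.shape)
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :]) ** 2).sum(-1)).min(1)
    return d.reshape(mask.shape)

def interp_shape(a, b, t=0.5):
    """Shape-based interpolation between binary regions a and b:
    average the signed distance maps and threshold at zero."""
    sa = dist_to(~a) - dist_to(a)   # positive inside a, negative outside
    sb = dist_to(~b) - dist_to(b)
    return (1 - t) * sa + t * sb > 0
```

For two nested regions the interpolated shape lies between them, which mirrors the intermediate contours the morphological method generates.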
Automated organ recognition using 3D mathematical morphology
Author(s):
John P. Strupp;
Robert M. Haralick
Show Abstract
A method for fully automated organ recognition in 3D medical image volumes is investigated. A mathematical model for organ recognition is presented which exploits the fact that although the precise anatomy differs among patients, the basic shape of organs is consistent, as are the spatial relationships between organs. 3D mathematical morphology procedures based on this model are developed using a description to algorithm translation method. The procedures first isolate an organ search volume based on the location of other organs and then extract the goal organ using shape criteria encoded in structuring elements.
Analysis of tissue information on medical image using fractal dimensions
Author(s):
Takeshi Matozaki;
Satoshi Koyanagi;
T. Ikeguchi
Show Abstract
Three-dimensional reconstruction of tissue from X-ray CT and MR images is useful for diagnosis and surgical operations. However, it is often difficult to recognize and extract accurate tissue information from images using simple binarization or edge detection. Organ shapes are complex, but they are said to have fractal properties, which motivates introducing fractal theory for the extraction or discrimination of tissue. In this paper, we analyze tissue image data using not only the average and variance but also three fractal dimensions, and classify them into three categories (brain, eye and neck) by two dimensional discriminant analysis. As a result, the discrimination rate is over 80%. It is useful to introduce fractal dimensions into multivariate analysis of tissue images.
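A standard box-counting estimator of fractal dimension, the kind of measurement such analyses rest on (this generic version is not the authors' specific implementation):

```python
import numpy as np

def box_count_dim(mask):
    """Box-counting fractal dimension of a binary image: count the
    occupied boxes at dyadic scales and fit the slope of log(count)
    versus log(1/box size).  Assumes a square, power-of-two side."""
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        view = mask.reshape(n // s, s, n // s, s)
        counts.append(view.any(axis=(1, 3)).sum())  # occupied boxes at scale s
        sizes.append(s)
        s //= 2
    slope, _ = np.polyfit(np.log(1 / np.array(sizes)), np.log(counts), 1)
    return slope
```

Sanity checks: a filled square yields dimension 2 and a straight line yields 1; natural tissue boundaries fall in between.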
Study of fractal dimension in chest images using normal and interstitial lung disease cases
Author(s):
Douglas M. Tucker;
Jose L. Correa;
Miguel Souto;
Katerina S. Malagari
Show Abstract
A quantitative computerized method which provides accurate discrimination between chest radiographs with positive findings of interstitial disease patterns and normal chest radiographs may increase the efficacy of radiologic screening of the chest and the utility of digital radiographic systems. This report is a comparison of fractal dimension measured in normal chest radiographs and in radiographs with abnormal lungs having reticular, nodular, reticulonodular and linear patterns of interstitial disease. Six regions of interest (ROI's) from each of 33 normal chest radiographs and 33 radiographs with positive findings of interstitial disease were studied. Results indicate that there is a statistically significant difference between the distribution of the fractal dimension in normal radiographs and radiographs where disease is present.
Three-dimensional modeling of lung morphogenesis using fractals
Author(s):
Theophano Mitsa;
Jiang Qian;
Jeffrey R. Galvin
Show Abstract
The bronchial tree is one of the best known fractal structures in the human body. Fractal objects like the bronchial tree have complex structures with self-similar properties over different scales. The fractal structure of the bronchial tree is imposed by optimization of resource utilization requirements in the lung, such as efficient distribution of blood and air. Thus, the morphology of the lung is directly related to its function, and changes in its structure can be linked to dysfunction. Since the bronchial tree is a fractal structure, its fractal dimension can be used as a tool for the detection of structure changes and quantification of lung disease. In this paper, we present an algorithm for the construction of a 3-D bronchial tree model based on fractal growth rules and actual morphometric data. The 3-D fractal dimension of the model is computed and subsequently compared with the 3-D fractal dimension of a bronchial tree segmented from normal actual lung data.
Using local extremum curvatures to extract anatomical markers from medical images
Author(s):
Lionel Le Briquer;
Frederic Lachmann;
Christian Barillot
Show Abstract
Among the studies concerning the segmentation and identification of anatomical structures from medical images, one of the major problems is the fusion of heterogeneous data for the recognition of these structures. In this domain, the fusion of inter-patient data for the constitution of anatomical models, for instance, is particularly critical, especially with regard to the identification of complex cerebral structures like the cortical gyri. The goal of this work is to find anatomical markers which can be useful to characterize specific regions in brain images by using either CT or MR images. We have focused this study on the definition of a geometrical operator based on the detection of local extremum curvatures. The main issues addressed by this work concern the fusion of multimodal data from one patient (e.g. between CT and MRI) and moreover the fusion of inter-patient data as a first step toward the modelling of brain morphological deformations. Examples are shown on 2D MR and CT brain images.
Hexagonal wavelet processing of digital mammography
Author(s):
Andrew F. Laine;
Sergio Schuler;
Walter Huda;
Janice C. Honeyman-Buck;
Barbara G. Steinbach
Show Abstract
This paper introduces a novel approach for accomplishing mammographic feature analysis through overcomplete multiresolution representations. We show that efficient representations may be identified from digital mammograms and used to enhance features of importance to mammography within a continuum of scale-space. We present a method of contrast enhancement based on an overcomplete, non-separable multiscale representation: the hexagonal wavelet transform. Mammograms are reconstructed from transform coefficients modified at one or more levels by local and global non-linear operators. Multiscale edges identified within distinct levels of transform space provide local support for enhancement. We demonstrate that features extracted from multiresolution representations can provide an adaptive mechanism for accomplishing local contrast enhancement. We suggest that multiscale detection and local enhancement of singularities may be effectively employed for the visualization of breast pathology without excessive noise amplification.
Discrete image stacks verifying the diffusion equation for multiresolution image processing
Author(s):
Christophe Dary;
Yves J. Bizais;
Jeanpierre V. Guedon;
Laurent Bedat
Show Abstract
In practical situations, images are discrete and only discrete filtering can be performed, so the continuous scale-space theory must be adapted accordingly. In this paper, we derive the filter family which must replace the Gaussian kernel in this case. The result can be understood because the Fourier transform of the second derivative corresponds to multiplication by the square of the frequency, such that our filter is a discrete version of the Gaussian. In other words, our approach consistently generalizes the continuous theory to the discrete case. When the discrete equivalent of the Laplacian is defined on the basis of n-order B-spline interpolating functions, the image stack exactly verifies the continuous diffusion equation at the spatially sampled points. These results are generalized to any linear partial differential operator corresponding to another requirement on the image stack, simply by defining the discrete equivalent operator.
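For reference, the continuous theory being discretized is the standard linear scale-space model, in which the Gaussian-smoothed image stack satisfies the diffusion (heat) equation:

```latex
\frac{\partial I}{\partial t}(x, y, t) = \Delta I(x, y, t),
\qquad I(x, y, 0) = I_0(x, y),
```

whose solution is convolution of $I_0$ with a Gaussian of standard deviation $\sigma = \sqrt{2t}$; the paper's contribution is a discrete filter family for which sampled image stacks satisfy this equation exactly at the sample points.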
Medical imaging system for the recognition of brain scans
Author(s):
M. D. Seshadri;
Don W. Miller
Show Abstract
A medical image processing system for the recognition of CT brain scan images has been developed. This system was successfully tested on its ability to correctly classify human brain scans as 'normal', 'hemorrhaged', and 'lacunar infarcted'. The imaging system is composed of a variation of the Laplacian of Gaussian (LoG) edge detector, a chain encoder, the Hough transform, and a backpropagation neural network. The edge detector output was fed into the chain coder, which formed meaningful segments or groupings of some important features present in the image. These features were further processed by the Hough transform to identify any analytical shapes in these features or clusters. All this information was processed so that with minimal user input the imaging system determined the size and the shape of some feature such as the third ventricle in a brain scan. The neural network was presented with a seven-element input vector, which resulted in a 3-bit output. This output was interpreted as the probability that the given brain scan was 'normal', 'hemorrhaged', or 'lacunar infarcted'. The backpropagation network correctly classified CT brain scans in approximately 95% of the test cases for normal images, and 85% of the cases for hemorrhaged and lacunar infarcted images.
Neural network diagnosis of avascular necrosis from magnetic resonance images
Author(s):
Armando Manduca;
Paul S. Christy;
Richard L. Ehman
Show Abstract
We have explored the use of artificial neural networks to diagnose avascular necrosis (AVN) of the femoral head from magnetic resonance images. We have developed multi-layer perceptron networks, trained with conjugate gradient optimization, which diagnose AVN from single sagittal images of the femoral head with 100% accuracy on the training data and 97% accuracy on test data. These networks use only the raw image as input (with minimal preprocessing to average the images down to 32 X 32 size and to scale the input data values) and learn to extract their own features for the diagnosis decision. Various experiments with these networks are described.
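The preprocessing described, averaging images down to 32 X 32 and scaling the input values, might look like the following; block-averaging and min-max scaling are our assumptions about the details:

```python
import numpy as np

def downsample(img, out=32):
    """Block-average a square image down to out x out and scale the
    result to [0, 1], a minimal preprocessing step before feeding raw
    pixels to a network."""
    n = img.shape[0]
    f = n // out                                   # block size
    small = img[:out * f, :out * f].reshape(out, f, out, f).mean(axis=(1, 3))
    lo, hi = small.min(), small.max()
    return (small - lo) / (hi - lo) if hi > lo else np.zeros_like(small)
```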
Neural network based segmentation system
Author(s):
Kelby K. Chan;
Alek S. Hayrapetian;
Christina C. Lau;
Robert B. Lufkin
Show Abstract
A neural network is used to segment double echo MR images. Images are acquired using an interleaved acquisition protocol that results in registered proton density and T2 weighted images. For each tissue class, a user selects approximately 15 - 20 points representative of the double echo signature of that tissue. This set of intensities and tissue classes are used as a pattern-target set for training a feed forward neural network using back propagation. The trained network is then used to classify all of the points in the dataset. Statistical testing of the network using pattern-target pairs distinct from those used in training showed roughly 90% correct classification for the selected tissues. The bulk of the error was due to ambiguities in classifying based solely on MR intensities. The resultant classified images can be further processed using special software that allows manual correction and interactive 2D or 3D connectivity analysis based on selection of seed points.
Neural network ultrasound image analysis
Author(s):
Alexander C. Schneider;
David G. Brown;
Mary S. Pastel
Show Abstract
Neural network based analysis of ultrasound image data was carried out on liver scans of normal subjects and those diagnosed with diffuse liver disease. In a previous study, ultrasound images from a group of normal volunteers, Gaucher's disease patients, and hepatitis patients were obtained by Garra et al., who used classical statistical methods to distinguish among these three classes. In the present work, neural network classifiers were employed with the same image features found useful in the previous study for this task. Both standard backpropagation neural networks and a recently developed biologically-inspired network called Dystal were used. Classification performance as measured by the area under a receiver operating characteristic curve was generally excellent for the backpropagation networks and was roughly comparable to that of the classical statistical discriminators tested on the same data set and documented in the earlier study. Performance of the Dystal network was significantly inferior; however, this may be due to the choice of network parameters. Potential methods for enhancing network performance were identified.
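The performance figure used here, area under the ROC curve, can be computed directly from classifier output scores via the Mann-Whitney statistic (a standard identity, not specific to this study):

```python
def auc(neg_scores, pos_scores):
    """Area under the ROC curve as the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative
    case; ties count one half."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))
```

A value of 1.0 means perfect separation of the two classes; 0.5 means the scores carry no discriminating information.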
Rayleigh task performance in tomographic reconstructions: comparison of human and machine performance
Author(s):
Kyle J. Myers;
Robert F. Wagner;
Kenneth M. Hanson
Show Abstract
We have previously described how imaging systems and image reconstruction algorithms can be evaluated based on the ability of machine and human observers to perform a binary-discrimination task using the resulting images. Machine observers used in these investigations have been based on approximations to the ideal observer of Bayesian statistical decision theory. The present work is an evaluation of tomographic images reconstructed from a small number of views using the Cambridge Maximum Entropy software, MEMSYS 3. We compare the performance of machine and human viewers for the Rayleigh resolution task. Our results indicate that for both humans and machines a broad latitude exists in the choice of the parameter (alpha) that determines the smoothness of the reconstructions. We find human efficiency relative to the best machine observer to be approximately constant across the range of (alpha) values studied. The close correspondence between human and machine performance that we have now obtained over a variety of tasks indicates that our evaluation of imaging systems based on machine observers has relevance when the images are intended for human use.
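Machine observers of this kind reduce, in the linear-Gaussian case, to the prewhitening matched filter (Hotelling) template; this is a generic sketch of that observer, not the MEMSYS-specific one used in the study:

```python
import numpy as np

def hotelling_template(s1, s2, cov):
    """Optimal linear discriminant between two known signals s1 and s2
    in Gaussian noise with covariance `cov`: w = cov^{-1} (s1 - s2)."""
    return np.linalg.solve(cov, s1 - s2)

def detectability(s1, s2, cov):
    """Observer SNR (d'): the signal difference projected through the
    inverse covariance."""
    d = s1 - s2
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))
```

Task performance for both human and machine observers is then summarized by d' or an equivalent ROC-area figure.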
Techniques for multiple-signal multiple-reader evaluations
Author(s):
James W. Sayre;
James Lee;
Brent K. Stewart;
Minzhi Liu;
Samuel J. Dwyer III;
Michael F. McNitt-Gray;
H. K. Huang;
Glendon G. Cox;
Larry T. Cook
Show Abstract
Although receiver operating characteristic (ROC) analysis has been widely used for the evaluation of medical imaging systems, traditional ROC analysis can strictly be applied only to binary decision systems. As yet, no fully generalized ROC analysis has been developed to deal with the multiple alternative decision systems that are commonly encountered in the clinical setting. In this paper, we have developed a general method of analysis based on a Bayes decision framework applied to a multiple alternative decision model. As a simplified implementation of the general method, a K-nearest neighbor pattern classification strategy can be used. The application of this alternative operating characteristic analysis for the evaluation of two display formats for pediatric chest radiographs is demonstrated.
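The K-nearest neighbor pattern classification strategy mentioned above can be sketched in a few lines. This is a minimal illustration of the general technique, not the authors' implementation; the toy feature vectors and class labels are invented:

```python
from collections import Counter

def knn_classify(x, training, k=3):
    """Assign feature vector x to the majority class among its k
    nearest training samples (Euclidean distance). `training` is a
    list of (feature_vector, label) pairs."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
    nearest = sorted(training, key=lambda s: dist(x, s[0]))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

# toy three-alternative decision problem (labels are illustrative)
train = [((0.0, 0.0), "class A"), ((0.1, 0.2), "class A"),
         ((1.0, 1.0), "class B"), ((0.9, 1.1), "class B"),
         ((2.0, 0.0), "class C"), ((2.1, 0.1), "class C")]
print(knn_classify((0.95, 1.05), train))  # -> class B
```

Because the decision is a vote over more than two classes, varying k (or a per-class vote threshold) traces out an operating characteristic over multiple alternatives rather than a binary ROC.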
Separable and radial bases for medical image processing
Author(s):
Jeanpierre V. Guedon;
Yves J. Bizais
Show Abstract
The goal of this paper is to describe a consistent method for defining discrete image processing operators in the same way as discrete image formation operators. This is done via the generalized sampling theorem, which establishes the relationship between continuous and discrete functions according to the mean-square error in a spline or bandlimited subspace. A discrete operator is defined according to its continuous counterpart operating on continuous functions in the same subspace. Classical medical image acquisition bases are often radial, whereas classical image processing operators are deduced from separable bases. The paper shows the tension between these two imperatives for medical image processing, explains where the risks of information loss induced by implementing discrete linear operators lie, and presents two methods to partially or totally preserve the initially stored information.
Automatic development of physical-based models of organ systems from magnetic resonance imagery
Author(s):
Ian R. Greenshields;
Junchul Chun;
Gale Ramsby M.D.
Show Abstract
The essential goal of the work described herein is to provide a biophysical model within which the effects of altering a variety of geometrical or physical variables within the CSF system can be explored. Our ultimate goal is to divorce such models from the constraints of the artificial geometries (e.g., generalized cylinders) so typical of the usual biophysical model; to this end we have determined that each structure to be modelled be developed from an actual in-vivo example of the structure, extracted from CT or MR imagery. Onto such models we will then overlay a biophysical structure which will permit us to simulate a variety of different conditions and thereby determine (up to model accuracy) how the simulated condition might in fact affect the in vivo structure were it faced with a similar set of physical conditions.
Model-based prediction of phalanx radiograph boundaries
Author(s):
Tod S. Levitt;
Marcus W. Hedgcock M.D.;
D. N. Vosky;
Vera Michele Shadle
Show Abstract
In this study we use a strong model of the center of the phalanx in hand radiographs to predict continuation of the phalanx boundary. The center phalanx is robustly segmented using conventional approaches. Estimating tangents allows us to find the minimum width of the phalanx by parallel tangent lines. This in turn predicts the phalanx 'center'. The phalanx boundaries are modeled as cubic splines, and the coronal cross-section is modeled as a hemi-ellipse. Using initial localization from the phalanx center and the above parametric models, the model of radiographic imaging is used to predict the continuation of the boundary. Results are shown for least squared error of model spline fits to continuation of the phalanx boundary anchored by initial match on the center of the projected phalanx.
An expert system for the interpretation of radionuclide ventilation-perfusion lung scans
Author(s):
Frank V. Gabor;
Frederick L. Datz;
Paul E. Christian;
Grant T. Gullberg;
Kathryn A. Morton
Show Abstract
One of the most commonly performed imaging procedures in nuclear medicine is the lung scan for suspected pulmonary embolism. The purpose of this research was to develop an expert system that interprets lung scans and gives a probability of pulmonary embolism. Three standard ventilation and eight standard perfusion images are first outlined manually. Then the images are normalized. Because lung size varies from patient to patient, each image undergoes a two-dimensional stretch onto a standard-size mask. To determine the presence of regional defects in ventilation or perfusion, images are then compared on a pixel by pixel basis with a normal database. This database consists of 21 normal studies that represent the variation in activity between subjects. Any pixel that falls more than 2.2 standard deviations below the normal file is flagged as possibly abnormal. To reduce statistical fluctuations, a clustering criterion is applied such that each pixel must have at least two contiguous neighbors that are abnormal for it to be flagged abnormal.
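The thresholding-plus-clustering step described above can be sketched as follows. The 2.2-SD threshold and the two-neighbor criterion follow the abstract, while the 4-connected neighborhood and all names are our assumptions:

```python
def flag_defects(image, norm_mean, norm_sd, z_thresh=2.2, min_neighbors=2):
    """Flag pixels more than z_thresh standard deviations below the
    normal database, then apply the clustering criterion: keep a
    pixel only if at least min_neighbors of its 4-connected
    neighbors are also below threshold."""
    h, w = len(image), len(image[0])
    raw = [[image[r][c] < norm_mean[r][c] - z_thresh * norm_sd[r][c]
            for c in range(w)] for r in range(h)]
    flagged = [[False] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if raw[r][c]:
                n = sum(raw[r + dr][c + dc]
                        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                        if 0 <= r + dr < h and 0 <= c + dc < w)
                flagged[r][c] = n >= min_neighbors
    return flagged

mean = [[100.0] * 3 for _ in range(3)]
sd = [[10.0] * 3 for _ in range(3)]
img = [[50, 50, 100], [50, 50, 100], [100, 100, 50]]  # cluster + lone pixel
flags = flag_defects(img, mean, sd)
print(flags[0][0], flags[2][2])  # cluster kept, isolated pixel rejected
```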
System for computerized TV iris diagnostics
Author(s):
Vasyl V. Molebny;
Yuri Kolomatsky;
Serhi Chumak;
Mykola Vasko;
Tetyana Myrhorodska
Show Abstract
Iridodiagnostics, using the information encoded in the human iris, gives an integrated picture of human health, mirroring even preclinical states, genetic peculiarities, and predispositions. To decode this information, TV image processing is used for automatic measurement of several diagnostic features, such as pupil ellipticity, pupil flattening, indentedness of the autonomous ring, its minima and maxima parameters, etc. An instrument setup is described for acquiring and processing the TV image of an iris. In one of the variants, a color image is produced with a black-and-white TV camera via sequential R-, G-, and B-frames resulting from alternating color pulse illumination. For the sake of classification, the sequential methodology was modified, performing multiple tests over the same data and permitting adaptation in the process of learning.
Preliminary evaluation of an "intelligent" mammography workstation
Author(s):
Maryellen Lissak Giger;
Robert M. Nishikawa;
Robert A. Schmidt;
Carl J. Vyborny;
Ping Lu;
Yulei Jiang;
Zhimin Huo;
John Papaioannou;
Chris Yuzheng Wu;
S. Cox;
R. Kunst;
Ulrich Bick;
Katrina Rosculet
Show Abstract
We are developing computer-aided diagnosis (CAD) schemes for the detection of clustered microcalcifications and masses in digital mammograms. Here, CAD refers to a diagnosis made by a radiologist who uses the computerized analyses of radiographic images as a 'second opinion'. The radiologist would make the final diagnostic decision. The aim of CAD is to improve diagnostic accuracy by reducing the number of missed diagnoses. In this preliminary evaluation, 30 clinical cases from December 1991 having a focal mammographic finding were analyzed.
Ultrasound introscopic image quantitative characteristics for medical diagnosis
Author(s):
Mikhail K. Novoselets;
Sergey S. Sarkisov;
Alexander N. Gridko;
Anatoliy K. Tcheban
Show Abstract
The results on computer aided extraction of quantitative characteristics (QC) of ultrasound introscopic images for medical diagnosis are presented. Thyroid gland (TG) images of Chernobyl Accident sufferers are considered. It is shown that TG diseases can be associated with some values of selected QCs of the random echo distribution in the image. The possibility of using these QCs for TG disease recognition in accordance with the calculated values is analyzed. The role of speckle noise elimination in solving the TG diagnosis problem is also considered.
New high-performance 3D registration algorithms for 3D medical images
Author(s):
Andre M. F. Collignon;
Thierry Geraud;
Dirk Vandermeulen;
Paul Suetens;
Guy Marchal
Show Abstract
In this presentation a new search method is proposed to improve the speed and accuracy of surface based registration algorithms. Furthermore, a parallel point projection based, multicomponent distance evaluation method is presented. This method offers an elegant solution to the problem of partially overlapping data sets. An adaptive outlier treatment method is also presented. Combination of all these new techniques results in a faster surface based algorithm with better accuracy, but above all with better reliability than existing surface based 3D registration algorithms. In the context of the surface correspondence problem, surface based registration algorithms are compared to feature matching methods.
The magic crayon: an object definition and volume calculation testbed
Author(s):
David Volk Beard;
R. E. Faith;
David H. Eberly;
Stephen M. Pizer;
Charles Kurak;
Richard Eugene Johnston
Show Abstract
Rapid, accurate definition and volume calculation of anatomical objects is essential for effective CT and MR diagnosis. Absolute volumes often signal abnormalities, while relative volumes--such as a change in tumor size--can provide critical information on the effectiveness of radiation therapy. To this end, we have developed the 'magic crayon' (MC) anatomical object visualization, object definition, and volume calculation tool as a follow-on to UNC's Image Hierarchy Editor (IHE) and Image Hierarchy Visualizer (IHV). This paper presents the magic crayon system, detailing interaction, implementation, and preliminary observer studies. MC has several features: (1) it uses a number of 3D visualization methods to rapidly visualize an anatomical object; (2) it can serve as a test bed for various object definition algorithms; (3) it serves as a testbed allowing the comparative evaluation of various volume calculation methods, including pixel counting and Dr. David Eberly's divergence method.
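Pixel (voxel) counting, the simplest of the volume calculation methods mentioned, amounts to the following; the function name and argument layout are illustrative:

```python
def volume_by_voxel_counting(mask, voxel_dims):
    """Estimate object volume from a binary 3-D mask (a list of
    slices, each a list of rows) by counting labeled voxels and
    multiplying by the volume of a single voxel."""
    dx, dy, dz = voxel_dims
    n = sum(v for plane in mask for row in plane for v in row)
    return n * dx * dy * dz

# two 2x2 slices, five object voxels, 1 x 1 x 2 mm voxels
mask = [[[1, 0], [1, 1]], [[0, 1], [1, 0]]]
print(volume_by_voxel_counting(mask, (1.0, 1.0, 2.0)))  # -> 10.0
```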
Semi-automatic 3D description of vessel structure using fuzzy chip
Author(s):
Hiroshi Oyamada;
Satoshi Matsushita;
Masahiro Kusakabe;
Naoki Suzuki
Show Abstract
There are still several points to be clarified in the extraction of a given region from soft tissue, but a high-speed contour detection technique is most urgently needed for the reconstruction of real-time 3-dimensional (3-D) images. This paper presents our newly developed technique using fuzzy inference, taking as a case the extraction of the contour of the carotid artery from serial echograms. To demonstrate the correctness of the new method, we show the 3-D image reconstructed by piling up the contours extracted by it. The time required for the fuzzy inference operation on the fuzzy chip is less than 1 ms, excluding data transfer time.
Attempt to extract 3D image of liver automatically out of abdominal MRI
Author(s):
Satoshi Matsushita;
Hiroshi Oyamada;
Masahiro Kusakabe;
Naoki Suzuki
Show Abstract
This paper describes our new method for automatic interpretation of cross-sectional pictures of the human body. In our case, a 2-dimensional image of the liver is extracted from abdominal MR images, and by reconstructing 3-dimensional (3-D) images we obtained a volumetric estimate. Fuzzy inference was utilized in the extraction processing so that our processing system could incorporate the logic of image interpretation. The resulting volumetric measurement of our 3-dimensional estimation was an underestimate by some 11% compared to a human expert's estimate. Our processing method is valid for segmentation of soft tissue, which is hard to process with conventional image processing alone, and will lead to real-time 3-dimensional image processing in the future.
Segmentation of the brain from 3D magnetic resonance images of the head
Author(s):
William T. Katz;
Michael B. Merickel;
John C. Goble;
Neal F. Kassell M.D.;
James R. Brookeman
Show Abstract
With the advent of fast 3D magnetic resonance imaging (MRI) sequences, truly 3D volumes of data can be routinely acquired. While images produced by modalities such as positron emission tomography (PET) and single photon emission computed tomography (SPECT) depict functional information, newer MRI techniques like magnetization-prepared rapid gradient-echo (MP-RAGE) capture a great deal of anatomical detail [3,10]. Such 3D images are used by clinicians in two types of tasks, visualization and quantification. Visualization, in its most basic form, permits a user to see structures of interest within a volume of data. The "structures of interest" may not correspond to any physically visible phenomenon (e.g. the quantity of blood flow to neural tissue in a functional image) but the process of visualization transforms the data into images which may then be displayed using computer graphics. On the other hand, quantification, while often depicted graphically, attempts to reduce the mass of data into numbers useful as clinical indicators. A major obstacle to both of these tasks is the prerequisite image segmentation. In order for the brain to be visualized beneath the overlying head, a segmentation step must determine which volume elements, or voxels, in the 3D head image correspond to the brain. Similarly, quantification requires distinguishing background voxels from voxels corresponding to the VOI. A growing number of clinical studies depend on volume measurements after segmentation. Examples include: tracking the progression/remission of disease processes (e.g. the size of intracranial tumors); evaluating the neuroanatomical abnormalities associated with schizophrenia (e.g. ventricular volumes); and determining atrophy associated with Alzheimer-type dementia and temporal lobe epilepsy (as in the hippocampal formation).
Segmentation of magnetic resonance images into n(0, sigma) stationary regions
Author(s):
Ian R. Greenshields;
A. Zoe Leibowitz;
Francis DiMario M.D.;
Gale Ramsby M.D.
Show Abstract
Recall that a random field X(t1, t2) = X(t) over R2 is called homogeneous when its mean value ⟨X(t)⟩ = m (1) is a constant, while its correlation function ⟨X(t1), X(t2)⟩ = B(t1, t2) depends only on the vector τ = t1 − t2, whence B(t1, t2) = B(t1 − t2) (2). Absolute precision would require that a random field satisfying (1) and (2) be referred to as a wide-sense homogeneous random field, since it is not difficult to define strictly homogeneous random fields, which are conceptually related to the usual strictly stationary random process [1]. In the following, the term homogeneous field should be taken to mean wide-sense homogeneous field. Sometimes the imaging literature interchanges the terms stationary and homogeneous [2]. This is unfortunate but unavoidable in an imaging context.
Monte Carlo simulations of image stacking
Author(s):
Mesut Sahin;
David L. Wilson
Show Abstract
In image stacking, we combine multiple x-ray angiography images with incomplete arterial filling into a single output image with more completely filled arteries. Among other applications, image stacking is useful in neuroangiography embolization and in CO2 angiography. Using Monte Carlo simulations and tests on clinical image sequences, we compare three methods: (1) traditional extreme-intensity (EI) which consists of a max-dark or max-light operation on the sequence, (2) matched filtering (MF) with spatially varying parameters, and (3) a new algorithm, trimmed-extreme-intensity (TEI). In the simulations, we use Poisson noise and model the time-course of the arterial contrast signal with a gamma variate curve. The figure of merit for comparisons is the contrast-to-noise ratio (CNR). We find that our spatially-dependent MF method works well with images which have a well-defined direction of flow, as in the legs, but not with more complex flow patterns, as in neuroangiography. On clinical images, TEI gives good results and is more robust than MF.
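A minimal sketch of the EI and TEI operations, under the assumption that "trimmed" means discarding the most extreme per-pixel samples before taking the extreme (the paper's exact trimming rule is not given here):

```python
def extreme_intensity(frames, darkest=True):
    """Traditional EI stacking: per-pixel max-dark (min) or
    max-light (max) over the frame sequence."""
    pick = min if darkest else max
    h, w = len(frames[0]), len(frames[0][0])
    return [[pick(f[r][c] for f in frames) for c in range(w)]
            for r in range(h)]

def trimmed_extreme_intensity(frames, trim=1, darkest=True):
    """TEI sketch: drop the `trim` most extreme samples at each
    pixel, then take the extreme of the remainder, which
    suppresses isolated noise outliers."""
    h, w = len(frames[0]), len(frames[0][0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            vals = sorted(f[r][c] for f in frames)
            vals = vals[trim:] if darkest else vals[:len(vals) - trim]
            out[r][c] = vals[0] if darkest else vals[-1]
    return out

frames = [[[10]], [[50]], [[2]]]               # three 1x1 frames; 2 is a noise spike
print(extreme_intensity(frames)[0][0])         # EI picks the spike: 2
print(trimmed_extreme_intensity(frames)[0][0]) # TEI rejects it: 10
```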
An image processing algorithm for PPCR imaging
Author(s):
Arnold R. Cowen;
Anthony Giles;
Andrew G. Davies;
A. Workman
Show Abstract
During 1990 the UK Department of Health installed two Photostimulable Phosphor Computed Radiography (PPCR) systems in the General Infirmary at Leeds with a view to evaluating the clinical and physical performance of the technology prior to its introduction into the NHS. An issue that came to light from the outset of the project was the radiologists' reservations about the influence of the standard PPCR computerized image processing on image quality and diagnostic performance. An investigation was set up by FAXIL to develop an algorithm to produce single-format, high-quality PPCR images that would be easy to implement and allay the concerns of radiologists.
Neural network for reconstructing the cross sections of coronary arteries from biplane angiograms
Author(s):
Ruye Wang;
Duy Dong Nguyen;
Jack Sklansky;
Robert Bahn
Show Abstract
In this paper we describe a new approach for approximately reconstructing a 2D binary pattern from its two orthogonal 1D projections, under the constraint that the shape of the reconstructed binary pattern must represent a typical cross section of a partially occluded coronary artery. Our method consists of two parts: classification by a neural network, which selects the basic shape and orientation of the cross section from a low resolution version of the projections; and a heuristic search which reconstructs the cross sectional shape from the high resolution projection data.
Contrast enhancement of mammogram by image processing
Author(s):
Ying Xiong;
Chan F. Lam;
G. Donald Frey;
Marilyn R. Croley
Show Abstract
Digital mammography has great potential to affect the management of breast cancer. With an appropriate image processing technique, the sensitivity and specificity of tumor detection in mammography may be improved further. Small objects such as microcalcifications in mammograms have low contrast because of the high x-ray penetration of the objects, scattered radiation, and the limited capability of film to develop maximum contrast over an extended range of exposure values. Low contrast tumors may be missed by radiologists during mammogram reading. Standard image processing techniques, such as gray-level histogram equalization, spatial filtering and/or unsharp masking, do not perform well on mammographic images due to large variation in feature size and shape as well as the variation of normal background tissues. This paper presents a technique based on local contrast histogram modification. We associate a contrast value with each pixel in the image. The contrast of a pixel with respect to its background is enhanced by modifying the contrast histogram. The following section gives a brief review of contrast enhancement techniques for mammographic images.
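The per-pixel local-contrast idea can be sketched as follows. The contrast definition c = (p - b)/(p + b) and the simple gain (standing in for the paper's histogram modification) are our assumptions, not the authors' exact formulation:

```python
def enhance_local_contrast(image, radius=1, gain=2.0):
    """Sketch of pixel-wise contrast enhancement: the contrast of
    pixel p against its local background mean b is c = (p-b)/(p+b);
    the contrast is amplified and the pixel value recomputed from
    the enhanced contrast with the same background."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # local background: mean over a (2*radius+1)^2 window
            win = [image[rr][cc]
                   for rr in range(max(0, r - radius), min(h, r + radius + 1))
                   for cc in range(max(0, c - radius), min(w, c + radius + 1))]
            b = sum(win) / len(win)
            p = image[r][c]
            con = (p - b) / (p + b) if p + b else 0.0
            con = max(-0.99, min(0.99, gain * con))  # amplify, keep invertible
            out[r][c] = b * (1 + con) / (1 - con)    # invert c' = (p'-b)/(p'+b)
    return out

flat = [[100.0] * 3 for _ in range(3)]
spot = [[100.0] * 3 for _ in range(3)]
spot[1][1] = 120.0
print(round(enhance_local_contrast(flat)[1][1]))   # flat region unchanged: 100
print(enhance_local_contrast(spot)[1][1] > 120.0)  # bright spot amplified: True
```

Because the contrast is defined relative to the local background, small low-contrast details are boosted without regard to the absolute gray level of the surrounding tissue.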
Computer-assisted diagnosis of lung nodule detection using artificial convolution neural network
Author(s):
Shih-Chung Benedict Lo;
Jyh-Shyan Lin;
Matthew T. Freedman M.D.;
Seong Ki Mun
Show Abstract
Several fuzzy assignment methods for the output association with a convolution neural network are proposed for general medical image pattern recognition. A non-conventional method of using rotation and shift invariance is also proposed to enhance the neural net performance. These methods, in conjunction with the convolution neural network technique, are generally applicable to the recognition of medical disease patterns in gray-scale imaging. The structure of the artificial neural network is a simplified network structure of neocognitron. Two-dimensional local connection as a group is the fundamental architecture for the signal propagation in the convolution (vision type) neural network. Weighting coefficients of convolution kernels are formed by the neural network through backpropagated training for this artificial neural net. In addition, radiologists' reading procedure was modeled in order to instruct the artificial neural network to recognize the pre-defined image patterns and those of interest to experts. We have tested this method for lung nodule detection. The performance studies have shown the potential use of this technique in a clinical environment. Our computer program uses a sphere profile double-matching technique for the initial nodule search. We set searching parameters at a highly sensitive level to identify all potential disease areas. The artificial convolution neural network acts as a final detection classifier to determine if a disease pattern is shown in the suspected image area. The total processing time for the automatic detection of lung nodules using both pre-scan and convolution neural network evaluation is about 10 seconds on a DEC Alpha workstation.
Interactive alignment and subtraction of two tomographic 3D imaging studies
Author(s):
Michael J. Flynn;
Jeanne Li;
Dianna D. Cody
Show Abstract
Three-dimensional tomographic data sets are routinely produced in CT and MRI studies. Particularly good quality sagittal and coronal views can be obtained when the z-slice thickness is similar to the x and y pixel size within the original transverse views. When image data has been acquired on the same subject at two separate occasions, it may be useful or necessary to rotate and translate the data from the second study so that it is spatially aligned with the first study. We have developed interactive graphic software to interpolate image files in three orthogonal planes which can be arbitrarily oriented and to align the data from two studies using subtraction views as an indicator of alignment and differential value. The design elements for this software are described in this paper. Two thin slice x-ray CT studies from the same subject are used to illustrate the software.
Strategic significance of object-oriented design
Author(s):
James M. Coggins
Show Abstract
Object-Oriented Programming is enabled by an advance in compiler technology and programming language design supporting encapsulation and inheritance. This technical adjustment has had a surprisingly broad impact on strategies for design and development of software. The methods for employing Object-Oriented Programming in software development are called Object-Oriented Design. This paper explains what Object-Oriented Programming is, why it has attracted so much interest, and then critically examines its potential impact.
Object-oriented image data model for a knowledge-based image processing system
Author(s):
Yves J. Bizais;
Anne-Marie Forte;
Jeanpierre V. Guedon;
D. Corbard;
F. Calmel;
Franck Lavaire
Show Abstract
For the last two years, we have been developing a medical image processing system driven by a knowledge-based system, which was partially presented at the last SPIE Medical Imaging conference. In short, it consists of three modules: (1) an expert system (ES), which handles generic knowledge about image processing, image sources, and medicine, and specific knowledge for every developed application; consequently, it knows why, under which circumstances, and in which environment an image processing tool must be applied. (2) a relational database (rDB), on which the ES may perform requests to select image data for an application. (3) an image processing (IP) toolbox, which is able to run procedures according to the ES specifications on data pointed to by the rDB. In other words, the IP toolbox knows how to run a procedure but not why.
Introduction to Bayesian image analysis
Author(s):
Kenneth M. Hanson
Show Abstract
The basic concepts in the application of Bayesian methods to image analysis are introduced. The Bayesian approach has benefits in image analysis and interpretation because it permits the use of prior knowledge concerning the situation under study. The fundamental ideas are illustrated with a number of examples, ranging from problems in one and two dimensions to large problems in image reconstruction that make use of sophisticated prior information.
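The benefit of prior knowledge can be shown with the simplest Gaussian case (our illustration, not an example from the paper): with a Gaussian likelihood and a Gaussian prior, the posterior mode is a precision-weighted average of the measurement and the prior mean:

```python
def map_estimate(data, prior_mean, sigma_n, sigma_p):
    """Posterior mode for Gaussian likelihood N(data | x, sigma_n^2)
    and Gaussian prior N(x | prior_mean, sigma_p^2): a precision-
    weighted average of the measurement and the prior mean."""
    wn, wp = 1.0 / sigma_n ** 2, 1.0 / sigma_p ** 2
    return (wn * data + wp * prior_mean) / (wn + wp)

# equal confidence in data and prior -> halfway between them
print(map_estimate(10.0, 0.0, 1.0, 1.0))  # -> 5.0
```

As the prior becomes vague (sigma_p large), the estimate reverts to the data alone, which is the essential trade-off Bayesian image analysis exploits on a much larger scale.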
Medical image diagnoses by artificial neural networks with image correlation, wavelet transform, simulated annealing
Author(s):
Harold H. Szu
Show Abstract
Classical artificial neural networks (ANN) and neurocomputing are reviewed for implementing a real time medical image diagnosis. An algorithm known as the self-reference matched filter that emulates the spatio-temporal integration ability of the human visual system might be utilized for multi-frame processing of medical imaging data. A Cauchy machine, implementing a fast simulated annealing schedule, can determine the degree of abnormality by the degree of orthogonality between the patient imagery and the class of features of healthy persons. An automatic inspection process based on multiple modality image sequences is simulated by incorporating the following new developments: (1) 1-D space-filling Peano curves to preserve the 2-D neighborhood pixels' relationship; (2) fast simulated Cauchy annealing for the global optimization of self-feature extraction; and (3) a mini-max energy function for the intra-inter cluster-segregation respectively useful for top-down ANN designs.