Surface fitting approach to the correction of spatial intensity variations in MR images
Author(s):
Benoit M. Dawant;
Alex P. Zijdenbos;
Richard A. Margolin
Spatial intensity variations introduced by non-uniformity in the radio frequency (rf) coil of most MR scanners in clinical use preclude the selection of a single threshold for the separation of tissues across the entire image. They also increase the variance of the data, thus reducing the efficacy of numerical classifiers designed for the automatic segmentation of these images. In this paper, we present a new technique for the correction of spatial intensity variations. The method requires the labeling of a number of reference points across the image, to which intensity surfaces are fitted. These surfaces are then used to correct the original images. Results obtained with phantom and patient data demonstrate the robustness of the approach and its impact on the classification results obtained with automatic classifiers.
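As an illustration of the surface-fitting idea described above (not the authors' implementation), the following minimal Python sketch fits a low-order polynomial intensity surface to labeled reference points of a single tissue class and divides it out of the image; the polynomial order, the normalization to the mean reference level, and the function names are illustrative assumptions.

    import numpy as np

    def fit_intensity_surface(points, values, shape, order=2):
        """Least-squares fit of a 2-D polynomial surface to reference intensities.

        points : (N, 2) array of (row, col) reference locations
        values : (N,) intensities measured at those locations
        shape  : (rows, cols) of the image to correct
        """
        r, c = points[:, 0], points[:, 1]
        # Design matrix with all monomials r**i * c**j, i + j <= order.
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([r**i * c**j for i, j in terms])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
        rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
        return sum(k * rr**i * cc**j for k, (i, j) in zip(coeffs, terms))

    def correct_image(image, points, values, order=2):
        # Divide out the fitted shading surface and rescale to the mean
        # reference level (an assumed normalization, for illustration only).
        surface = fit_intensity_surface(points, values, image.shape, order)
        return image * (values.mean() / np.maximum(surface, 1e-6))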
Image reconstruction from nonuniform data and threshold crossings using Gram-Schmidt procedure
Author(s):
Yongwan Park;
Mehrdad Soumekh
This paper addresses the problem of reconstructing an image from its nonuniform data and threshold crossings. The problem of reconstructing a two-dimensional signal from its nonuniform data arises in certain medical imaging problems where either the measurement domain is nonuniform or the measured data are translated to nonuniform samples of the desired image. Reconstruction from threshold crossings has significance in reducing the size of the database (image compression) required to store medical images. In this paper, we introduce a deterministic processing via Gram-Schmidt orthogonalization to reconstruct images from their nonuniform data or threshold crossings. This is achieved by first introducing non-orthogonal basis functions in a chosen two-dimensional domain (e.g., for a band-limited signal, a possible choice is the two-dimensional Fourier domain of the image) that span the signal subspace of the nonuniform data. We then use the Gram-Schmidt procedure to construct a set of orthogonal basis functions that span the linear signal subspace defined by the above-mentioned non-orthogonal basis functions. Next, we project the N-dimensional measurement vector (N is the number of nonuniform data or threshold crossings) onto the newly constructed orthogonal basis functions. Finally, the image at any point can be reconstructed by projecting its corresponding basis function onto the projection of the measurement vector onto the orthogonal basis functions.
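The following minimal sketch illustrates the same idea in one dimension for a band-limited signal, which keeps the example short; the harmonic count, the use of QR factorization as a numerically stable stand-in for explicit Gram-Schmidt orthogonalization, and the test signal are assumptions, not the authors' code.

    import numpy as np

    def reconstruct_bandlimited(t_samples, y_samples, t_eval, n_harmonics=8, period=1.0):
        """Reconstruct a band-limited 1-D signal from nonuniform samples.

        Non-orthogonal basis: complex exponentials exp(2*pi*i*k*t/period)
        evaluated at the nonuniform sample times.  QR factorization plays the
        role of the Gram-Schmidt procedure described in the abstract.
        """
        k = np.arange(-n_harmonics, n_harmonics + 1)
        # Rows = nonuniform samples, columns = (non-orthogonal) basis functions.
        B = np.exp(2j * np.pi * np.outer(t_samples, k) / period)
        Q, R = np.linalg.qr(B)              # orthonormal basis of the signal subspace
        proj = Q.conj().T @ y_samples       # projection of the measurement vector
        coeffs = np.linalg.solve(R, proj)   # expansion in the original basis
        B_eval = np.exp(2j * np.pi * np.outer(t_eval, k) / period)
        return (B_eval @ coeffs).real

    # Example: recover a band-limited signal from jittered samples.
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0, 1, 64))
    y = np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)
    y_hat = reconstruct_bandlimited(t, y, np.linspace(0, 1, 200))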
Reconstruction from limited projection data
Author(s):
Oscar H. Kapp;
Chin-Tu Chen
An outstanding problem in reconstruction methodology is the treatment of incomplete data sets. The inexact reconstruction technique (IRT) allows a stochastic approach, carried out in real space, which provides substantial improvement in reconstruction accuracy when compared to the standard filtered backprojection algorithm. This technique relies on an iterative treatment of a probability-related matrix which is assumed to be proportional to the original density distribution (the original object matrix). This matrix is obtained by the summation or multiplication of the set of probability matrices generated by back-projecting the known projections after the usual reorientation to correct for the projection angle to which each corresponds. Employing a Boolean constraint, the largest value in the probability matrix is located and a `1' is placed at the same coordinates in a blank array. This point in the probability matrix is then set to zero, projections of the matrix are taken at the same angular orientations, and re-backprojection generates a new probability matrix. This process is repeated until the probability matrix is depleted or a specified mass is reached in the reconstructed object. The algorithm requires a considerable amount of computer time due to the necessity of recreating the probability matrix after each point is taken. A compromise solution is to address a certain fraction of the probability matrix at a time to reduce the number of iterations. It has been demonstrated that reasonable results can be obtained when the probability matrix is addressed in increments of 5% or less. In this paper we demonstrate the use of the IRT for the reconstruction of multiple grey-level images using limited data sets of four to sixteen projections.
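A loose sketch of the iterative point-depositing scheme is given below. It follows the description above only approximately: rather than re-projecting the probability matrix itself, it subtracts the projections of the points already placed from the measured data before re-backprojecting, and the rotation-based projector, the batch fraction, and all names are illustrative.

    import numpy as np
    from scipy.ndimage import rotate

    def project(image, angles_deg):
        """Parallel-beam projections: rotate, then sum along columns."""
        return np.array([rotate(image, a, reshape=False, order=1).sum(axis=0)
                         for a in angles_deg])

    def backproject(sinogram, angles_deg, shape):
        """Unfiltered backprojection: smear each projection and rotate back."""
        acc = np.zeros(shape)
        for row, a in zip(sinogram, angles_deg):
            acc += rotate(np.tile(row, (shape[0], 1)), -a, reshape=False, order=1)
        return acc

    def irt_reconstruct(sinogram, angles_deg, shape, n_points, fraction=0.05):
        """Deposit unit points at the current maxima of the probability matrix,
        addressing a fraction of the total mass per iteration (a simplification)."""
        recon = np.zeros(shape)
        prob = backproject(sinogram, angles_deg, shape)
        placed = 0
        while placed < n_points:
            batch = max(1, int(fraction * n_points))
            # Take the `batch` largest entries of the probability matrix.
            idx = np.unravel_index(np.argsort(prob, axis=None)[-batch:], shape)
            recon[idx] += 1.0
            placed += batch
            # Remove the contribution of the placed points, then rebuild the matrix.
            residual = sinogram - project(recon, angles_deg)
            prob = backproject(residual, angles_deg, shape)
        return recon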
MR susceptibility distortion quantification and correction for stereotaxy
Author(s):
Thilaka S. Sumanaweera;
Gary H. Glover;
Thomas O. Binford;
John R. Adler
We demonstrate a new in-vivo correction scheme for the non-linear, shape-dependent spatial distortion in MR images due to magnetic susceptibility variations. Geometric distortion at the air/tissue and tissue/bone boundaries before and after the correction is quantified using a phantom. Assuming CT is distortion-free, MR images of the phantom were compared to CT images. Edges of the images were detected. Using fiducials, the transformation from CT to MR was determined. CT edges were projected onto the MR planes, and corrected and uncorrected MR edges were compared with the CT edges. The magnetic susceptibility of cortical bone was measured using a Superconducting Quantum Interference Device (SQUID) magnetometer and found to be -8.86 ppm, quite close to that of tissue (-9 ppm). As expected, the distortion at the bone/tissue interface was negligible, while that at the air/tissue interface created displacements of about 2.0 mm with a 1.5 T main magnetic field and a 3.13 mT/m gradient field. This is significant if MR images are used to localize targets with the degree of accuracy expected for stereotaxic surgery. Our correction scheme diminished the errors to the same level of accuracy as CT.
Four-dimensional projection reconstruction imaging
Author(s):
William Brian Hyslop;
Ronald K. Woods;
Paul C. Lauterbur
A brief description of a four-dimensional (4-D) filtered backprojection image reconstruction algorithm developed for use in spectral-spatial magnetic resonance imaging (MRI) is presented. The algorithm uses three successive stages of 2-D filtered backprojection to reconstruct a 4-D image. This approach reduces computational time by a factor on the order of N^2 relative to the single-stage technique, where N^4 is the number of hypervoxels in the image. The design of the digital filters used to filter the projections is discussed. Images obtained from simulated data are presented to illustrate the accuracy and potential utility of the technique.
Carotid lesion characterization by synthetic aperture imaging techniques with multi-offset ultrasonic probes
Author(s):
Lorenzo Capineri;
Guido Castellini;
Leonardo F. Masotti;
Santina Rocchi
This paper explores the application of a high-resolution imaging technique to vascular ultrasound diagnosis, with emphasis on investigation of the carotid vessel. With present diagnostic systems, it is difficult to measure quantitatively the extent of lesions and to characterize the tissue; quantitative images require sufficient spatial resolution and dynamic range to reveal fine high-risk pathologies. A broadband synthetic aperture technique with multi-offset probes is developed to improve lesion characterization through the evaluation of local scattering parameters. This technique assumes weak scatterers embedded in a constant-velocity medium, a large aperture, and isotropic sources and receivers. The features of this technique are: axial and lateral spatial resolution on the order of the wavelength, high dynamic range, quantitative measurement of the size and scattering intensity of inhomogeneities, and the capability to investigate inclined layers. The performance under realistic conditions is evaluated with a software simulator in which different experimental situations can be reproduced. Images of simulated anatomic test-objects are presented. The images are obtained by an inversion process applied to the synthesized ultrasonic signals, collected over the linear aperture by a limited number of finite-size transducers.
Benchmark solution for ultrasonic imaging of tumors
Author(s):
Mark K. Hinders;
Ta-Ming Fang;
B. Rhodes;
J. Collins;
M. McNaughton Collins;
Guido V.H. Sandri
Ultrasonic imaging of tumors in the human body requires benchmark scattering solutions in order to characterize most accurately the size, location, and physical makeup of the tumor. In this paper, the scattering of ultrasound from spherical tumors in the human body is investigated theoretically. Both the tumor and the surrounding tissue are considered to be lossy elastic media, and the problem of the scattering of plane compressional elastic waves from an elastic sphere is solved analytically. With exact expressions for the fields scattered by the spherical tumor, the angular distribution of scattered energy is used to derive an ultrasonically measurable backscatter coefficient for the tumor. The analysis is general in that no restriction is placed on the size of the tumor, the range of tissue parameter values, or the frequency of the ultrasound waves. The behavior of the scattered energy, as well as the ultrasonic backscatter coefficient, is then investigated numerically for representative values of tumor and surrounding tissue material parameters. Useful phenomena for ultrasonic imaging of tumors are also discussed.
Practical real-time deconvolution and image enhancement of medical ultrasound
Author(s):
Nathan Cohen
A variety of effects inherent to the instrument and propagation medium degrade image quality in medical ultrasound. These effects include: sidelobes and clutter, blurring, and phase aberration. Dynamic focusing and dynamic apodization have reduced these problems at the expense of optimum resolution and cost effectiveness; smearing is still a part of ultrasonic images. We report an image processing approach in which the aforementioned effects and constraints were rectified through digital deconvolution image enhancement. The point spread function was mapped onto a position-invariant surface and deconvolution was done using a variant of CLEAN. A new method of artifact mitigation was devised called comparison masking. The results indicate increased resolution, by up to five times, on a GE RT3600 with B-mode. Sidelobe reduction is evident. The results may be obtained by using either a PC or a parallel processor in real-time. Deblurring may now be considered a viable option for real-time medical ultrasound and other echo-graphic modalities.
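The paper's deconvolution is a variant of CLEAN with comparison masking; the sketch below is only the textbook Hogbom-style CLEAN loop for orientation, with an odd-sized PSF, the loop gain, and a Gaussian restoring beam chosen as illustrative assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def clean_deconvolve(dirty, psf, gain=0.1, n_iter=500, threshold=0.0):
        """Basic Hogbom CLEAN: iteratively subtract scaled, shifted copies of
        the PSF (odd-sized, peak-normalized) from the dirty image."""
        residual = dirty.astype(float)
        components = np.zeros_like(residual)
        psf = psf / psf.max()
        pr, pc = psf.shape[0] // 2, psf.shape[1] // 2
        for _ in range(n_iter):
            r, c = np.unravel_index(np.argmax(residual), residual.shape)
            peak = residual[r, c]
            if peak <= threshold:
                break
            components[r, c] += gain * peak
            # Subtract the PSF centred on the peak (clipped at the image borders).
            r0, c0 = max(r - pr, 0), max(c - pc, 0)
            r1 = min(r + pr + 1, residual.shape[0])
            c1 = min(c + pc + 1, residual.shape[1])
            residual[r0:r1, c0:c1] -= gain * peak * psf[pr - (r - r0):pr + (r1 - r),
                                                        pc - (c - c0):pc + (c1 - c)]
        # Convolve components with a narrow restoring beam and add the residual back.
        return gaussian_filter(components, sigma=1.0) + residual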
Stochastic reconstruction of incomplete data sets using Gibbs priors in positron emission tomography
Author(s):
Chin-Tu Chen;
Oscar H. Kapp;
Wing H. Wong
Statistical methods for image reconstruction in positron emission tomography (PET) have been utilized with increasing frequency in recent years because of their potential for yielding improved image quality. Stochastic techniques such as the inexact reconstruction technique (IRT) have provided a fruitful approach to the problem of image reconstruction with only a limited number of projection views by applying an iterative approach, with certain constraints, to the treatment of backprojected probability matrices. We have combined the use of the IRT with a new Bayesian model developed recently in our laboratories which employs a Gibbs prior that incorporates prior information to describe the spatial correlation of neighboring regions and takes into account the effect of the limited spatial resolution as well. This model incorporates continuous values for `line sites' in order to avoid computational difficulties in the determination of a point estimate of the image. In addition, we use a square-root transformation of the Poisson intensity, allowing ready incorporation into the Gibbs formulation. The method of iterative conditional averages was used for computing the point estimates. A preliminary study showed promising results with the use of data from only 8 projection angles.
Comparative study of real-time deconvolution methods for medical imaging
Author(s):
Nathan Cohen;
Guido V.H. Sandri
We report an analysis of the relative time scales and accuracies attainable with three deconvolution methods which may be implemented in real-time deconvolution schemes. These methods are: iterative point deconvolution, the maximum entropy method, and the method of hyperdistributions. The first method is considerably faster, and its artifacts may be mitigated by comparison masking. However, although it replicates morphology well, it is not photometric -- it does not derive peak intensities with high accuracy. If these three methods are indicative of computational efficiency for deconvolution, it is likely that iterative point deconvolution will dominate cost-effective strategies for deblurring in real-time modalities.
Blind deconvolution of 2D and 3D fluorescent micrographs
Author(s):
Vijaykumar Krishnamurthi;
Yi-Hwa Liu;
Timothy J. Holmes;
Badrinath Roysam;
James N. Turner
This paper presents recent results of our reconstructions of 3-D data from Drosophila chromosomes as well as our simulations with a refined version of the algorithm used in the former. It is well known that the calibration of the point spread function (PSF) of a fluorescence microscope is a tedious process and involves esoteric techniques in most cases. This problem is further compounded in the case of confocal microscopy, where the measured intensities are usually low. A number of techniques have been developed to solve this problem, all of which are methods in blind deconvolution. These are so called because the measured PSF is not required in the deconvolution of degraded images from any optical system. Our own efforts in this area involved the maximum likelihood (ML) method, the numerical solution to which is obtained by the expectation maximization (EM) algorithm. Based on the reasonable early results obtained during our simulations with 2-D phantoms, we carried out experiments with real 3-D data. We found that the blind deconvolution method using the ML approach gave reasonable reconstructions. Next we tried to perform the reconstructions using some 2-D data, but we found that the results were not encouraging. We surmised that the poor reconstructions were primarily due to the large values of dark current in the input data. This, coupled with the fact that we are likely to have similar data with considerable dark current from a confocal microscope, prompted us to look into ways of constraining the solution of the PSF. We observed that in the 2-D case, the reconstructed PSF has a tendency to retain values larger than those of the theoretical PSF in regions away from the center (outside of what we considered to be its region of support). This observation motivated us to apply an upper bound constraint on the PSF in these regions. Furthermore, we constrain the solution of the PSF to be a bandlimited function, as is the case in the true situation. We have derived two separate approaches for implementing the constraint. One approach involves the mathematical rigors of Lagrange multipliers; this approach is discussed in another paper. The second approach involves an adaptation of the Gerchberg-Saxton algorithm, which ensures bandlimitedness and non-negativity of the PSF. Although the latter approach is mathematically less rigorous than the former, we currently favor it because it has a simpler implementation on a computer and has smaller memory requirements. The next section describes briefly the theory and derivation of these constraint equations using Lagrange multipliers.
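The band-limiting and upper-bound constraints on the PSF estimate can be pictured with the hedged sketch below, intended to be applied to the PSF between EM updates in a blind deconvolution loop; the cutoff fraction, circular support radius, and upper bound are illustrative, and this is an adaptation in the spirit of the Gerchberg-Saxton projection rather than the authors' exact procedure.

    import numpy as np

    def constrain_psf(psf, cutoff_frac=0.5, support_radius=8, upper_bound=1e-3):
        """Project a 2-D PSF estimate onto the set of band-limited, non-negative,
        unit-sum functions, capping values outside an assumed circular support."""
        ny, nx = psf.shape
        # 1. Band-limit: zero spatial frequencies beyond cutoff_frac of Nyquist.
        fy = np.fft.fftfreq(ny)[:, None]
        fx = np.fft.fftfreq(nx)[None, :]
        F = np.fft.fft2(psf)
        F[np.hypot(fx, fy) > cutoff_frac * 0.5] = 0.0
        psf = np.real(np.fft.ifft2(F))
        # 2. Non-negativity.
        psf = np.clip(psf, 0.0, None)
        # 3. Cap values outside the assumed support region (away from the centre).
        yy, xx = np.mgrid[0:ny, 0:nx]
        outside = np.hypot(xx - nx // 2, yy - ny // 2) > support_radius
        psf[outside] = np.minimum(psf[outside], upper_bound)
        # 4. Renormalize to unit sum so the PSF conserves flux.
        return psf / psf.sum()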
Algorithms for 3D brightfield microscopy
Author(s):
Byron H. Willis;
Badrinath Roysam;
James N. Turner;
Timothy J. Holmes
We have developed image reconstruction algorithms for generating 3-D renderings of biological specimens from brightfield micrographs. The algorithm presented here is founded on maximum likelihood estimation theory, where steepest ascent and conjugate gradient techniques are used to optimize the solution to the multidimensional equation. The estimation problem posed is that of reconstructing the optical density, or linear attenuation coefficients, similar to computed tomography, under the simplifying assumption of geometric optics. We assume that white Gaussian noise corrupts the signal, yielding a Gaussian-distributed signal according to the model of the system impulse response. One of the challenges of the algorithms presented here is in restoring the values within the missing cone region of the system optical transfer function. The algorithm and programming are straightforward and incorporate standard Fourier techniques. The theoretical development of the algorithms is outlined. Simulations of reconstructions using this technique are currently being performed.
Application of 3-D digital deconvolution to optically sectioned images for improving the automatic analysis of fluorescent-labeled tumor specimens
Author(s):
Stephen J. Lockett;
Kenneth A. Jacobson;
Brian Herman
The analysis of fluorescent stained clusters of cells has been improved by recording multiple images of the same microscopic scene at different focal planes and then applying a three-dimensional (3-D) out-of-focus background subtraction algorithm. The algorithm significantly reduced the out-of-focus signal and improved the spatial resolution. The method was tested on specimens of 10 micrometer diameter beads embedded in agarose and on a 5 micrometer breast tumor section labeled with a fluorescent DNA stain. The images were analyzed using an algorithm for automatically detecting fluorescent objects. The proportion of correctly detected in-focus beads and breast nuclei increased from 1/8 to 8/8 and from 56/104 to 81/104, respectively, after processing by the subtraction algorithm. Furthermore, the subtraction algorithm reduced the proportion of out-of-focus relative to in-focus total intensity detected in the bead images from 51% to 33%. Further developments of these techniques, which utilize the 3-D point spread function (PSF) of the imaging system and a 3-D segmentation algorithm, should result in the correct detection and precise quantification of virtually all cells in solid tumor specimens. Thus the approach should serve as a highly reliable automated screening method for a wide variety of clinical specimens.
Three-dimensional image formation in confocal fluorescence microscopy
Author(s):
Min Gu;
Colin J. R. Sheppard
For a confocal fluorescence microscope with annular pupils, the three-dimensional (3-D) image formation has been analyzed in terms of the three-dimensional optical transfer function (OTF). Based on the 3-D OTF, we have calculated the optical sectioning strength by considering the axial response. In addition, the effects of the size of the central obstruction of the lens and the detector have been investigated. In order to avoid the negative tail and the missing cone in the OTF, we have introduced optical fibers into the system. In this fiber-optical confocal scanning microscope, the illumination is from a fiber tip and the signal from the scan point is collected by another fiber and delivered to the detector. The optimum relationship of the central obstruction of the objective to the fiber spot size is presented.
Three-dimensional microstructure of fiber-reinforced composites
Author(s):
Geoff Archenhold;
Ashley R. Clarke;
Nic Davidson
The orientation of glass and carbon fibers in polymer composites has been investigated using both physical and optical sectioning. We have successfully automated the analysis of large area sections containing thousands of fibers where there is good contrast between fibers and polymer matrix. Recently, we have investigated the application of the laser scanning confocal microscope for the determination of 3-D orientations of glass fibers in polyoxymethylene. Initial data are presented together with a discussion on the preprocessing of the subsurface image fields necessary for automating the confocal process.
Sampling theorem for square-pixel image data
Author(s):
Frederick Lanni;
George J. Baxter
High-efficiency CCD array cameras now in use in microscopy have essentially 100% active surface area, so that image pixel values are contiguous-square integrals rather than point samples of the image field. An invertible linear transformation is derived for recovery of the point-sampled image field, the values of which would then be used in further processing.
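One way to picture such a recovery (not necessarily the transformation derived in the paper) is a regularized inverse filter for the unit-square pixel aperture, since box integration contributes a separable sinc factor in the frequency domain; the sketch below makes that assumption explicit.

    import numpy as np

    def deaperture(image, eps=1e-3):
        """Approximately invert the square-pixel (box) aperture integration.

        Each pixel is modelled as the integral of the underlying field over a
        unit square, i.e. a convolution with a box followed by sampling.  In
        the frequency domain the box contributes a separable sinc factor,
        which is divided out where it is not too small.
        """
        ny, nx = image.shape
        fy = np.fft.fftfreq(ny)[:, None]   # cycles per pixel, |f| <= 0.5
        fx = np.fft.fftfreq(nx)[None, :]
        # Transfer function of a unit-square pixel aperture.
        H = np.sinc(fy) * np.sinc(fx)
        F = np.fft.fft2(image)
        # Regularized inverse filter: leave near-zero components untouched.
        H_inv = np.where(np.abs(H) > eps, 1.0 / H, 1.0)
        return np.real(np.fft.ifft2(F * H_inv))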
Improvements of imaging quality from spread-light analysis in the detector plane of confocal microscopes
Author(s):
Fred N. Reinholz;
Wolfgang Schuett;
Ullrich Waldschlaeger;
Gerald Gruemmer
The light intensity distribution in the detection planes of laser scanning microscopes contains more information about the object than is obtained from the detection of the intensity on the optical axis with ordinary confocal microscopes. We look for simple and fast algorithms that use this additional information to improve confocal images, e.g., for more precise measurement of edge positions. Characteristic properties of the intensity distribution curves, such as the positions and heights of side lobes, are obtained, and their values are used in mathematical or logical operations. Quantities resulting from these procedures can show more sensitive behavior than the usual confocal signal. In this way these quantities are suitable to act as additional parameters in the processing of confocal images. The example of measuring the position of a slit is explained in detail. Some experimental aspects are discussed and an outlook on the effect of defocus is given.
Characterization of imaging from a three-dimensional optical microscope
Author(s):
David F. Abbott;
Keith A. Nugent
Characterization of a three-dimensional optical microscope is essential before any image processing can be performed to deblur optical section images. The conventional approach has been to measure the entire 3-D point spread function (PSF) of the system. In this paper, we discuss a new approach which requires only three 2-D defocused PSFs. With these measurements, it is found to be possible to determine the spherical aberration in a system. Once the system is characterized, 3-D image processing can proceed using the 3-D point spread function. A sampling theorem has been proposed, and is reviewed here, which provides conditions for the validity of linear image processing. We discuss and present some of the ramifications of this result on simulated images using measured point spread functions.
Compensating for depth-dependent light attenuation at 3-D imaging with a confocal microscope
Author(s):
Nils R.D. Aslund;
Anders Liljeborg
An account has been given previously of a 3-D image processing method to compensate for depth-dependent light attenuation in images from a confocal microscope working in the epifluorescence mode. A basic assumption is that there are regions of the specimen that are homogeneous in the sense that the attenuation is constant within the region. It is shown that a stack of 2-D histograms of adjacent images usually shows distinguishable features through which homogeneous regions of this kind can be traced. It is also shown how the attenuation factor of the region is obtained. Its inverse is the correction factor applicable to the region. The region may be extracted from the stack to be dealt with separately. The method has been further developed by introducing a dynamic display procedure, which makes both the search and the extraction more efficient. Further, techniques have been implemented to perform the compensation automatically by changing the PM tube voltage during the recording under computer control. In the present paper an account is given of these improvements. A review is also given of some fundamentals of the method and of an application of the method.
Point-spread sensitivity analysis for 3-D fluorescence microscopy
Author(s):
Chrysanthe Preza;
John M. Ollinger;
James G. McNally;
Lewis J. Thomas Jr.
Previous empirical results suggest that the use of an experimentally determined point-spread function (PSF) instead of a theoretical one improves reconstructions of three-dimensional (3-D) microscopic objects from optical sections. The microscope's PSF is usually measured by imaging a small fluorescent bead. There is a tradeoff in this measurement: very small beads are dim and bleach rapidly, while larger beads are a poorer approximation to a point source. We have simulated the effect of the bead on the shape of the PSF by convolving a theoretically determined PSF (of a 40 X 1.0 N.A. oil-immersion lens) with spheres of varying diameters. Simulated data were generated with a 3-D phantom and the theoretical PSF, which is defined to be the `true' PSF for the simulation. Reconstructions of the phantom were obtained with each of the PSFs derived from the simulated beads using a regularized linear least-squares method. Results show a significant drop (more than 50%) in the signal-to-noise ratio of the reconstructions for beads with diameter larger than 0.22 micrometers. These results suggest that the bead used in the PSF measurement should have a diameter less than 30% of the diameter of the first dark ring of the in-focus two-dimensional (2-D) PSF. This study quantifies the tradeoff between the quality of the reconstructions and the bead size used in the PSF measurement.
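The bead-blurring simulation can be reproduced in outline as below; the voxel size, the binary-sphere bead model, and the function names are illustrative assumptions rather than the authors' code.

    import numpy as np
    from scipy.signal import fftconvolve

    def sphere_kernel(diameter_um, voxel_um):
        """Binary 3-D sphere of the given diameter, normalized to unit sum."""
        r_vox = max(1, int(round(diameter_um / (2 * voxel_um))))
        z, y, x = np.mgrid[-r_vox:r_vox + 1, -r_vox:r_vox + 1, -r_vox:r_vox + 1]
        ball = (x**2 + y**2 + z**2 <= r_vox**2).astype(float)
        return ball / ball.sum()

    def simulate_bead_psf(theoretical_psf, diameter_um, voxel_um=0.05):
        """Blur a theoretical 3-D PSF with a bead of finite diameter, mimicking
        the PSF-measurement tradeoff studied in the paper."""
        return fftconvolve(theoretical_psf,
                           sphere_kernel(diameter_um, voxel_um), mode='same')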
Experiments with an operator model of a confocal microscope
Author(s):
Olle Seger;
Reiner Lenz
This paper gives an overview of operator-based models that seem especially interesting for laser scanning microscopy. These methods were developed by Nazarathy et al. in a series of papers about 10 years ago, in which they demonstrated that both the operators and Gaussian beams can be represented by matrices. We implemented the operator algebra in Mathematica and show by some simple examples how to analyze paraxial systems. A number of empirical experiments have also been performed to verify the validity of the model.
Signal strength and its effects on processing of confocal images
Author(s):
Colin J. R. Sheppard;
Min Gu;
Maitreyee Roy
The effects of detector size and shape on the imaging properties of confocal microscopes are discussed. The presence of stray light in the optical system, shot noise on the beam, and detector performance limit the available signal-to-noise ratio. The effects on the noise performance of the extended-focus (mean) and autofocus (peak) algorithms for forming image projections are presented. Consideration of these various parameters allows microscope users to obtain the best performance from their instrument for particular applications.
Unsupervised noise removal algorithms for 3-D confocal fluorescence microscopy
Author(s):
Badrinath Roysam;
Anoop K. Bhattacharjya;
Chukka Srinivas;
Donald H. Szarowski;
James N. Turner
Fast algorithms are presented for effective removal of the noise artifact in 3-D confocal fluorescence microscopy images of extended spatial objects such as neurons. The algorithms are unsupervised in the sense that they automatically estimate and adapt to the spatially and temporally varying noise level in the microscopy data. An important feature of the algorithms is the fact that a 3-D segmentation of the field emerges jointly with the intensity estimate. The role of the segmentation is to limit any smoothing to the interiors of regions and hence avoid the blurring that is associated with conventional noise removal algorithms. Fast computation is achieved by parallel computation methods, rather than by algorithmic or modelling compromises. The noise-removal proceeds iteratively, starting from a set of approximate user-supplied, or default initial guesses of the underlying random process parameters. An expectation maximization algorithm is used to obtain a more precise characterization of these parameters, that are then input to a hierarchical estimation algorithm. This algorithm computes a joint solution of the related problems corresponding to intensity estimation, segmentation, and boundary-surface estimation subject to a combination of stochastic priors and syntactic pattern constraints. Three-dimensional stereoscopic renderings of processed 3-D images of murine hippocampal neurons are presented to demonstrate the effectiveness of the method. The processed images exhibit increased contrast and significant smoothing and reduction of the background intensity while avoiding any blurring of the neuronal structures.
Motion-compensated enhancement of medical image sequences
Author(s):
Ajit Singh
We describe a recursive technique to perform motion-compensated enhancement of image sequences. This technique incorporates an explicit noise model as well as an optic-flow based model for the temporal evolution of image intensity. Based on these models, it computes the optimal estimate of the instantaneous image intensity in an incremental fashion -- the estimate improves over time. Furthermore, the technique does not blur moving regions in the imagery. We have applied it to enhance a wide variety of medical image sequences used in fluoroscopy, cine-angiography, etc. In these x-ray based procedures, our technique offers a twofold promise of enhancing image quality while maintaining the current radiation dosage, and reducing radiation dosage while maintaining the current image quality. The computational framework of our technique comprises (1) an estimation-theoretic technique to recover the instantaneous optic-flow field without blurring its discontinuities, (2) a warping mechanism that eliminates the interframe motion between two successive images, and (3) a Kalman filter that performs temporal filtering to improve the image quality in an incremental fashion.
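A per-pixel sketch of the predict-warp-update cycle is given below; flow estimation itself is omitted, and the process and measurement variances, the backward-warping convention, and the scalar (per-pixel) Kalman gain are illustrative assumptions rather than the paper's formulation.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def warp(image, flow):
        """Warp the previous estimate along the optic flow (backward mapping)."""
        h, w = image.shape
        yy, xx = np.mgrid[0:h, 0:w].astype(float)
        coords = [yy - flow[..., 1], xx - flow[..., 0]]
        return map_coordinates(image, coords, order=1, mode='nearest')

    def temporal_kalman_step(prev_est, prev_var, frame, flow,
                             process_var=1.0, meas_var=25.0):
        """One recursive update of the motion-compensated intensity estimate."""
        pred = warp(prev_est, flow)                 # motion-compensated prediction
        pred_var = warp(prev_var, flow) + process_var
        gain = pred_var / (pred_var + meas_var)     # per-pixel Kalman gain
        new_est = pred + gain * (frame - pred)
        new_var = (1.0 - gain) * pred_var
        return new_est, new_var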
Left-ventricle motion modeling and analysis by adaptive-size physically-based models
Author(s):
Wen-Chen Huang;
Dmitry B. Goldgof
This paper presents a new physically based modeling method which employs adaptive-size meshes to model left ventricle (LV) shape and track its motion during the cardiac cycle. The mesh size increases or decreases dynamically during the surface reconstruction process to locate nodes near surface areas of interest and to minimize the fitting error. Further, when presented with multiple 3-D data frames, the mesh size varies as the LV undergoes nonrigid motion. Simulation results illustrate the performance and accuracy of the proposed algorithm. The algorithm is then applied to volumetric temporal cardiac data. The LV data was acquired by a 3-D computed tomography scanner; it was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 by 128 by 118) images taken through the heart cycle.
Left-ventricle wall motion tracking using curvature properties
Author(s):
Kambhamettu Chandra;
Dmitry B. Goldgof
This paper presents the complete implementation of a new algorithm for tracking points on the left ventricle (LV) surface from volumetric cardiac images. We define the local surface stretching as an additional motion parameter of the nonrigid transformation. Stretching is constant at all points on the surface for homothetic motion, or follows a polynomial function of a certain order (linear in our implementation) for conformal motion. The wall deformation and correspondence information between successive frames of the LV in a heart cycle are important for evaluating heart behavior and improving diagnosis. We utilize the small-motion assumption between consecutive frames, hypothesize all possible correspondences, and compute curvature changes for each hypothesis. The computed curvature change is then compared with the one predicted by the conformal motion assumption for hypothesis evaluation. We demonstrate the improved performance of the new algorithm utilizing conformal motion with the linear stretching assumption over the constant stretching assumption on simulated data. The algorithm is then applied to real cardiac (CT) images and the stretching of the LV wall is determined. The data set used in our experiments was provided by Dr. Eric Hoffman at the University of Pennsylvania Medical School and consists of 16 volumetric (128 by 128 by 118) images taken through the heart cycle.
Human motion analysis with detection of sub-part deformations
Author(s):
Juhui Wang;
Guy Lorette;
Patrick Bouthemy
One essential constraint used in 3-D motion estimation from optical projections is the rigidity assumption. Because of muscle deformations in human motion, this rigidity requirement is often violated for some regions of the human body, and global methods usually fail to produce stable solutions. This paper presents a model-based approach to combating the effect of muscle deformations in human motion analysis. The approach is based on two main stages. In the first stage, the human body is partitioned into different areas, where each area is consistent with a general motion model (not necessarily corresponding to a physically existing motion pattern). In the second stage, regions are eliminated under the hypothesis that they are not induced by a specific human motion pattern; each hypothesis is generated by making use of specific knowledge about human motion. A global method is then used to estimate the 3-D motion parameters on the basis of the valid segments. Experiments based on a cycling motion sequence are presented.
Sinogram resolution recovery using information gained through detector motion
Author(s):
Miles N. Wernick;
Chin-Tu Chen
Positron emission tomography (PET), as a biomedical imaging modality, is unique in its ability to provide quantitative information regarding biological function in a living subject. Unfortunately, its use has been hampered by the poor spatial resolution of the images produced, resulting primarily from the relatively large detectors used to acquire the tomographic measurements. In this paper, we show that by applying signal recovery to the data obtained by moving the detection system during the course of the measurement process, dramatic improvement in image quality can be obtained when detector size is, indeed, the factor limiting spatial resolution. The method of projections onto convex sets is used to recover (deblur) the sinogram, from which the image is reconstructed by conventional filtered backprojection. By making use of filtered backprojection in the reconstruction step, the computational burden commonly associated with iterative signal recovery is avoided; the proposed method adds only a few seconds to the total processing time. Simulation results demonstrate that the method is robust to misspecification of the point spread functions of the detection system as well as to the high levels of quantum noise inherent in PET.
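A simplified sketch of the sinogram recovery step is shown below: each projection is deblurred by alternating a data-consistency projection with a non-negativity projection, and the result is passed to a conventional filtered backprojection (scikit-image's iradon is used here purely as an illustrative FBP). The detector blur model, frequency cutoff, and iteration count are assumptions.

    import numpy as np
    from skimage.transform import iradon

    def pocs_deblur_row(measured, detector_psf, n_iter=50):
        """Deblur one projection by alternating projections onto two convex sets:
        (1) consistency with the measured (detector-blurred) data in the Fourier
        domain and (2) non-negativity.  detector_psf is assumed to be given with
        its peak at index 0 (wrap-around convention)."""
        H = np.fft.fft(detector_psf, n=measured.size)
        Y = np.fft.fft(measured)
        x = measured.astype(float)
        for _ in range(n_iter):
            X = np.fft.fft(x)
            usable = np.abs(H) > 0.05           # frequencies where the blur is invertible
            X[usable] = Y[usable] / H[usable]   # data-consistency projection
            x = np.real(np.fft.ifft(X))
            x = np.clip(x, 0.0, None)           # non-negativity projection
        return x

    def recover_and_reconstruct(sinogram, detector_psf, angles_deg):
        # Sinogram columns are projections; deblur each, then apply standard FBP.
        deblurred = np.apply_along_axis(pocs_deblur_row, 0, sinogram, detector_psf)
        return iradon(deblurred, theta=angles_deg)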
Motion tracking of the left-ventricular surface
Author(s):
Amir A. Amini;
Peng-Cheng Shi;
James S. Duncan
A framework for tracking the motion of surfaces, with specific application to the left-ventricular endocardial wall, is outlined. We use an elastic model of the object with constraints on the types of motion for tracking the movement. Conformal stretching is measured from a system of coordinates based on the principal directions of a surface patch, and in addition, an energy for bending of a 3-D surface is defined. Once initial match vectors are obtained from the bending and stretching model, membrane smoothing with confidences optimizes the flow estimates. To this end, a linear vector equation in terms of the components of the flow vectors is derived which must be satisfied at all nodes of a finite element grid. The coupled system of equations is solved with relaxation techniques. Applications of the algorithms to real and simulated data are given at the end.
Image registration of multimodality 3D medical images by chamfer matching
Author(s):
Hongjian Jiang;
Kerrie S. Holton;
Richard A. Robb
Images from computed tomography (CT), magnetic resonance (MR) imaging, positron emission tomography (PET), single photon emission computed tomography (SPECT), etc., provide complementary characteristic and diagnostic information. A parametric chamfer matching method is used for fast and accurate registration of images from different medical imaging modalities. Surfaces are initially extracted from the two images to be matched using semi-automatic segmentation software, and these surfaces are then used as the common features to be matched. A distance transformation is performed for one surface image, and an error function is defined using the distance image to evaluate the matching error. The geometric transformation includes three-dimensional translation, rotation, and scaling parameters to accommodate images of different position, orientation, and size. The matching process involves searching the multi-parameter space to find the fit which minimizes the error function. The local minima problem is addressed by using a large number of starting points. A pyramid multiresolution approach is employed to speed up both the distance transformation and the multi-parameter minimization processes. Robustness in handling noise is enhanced by a multiple-threshold approach embedded in the multiresolution process. Human intervention is not necessary.
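The core chamfer cost can be sketched as below: distance-transform the reference surface once, then score any candidate transformation by the mean distance sampled at the transformed points of the other surface. The seven-parameter rigid-plus-scale parameterization and the Powell optimizer (in place of the paper's multi-start, multiresolution search) are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import distance_transform_edt, map_coordinates
    from scipy.spatial.transform import Rotation
    from scipy.optimize import minimize

    def chamfer_cost(params, dist_map, points):
        """Mean distance from transformed surface points to the reference surface.

        params = (tz, ty, tx, rz, ry, rx, s): translation, Euler angles (deg), scale.
        dist_map is the Euclidean distance transform of the reference surface;
        points is an (N, 3) array of surface voxel coordinates from the other image.
        """
        t, angles, s = params[:3], params[3:6], params[6]
        R = Rotation.from_euler('zyx', angles, degrees=True).as_matrix()
        moved = s * points @ R.T + t
        d = map_coordinates(dist_map, moved.T, order=1, mode='nearest')
        return d.mean()

    def register(reference_surface_mask, moving_points, x0=None):
        """Minimize the chamfer cost over translation, rotation, and scale."""
        # Distance to the nearest reference-surface voxel at every voxel.
        dist_map = distance_transform_edt(~reference_surface_mask)
        if x0 is None:
            x0 = np.array([0, 0, 0, 0, 0, 0, 1.0])
        res = minimize(chamfer_cost, x0, args=(dist_map, moving_points),
                       method='Powell')
        return res.x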
Data structures for multimodality imaging: concepts and implementation
Author(s):
Oscar Emanuel Ch Mealha;
Antonio Sousa Pereira;
Maria Beatriz Sousa Santos
The integration of data coming from different imaging modalities is something to take into account, given the importance it can have in the development of a fast and reliable diagnosis by the health staff. In the medical imaging field, computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), and single photon emission computed tomography (SPECT) are examples of devices that generate 3-D data. Digital subtraction angiography (DSA) or ultrasound (US) output 2-D data, from which it is possible to reconstruct 3-D data. An important fact is that 3-D space is common to all these devices and they are all capable of producing large amounts of data. Prior to display or even data integration, matching the various 3-D spaces has to be achieved with some specific technique, according to the anatomical region under examination. The augmented octree, an extension of the linear octree, is used for data integration; its properties can help to overcome some of the constraints that occur in medical imaging. To be fully accepted by the specialist, the display and manipulation of multimodality data must be interactive and done in real-time, or at least in `nearly' real-time. Parallel architectures seem to be a solution for some computation-intensive applications, and so an implementation of the linear octree encoding process was developed on a 16-Transputer machine.
Fiducial point localization in multisensor 3D surface scanning
Author(s):
Gulab H. Bhatia;
Arjun Godhwani;
Michael W. Vannier M.D.
The estimation of fiducial point locations on surfaces is important in many close-range photogrammetric applications, including biostereometrics, non-contact stress analysis, industrial metrology, and others. We have developed various methods for estimation of surface fiducial points based on optically sensed range maps obtained with a multisensor 3-D scanner originally designed for portrait sculpture. We previously described an algorithm based on a Kalman filter, a recursive spatially variant optimal estimator; the results demonstrated that accurate localization of surface landmarks can be readily achieved. Two new methods for estimation of fiducial point locations on surfaces were devised for the 3-D scanner. This high-speed non-contact 3-D scanner uses 6 CID cameras and 6 pattern projectors. Each projector incorporates coded bar patterns which are projected onto the object to be scanned. These patterns are captured in the camera image, mensurated, and tagged to identify the corresponding projector profile for each profile observed in the camera. The set of points belonging to a profile on the image plane are entered into a 2-D to 3-D solution, an analytical procedure where each image point (2-D point) is mapped to a point in space (3-D point). Three-dimensional surface points from all views are resampled onto a global cylindrical grid. Both range and texture maps are computed and stored. The range and texture information from all views is averaged and displayed on a 3-D graphics workstation (Silicon Graphics 4-D/340). Fiducial point localization is achieved via the two new methods, both employing variations of Kalman filtering.
Biological object location on light microscopy images
Author(s):
Mylene Roussel;
D. Fontaine;
Xiaowei Tu
In this article, we present a sequence of processing steps used to locate objects present in light microscopy images. An application is developed to identify toxic algae, useful in water quality management. The proposed methodology, applicable to all biological images, is to be considered as a preprocessing step for future quantitative studies. The work is composed of different steps, each of them allowing the extraction of useful information from the images. Thus, from a raw image we can locate the objects of interest.
Use of global information and a priori knowledge for segmentation of objects: algorithms and applications
Author(s):
Guenter Wolf
The segmentation of well-contrasted objects poses no problem, and the success of several algorithms has been proved in applications. But if the objects are poorly contrasted, it is difficult to find a threshold which leads to a correct object segmentation and, in many cases (e.g., touching or overlapping objects), a threshold for the correct segmentation of the image into isolated object regions does not exist. Some methods are presented which can help to overcome these problems. Global information and a priori knowledge are used for the selection of an optimum segmentation threshold (a threshold is selected independently for each object). An algorithm for the separation of conglomerates of convex objects is presented based on contour information (information about the shape of the objects). The main characteristics of this algorithm are: construction of a recursive convexity polygon, determination of fuzzy features for the description of possible parts of the conglomerate, and dynamic programming. Several applications demonstrate the use of further information about shape, grey value distribution, and topology.
Selecting a threshold for two-dimensional echocardiograms
Author(s):
Shriram V. Revankar;
David B. Sher
Thresholding has been extensively used to separate myocardium from two-dimensional echocardiograms. We present a critical review of the existing threshold-based methods, and propose a new interactive method that uses the known average wall thickness at an indicated region to determine a reference global threshold. The thickness of the wall as seen in the thresholded image is important for the quantification of cardiac parameters such as ventricular volume, ejection fraction, etc. In echocardiograms, owing to the characteristics of the imaging environment, the apparent cardiac wall thickness depends on the threshold. Many existing methods concentrate on extracting continuous wall regions. In our scheme we select a threshold that yields walls of proper thickness, and then attempt to obtain a continuous region. A user picks two points on a clearly visible section of the wall where the thickness is known. We compute a threshold by analyzing the regional histogram at that wall section, so that the average thickness of the regional thresholded pattern is equal to the known wall thickness. This gives a reference threshold that is varied locally by regional three-dimensional morphology to yield local thresholds. The thresholding scheme suppresses noise and generates smooth boundaries.
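The reference-threshold idea can be sketched as follows, with the simplification that wall thickness is measured along a single intensity profile across the user-indicated section; the candidate set and the pixel-size argument are illustrative assumptions, not the authors' histogram-based computation.

    import numpy as np

    def thickness_at_threshold(profile, threshold, pixel_mm):
        """Thickness (mm) of the above-threshold run in an intensity profile
        sampled across the wall section picked by the user."""
        above = profile >= threshold
        return above.sum() * pixel_mm

    def reference_threshold(profile, known_thickness_mm, pixel_mm):
        """Pick the threshold whose thresholded wall thickness best matches the
        known average wall thickness at the user-indicated section."""
        candidates = np.unique(profile)
        errors = [abs(thickness_at_threshold(profile, t, pixel_mm) - known_thickness_mm)
                  for t in candidates]
        return candidates[int(np.argmin(errors))]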
Learning edge-defining thresholds for local binary segmentation
Author(s):
Leiguang Gong;
Casimir A. Kulikowski;
Reuben S. Mezrich M.D.
Selecting a globally effective threshold to define edges for local binary thresholding and segmentation of images presents major problems given the significant variability in intensity and edge statistics from image to image and study to study. Previously reported results of applying binary local thresholding have depended on the careful empirical choice of a threshold range adapted to a particular class of images. We have developed two new systematic methods that learn the edge-defining threshold from the gradient image generated by applying a gradient operator. The first method minimizes a criterion function, and the second takes advantage of local constancy properties of the intensity threshold as a function of a selected edge-defining threshold. An edge-defining threshold is then obtained for each sub-image, and a global threshold derived from them for the whole image. Experiments with MR images from phantoms and various human and animal studies have shown the effectiveness of this approach.
Identification of cortex in magnetic resonance images
Author(s):
John W. VanMeter;
Peter A. Sandon
The overall goal of the work described here is to make available to the neurosurgeon in the operating room an on-line, three-dimensional, anatomically labeled model of the patient's brain, based on pre-operative magnetic resonance (MR) images. A stereotactic operating microscope is currently in experimental use, which allows structures that have been manually identified in MR images to be made available on-line. We have been working to enhance this system by combining image processing techniques applied to the MR data with an anatomically labeled 3-D brain model developed from the Talairach and Tournoux atlas. Here we describe the process of identifying cerebral cortex in the patient's MR images. MR images of brain tissue are reasonably well described by material mixture models, which identify each pixel as corresponding to one of a small number of materials, or as being a composite of two materials. Our classification algorithm consists of three steps. First, we apply hierarchical, adaptive grayscale adjustments to correct for nonlinearities in the MR sensor. The goal of this preprocessing step, based on the material mixture model, is to make the grayscale distribution of each tissue type constant across the entire image. Next, we perform an initial classification of all tissue types according to gray level, using a sum-of-Gaussians approximation of the histogram. Finally, we identify pixels corresponding to cortex by taking into account the spatial patterns characteristic of this tissue. For this purpose, we use a set of matched filters to identify image locations having the appropriate configuration of gray matter (cortex), cerebrospinal fluid, and white matter, as determined by the previous classification step.
Three-dimensional adaptive split-and-merge method for medical image segmentation
Author(s):
Jin-Shin Chou;
Chin-Tu Chen;
Shiuh-Yung James Chen;
Wei-Chung Lin
We have developed a three-dimensional image segmentation algorithm using an adaptive split-and-merge method. The framework of this method is based on a two-dimensional (2-D) split-and-merge scheme and region homogeneity analysis. A hierarchical octree is used as the basic data structure throughout the analysis, analogous to the quadtree in the 2-D case. A localized feature analysis and statistical tests are employed in the testing of region homogeneity. In the feature analysis, the standard deviation, gray-level contrast, likelihood ratio, and their corresponding co-occurrence matrices are computed. Histograms of the near-diagonal elements of the co-occurrence matrix are calculated, and an optimal thresholding method is then applied to determine the desired threshold values. These values are then used as constraints in the tests, so that the decision to split or merge can be made.
Computer detection of stellate lesions in mammograms
Author(s):
W. Philip Kegelmeyer Jr.
The three primary signs for which radiologists search when screening mammograms for breast cancer are stellate lesions, microcalcifications, and circumscribed lesions. Stellate lesions are of particular importance, as they are almost always associated with a malignancy. Further, they are often indicated only by subtle architectural distortions and so are in general easier to miss than the other signs. We have developed a method for the automatic detection of stellate lesions in digitized mammograms, and have tested it on image data where the presence or absence of malignancies is known. We extract image features from the known images, use them to grow binary decision trees, and use those trees to label each pixel of new mammograms with its probability of being located on an abnormality. The primary feature for the detection of stellate lesions is ALOE, analysis of local oriented edges, which is derived from an analysis of the histogram of edge orientations in local windows. Other features, based on the Laws texture energy measures, have been developed to respond to normal tissue, and so improve the false alarm performance of the entire system.
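An ALOE-style feature map can be sketched as below: the standard deviation of the local edge-orientation histogram, which drops where edges radiate in many directions as around a spiculated lesion; the window size, bin count, and unweighted (magnitude-free) histogram are illustrative assumptions rather than the paper's exact feature.

    import numpy as np
    from scipy.ndimage import sobel, uniform_filter

    def aloe_map(image, window=31, n_bins=16):
        """Standard deviation of the local edge-orientation histogram.

        Low values indicate edges spread over many orientations, as around the
        radiating spicules of a stellate lesion.
        """
        gy, gx = sobel(image, axis=0), sobel(image, axis=1)
        theta = np.arctan2(gy, gx) % np.pi          # orientation in [0, pi)
        bins = np.minimum((theta / np.pi * n_bins).astype(int), n_bins - 1)
        # Local histogram: per-bin occupancy fraction within the sliding window.
        hist = np.stack([uniform_filter((bins == b).astype(float), size=window)
                         for b in range(n_bins)])
        return hist.std(axis=0)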
Improved no-moving-parts video-rate confocal microscope
Author(s):
Seth R. Goldstein;
Thomas Hubin;
Thomas G. Smith Jr.
Several years ago our research program developed a video-rate confocal microscope with no moving parts, based on synchronizing and aligning the scan of an image dissector tube (IDT) with the return light resulting from an acousto-optically scanned laser beam. Improvements on the original system have recently been completed. The laser scan is now brought into the Nikon Diaphot inverted microscope through the epi-illumination port and the laser power has been substantially increased. All beam shaping in the laser scanner is done with prisms instead of cylindrical lenses to reduce aberrations. The IDT is located at the side video output port at the end of a very efficient light path. The new system is described along with some results obtained in the laboratory.
Use of UV-excitation for confocal fluorescence microscopy in a conventional beam-scanning instrument
Author(s):
Kjell Carlsson;
Karin Mossberg;
Johannes P. Helm;
Johan Philip
By making only minor modifications, we have adapted a conventional confocal scanning laser microscope for the recording of UV-excited fluorescence. An external argon ion laser provides the wavelengths 334, 351, and 364 nm for specimen illumination. In addition to substituting some optical components to obtain improved transmission and reflection properties in the UV, we have also adjusted the ray-path to compensate for the severe chromatic aberration of most conventional microscope objectives in the UV. We have also tested mirror objectives, which are inherently free from chromatic aberrations. However, since these objectives are not of the immersion-type, and furthermore have rather low numerical apertures, they are of limited value in biomedical applications. Using the modified instrument, we have recorded specimens labeled with AMCA and Fluoro-Gold. At present the instrument is capable of recording optical sections with a thickness of 1.5 micrometers when using an oil-immersion objective with a numerical aperture of 1.25.
Confocal differential phase contrast adaptor for beam scanning confocal microscope
Author(s):
Victor Chen;
Edward Hurley
We demonstrate the implementation of the confocal differential phase contrast imaging technique on a commercial beam scanning confocal microscope. Using this imaging mode on a scanning stage confocal microscope, the complex texture of reflectance images appears greatly simplified and `improved.' However, as microscope users rather than microscope developers, the confocal microscopes accessible to us usually employ beam scanning, so we felt it important to implement this contrast mode on a beam scanning instrument so that useful reflectance-mode confocal images could more easily be obtained. We implemented the optical scheme described by Wilson. Cogswell and Sheppard have further shown, on a scanning beam confocal microscope, the power of the differential phase contrast mode for the study of biological specimens by implementing `unsharp' masking to selectively enhance image detail in difficult specimens. We are able to do this type of enhancement to a limited extent using our implementation of the confocal differential phase contrast technique on a commercial beam scanning confocal microscope.
Defocus response of phase-sensitive heterodyne microscopes
Author(s):
Michael G. Somekh
This paper discusses the imaging properties of scanning heterodyne microscope systems. In particular, the response to objects out of the focal plane is considered. A differential system is described (the indirect interference differential interferometer) which has confocal properties. This system gives extended focus imaging while retaining phase information. In addition, it is capable of imaging phase objects while rejecting much of the information from closely spaced planes. The system should thus be capable of three-dimensional phase imaging.
High-resolution confocal transmission microscope, Part II: determining image position and correcting aberrations
Author(s):
John W. O'Byrne;
Carol J. Cogswell
In order to produce high resolution images of complex biological specimens in a confocal transmission microscope, two phenomena must be overcome. Firstly, non-uniform refractive index variations in the specimen cause deflection of the focused image spot in three-dimensional space, which presents a problem for the usual confocal approach of imaging onto a fixed pinhole. The second effect is optical aberrations (especially spherical aberration) which arise as a natural consequence of imaging through the thickness of a refractive specimen. This paper discusses how both effects may be monitored and overcome, while still providing confocal imaging, using a CCD array as the detector.
High-resolution confocal transmission microscope, Part I: system design
Author(s):
Carol J. Cogswell;
John W. O'Byrne
We are investigating using a transmitted light configuration in our experimental confocal microscope, in order to observe highly transparent biological preparations which typically yield little or no signal in reflection. Because refractive index and thickness variations in the specimen often deflect the focused image spot off-axis in a confocal transmission configuration, we are exploring using a CCD array detector in the transmitted beam path instead of a single-pinhole detector. Our method for detecting the brightest pixel values from the CCD diffraction image is shown to give a similar two-point response to that of a confocal fixed-pinhole transmission system. We also discuss the differences between reflection and transmission confocal modes and analyze the relative imaging properties of each.
High-sensitivity confocal imaging with bilateral scanning and CCD detectors
Author(s):
G. J. Brakenhoff;
Koen Visscher
The high quantum efficiency and high red sensitivity of a charge coupled device (CCD) make it a very suitable detector for confocal fluorescence imaging, especially for applications in biology, where short wavelength illumination may cause undesirable radiation damage. In addition, its high dynamic range matches the high contrast imaging inherent in confocal microscopy, and enables one to record small intensity differences if this dynamic range is well exploited. Applications shown include imaging with bleaching rate and non-linear fluorescence excitation as parameters. The bilateral scan technique permits effective confocal imaging using this type of device. CCDs can be used for confocal imaging both in high speed (up to real-time) applications as well as in the integration mode, where the high sensitivity application areas for this confocal technique are present.
Stage-scanned chromatically aberrant confocal microscope for 3-D surface imaging
Author(s):
Mark Anthony Browne;
Olusola Akinyemi;
Francis Crossley;
Duncan T. B. Stacey
A method for full-field surface profiling in the tandem scanning confocal microscope has been previously described. The technique utilizes chromatic aberration to produce an extended and color coded focal volume. Planes at different axial depths within this volume correspond to the foci of different wavelengths through the intentionally aberrant system. To determine the height of any point within the field, the confocal detection system must be capable of identifying the wavelength of the most intensely reflected light from that point. Hence the relative sensitivity of the detector and the spectral response of the imaging system are important parameters. We have utilized xenon and mercury arc sources and compared results with various types of objective lens. One potential limitation of this microscope is the fact that the introduction of longitudinal chromatic aberration affects the correction of the optics for plan imaging, i.e., introduces spherical aberrations, but the technique is rapid and produces results well correlated with more conventional techniques. To allow detailed study of the use of a chromatic 3-D probe for surface imaging we have designed and constructed a stage-scanned instrument which uses on-axis optics and is therefore free from spherical aberrations and can be corrected optimally. The instrument is described and results presented for single pass 3-D imaging.
Distortion correction and automatic comparison of electrophoresis images
Author(s):
Qiaofei Wang;
Yrjo A. Neuvo
Show Abstract
Image segmentation is essential to computer-based analysis of two-dimensional electrophoresis gels. In order to detect protein spots, we use nonlinear algorithms such as median and FIR-median hybrid filters to estimate the varying background. It is shown that the detection technique provides simple implementation and effective spot detection. Since nonlinear distortion in a gel image prevents a correct match, we use template matching to estimate the horizontal and vertical distortion and propose nonlinear interpolation and decimation to correct it. In practice, isolated small parts of the gel image are selected as windows (templates) which contain certain spot distribution patterns, and the sum of absolute differences is used as the measure of similarity. For convenient visual comparison, two gel images are displayed in different colors (or gray levels) and superimposed on the screen. The global patterns of the two images are matched depending on how well the distortion correction has been done.
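A minimal sketch of the two ingredients named above, median-filter background estimation for spot detection and sum-of-absolute-differences (SAD) template matching, assuming a plain median filter stands in for the FIR-median hybrid filters of the paper; window sizes, thresholds, and names are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter

def detect_spots(gel, bg_window=31, threshold=10.0):
    """Estimate the slowly varying background with a large median filter
    and flag pixels that fall well below it (dark protein spots)."""
    gel = gel.astype(float)
    background = median_filter(gel, size=bg_window)
    return (background - gel) > threshold

def sad_match(image, template):
    """Exhaustive sum-of-absolute-differences template matching;
    returns the (row, col) offset of the best match."""
    image, template = image.astype(float), template.astype(float)
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            sad = np.abs(image[r:r + th, c:c + tw] - template).sum()
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```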
Maximum-likelihood estimation of restriction-fragment mobilities from 1-D electrophoretic agarose gels
Author(s):
Heather A. Drury;
David G. Politte;
John M. Ollinger;
Philip Green;
Lewis J. Thomas Jr.
Show Abstract
We have developed a technique for finding maximum-likelihood estimates of DNA restriction-fragment mobilities from images of fluorescently stained electrophoretic gels. Gel images are acquired directly using a CCD camera. The likelihood model incorporates the Poisson nature of the photon counts and models the fluorescence intensity as the superposition of Gaussian functions (corresponding to the fragment bands) of varying magnitude and width. An expectation-maximization algorithm is used to find maximum-likelihood estimates of the number of fragments, fragment mobilities, widths of the bands, background contributions, and DNA concentration. This approach has several advantages. Closely spaced and overlapping fragments are accurately resolved into their components. No a priori knowledge of the number or positions of fragments is required. Fragment lengths estimated by the maximum-likelihood method from experimental data were compared to the known lengths of fragments generated from three different restriction digests of bacteriophage lambda DNA. Preliminary results using the maximum-likelihood method indicate residual sizing errors on the order of 1%.
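To make the band model concrete, here is a deliberately simplified sketch that fits a sum of Gaussians plus a constant background to a one-dimensional lane profile by least squares. It is a stand-in for, not a reproduction of, the Poisson-likelihood EM estimation the authors describe; all names, initial guesses, and the fixed number of bands are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def band_model(x, *params):
    """Sum of Gaussian bands plus a constant background.
    params = [bg, a1, mu1, sigma1, a2, mu2, sigma2, ...]"""
    y = np.full_like(x, params[0], dtype=float)
    for a, mu, sigma in zip(params[1::3], params[2::3], params[3::3]):
        y += a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return y

def fit_lane(profile, initial_mobilities):
    """Least-squares fit of band amplitudes, mobilities, and widths to a
    1-D lane intensity profile (simplified illustration only)."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(profile.size, dtype=float)
    p0 = [profile.min()]
    for mu in initial_mobilities:
        p0 += [profile.max() - profile.min(), float(mu), 3.0]
    popt, _ = curve_fit(band_model, x, profile, p0=p0)
    return popt  # background, then (amplitude, mobility, width) per band
```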
Statistical interpretation of texture for medical applications
Author(s):
A. Glen Houston;
Saganti B. Premkumar;
David E. Pitts;
Richard J. Babaian
Show Abstract
Findings from transrectal ultrasound examinations of the prostate are usually based on the echogenicity observed by visual interpretation of the ultrasound image. Hypoechoic areas are typically suspected to be cancerous. Previous studies have indicated that a high percentage (as high as 96%) of cancerous lesions are hypoechoic, but only a moderate percentage (about 50%) of hypoechoic regions are cancerous. We have been investigating statistical measures of texture of digitized ultrasound images of the prostate to assess whether improved accuracies can be achieved for diagnosing prostate cancer and for identifying cancerous lesions. This paper presents our approach as well as results obtained for 17 patients, eight non-cancerous and nine cancerous. The results of a small `blind test,' based on seven subjects, are also presented. Recently, a pathological mount of a prostate cross-section from a prostatectomy was selected as a test case for applying texture analysis to detect prostatic adenocarcinoma. The approach and results are described. The results of both studies are encouraging, but must be considered exploratory due to the small data sets. The results do provide support to the idea that texture information in the prostate is related to a structural change in the gland when carcinoma occurs.
Estimation of fetal gestational age from ultrasound images
Author(s):
Valiollah Salari
Show Abstract
Estimation of fetal gestational age and weight, and determination of fetal growth, from measurements of certain parameters of the fetal head, abdomen, and femur are well established in prenatal sonography. The measurements are made from two-dimensional, B-mode ultrasound images of the fetus. The most commonly measured parameters are biparietal diameter, occipital-frontal diameter, head circumference, femur diaphysis length, and abdominal circumference. Since the fetal head has an elliptical shape and the femur a linear shape, fitting an ellipse to the image of the fetal head and a line to the image of the femur are the image processing tasks discussed in this paper.
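As a minimal sketch of the two fitting tasks named above (not the author's specific algorithm), an ellipse can be fitted to detected head-edge points as the conic whose algebraic residual is smallest, and the femur can be fitted with an ordinary least-squares line. All function names and the choice of algebraic (rather than geometric) ellipse fitting are assumptions.

```python
import numpy as np

def fit_conic(x, y):
    """Algebraic least-squares fit of a conic a*x^2 + b*x*y + c*y^2 +
    d*x + e*y + f = 0 to edge points: the coefficient vector is the
    right singular vector with the smallest singular value."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]            # (a, b, c, d, e, f), defined up to scale

def fit_femur_line(x, y):
    """Straight-line fit (slope, intercept) to femur edge points."""
    return np.polyfit(x, y, 1)
```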
Hierarchical shape representation for use in anatomical object recognition
Author(s):
Glynn P. Robinson;
Alan C.F. Colchester;
Lewis D. Griffin
Show Abstract
An efficient scheme for representation of the shape of anatomical and pathological structures is required for intelligent computer interpretation of medical images. We present an approach to the extraction and representation of shape which, unlike previous shape representations, does not require complete boundary descriptions. It is based on the `Delaunay triangulation' and its dual the `Voronoi diagram.' Our method of using this dual leads to both a skeleton description and a boundary description. The basic step in the algorithm is that of deciding whether to treat any pair of neighboring points as adjacent (lying next to each other on the same boundary) or opposite (lying on opposing sides of a skeleton separating two boundaries). The duality of the skeleton and boundary descriptions produced means that the splitting of one object into two separate objects, or the merging of two objects into one, can be easily accomplished.
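A minimal sketch of the geometric machinery the abstract builds on, assuming scipy is available: compute the Delaunay triangulation of the boundary sample points and its dual Voronoi diagram, and enumerate the neighbouring point pairs. The adjacent/opposite decision rule that turns these pairs into boundary and skeleton descriptions is the paper's own contribution and is not reproduced here.

```python
import numpy as np
from scipy.spatial import Delaunay, Voronoi

def neighbour_pairs(points):
    """Return the pairs of sample points that are neighbours in the
    Delaunay triangulation; each pair would then be classified as
    'adjacent' (same boundary) or 'opposite' (across a skeleton)."""
    tri = Delaunay(points)
    pairs = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            pairs.add((a, b))
    return sorted(pairs)

points = np.random.rand(50, 2)      # stand-in for sampled boundary points
vor = Voronoi(points)               # dual diagram: candidate skeleton edges
edges = neighbour_pairs(points)
```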
Shape change analysis of confocal microscope images using variational techniques
Author(s):
Keith A. Bartels;
Alan Conrad Bovik;
Shanti J. Aggarwal;
Kenneth R. Diller
Show Abstract
A technique for modeling shape changes in microscopic images is described. The technique consists of first segmenting the image to locate the specimen and then parametrizing the specimen in the initial image with an orthogonal material coordinate system. The deformation of the material coordinate system caused by the motion of the specimen is then solved for by minimizing an energy functional, which is derived here. Results from both synthetic and real two-dimensional images are presented. The foundations of a three-dimensional implementation are given. A two-dimensional implementation is demonstrated while keeping sufficient generality for an application to three-dimensional dynamic confocal microscope images.
Unbiased sampling for 3-D measurements in serial sections
Author(s):
Mark Anthony Browne;
Hong Qiang Zhao;
Vyvyan Howard;
Glenn D. Jolleys
Show Abstract
Previous studies have shown that the `disector' is a 3-D probe which can sample particles uniformly in 3-D space, and that the mean characteristics of the disector sample are an unbiased estimate of the same mean characteristics of all the particles in the reference space, provided the disector is probed `uniform randomly' in the space. To date, however, the technique has been limited to manual operation for particle counting. As a 3-D sampling tool, its potential has not been fully realized, mainly because of the difficulties associated with its manual implementation and the subsequent measurements on the resultant disector sample. In this paper, we present an image processing algorithm which can establish a disector sample of convex particles from a set of serial sections. The algorithm enables the automatic implementation of Cavalieri's principle in the environment of an image analyzer and may therefore lead to a direct measurement of particle size distribution.
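For orientation, Cavalieri's principle estimates a volume as the section spacing times the summed profile areas measured on systematically spaced sections. A minimal sketch, with binary profile masks and units as assumed inputs:

```python
import numpy as np

def cavalieri_volume(section_masks, section_spacing_um, pixel_area_um2):
    """Cavalieri estimate of a particle's volume from a stack of binary
    section masks: spacing times the summed profile areas."""
    areas = [mask.sum() * pixel_area_um2 for mask in section_masks]
    return section_spacing_um * float(np.sum(areas))
```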
Interactive tools for morphometry in video microscopy
Author(s):
Daniel DeMenthon;
Sunia Arya;
Larry S. Davis;
Jacob Glaser;
Edmund Glaser
Show Abstract
We describe algorithms for measuring the thickness of neuron dendritic processes and the shape of neuron cell bodies. The design of these tools follows a `semi-automatic' approach. Image processing tools that would fail when applied to the whole image can produce very useful results if the user confines them by hand to small parts of the image, and if the user is given the opportunity to undo or correct the results interactively.
Biomedical image texture analysis based on high-order fractals
Author(s):
Huinian Xiao;
Al Chu;
Kerrie S. Holton;
Richard A. Robb
Show Abstract
Since the fractal dimension alone is not sufficient to characterize natural texture, we explore higher-order geometry to accurately identify texture in biomedical images. The calculation of the fractal dimension set is based on a texture description known as the Pseudo Matrix of the Fractal (PMF). In our research, variants of the PMF are tested, a set of fractal parameters is defined, and different discriminant functions are investigated. A new approach to texture classification is described. Using vectors derived from the PMF, the inner products of the normalized vectors obtained from the training groups and from the test image form the measures for classification. This method is easily implemented and produces reliable classification results. The new algorithm significantly simplifies the calculation of the fractal dimension set, and the classification of texture in medical images becomes more sensitive and specific. Preliminary results have demonstrated improved classification accuracy on one group of eight types of realistic texture data and one set of MRI brain data.
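The classification rule described above, inner products of normalized feature vectors, is cosine similarity against the training classes. A minimal sketch, assuming the PMF-derived feature vectors have already been computed; all names are illustrative.

```python
import numpy as np

def classify_by_inner_product(test_vec, class_vectors):
    """Normalize the test vector and each training (class) vector and
    assign the label with the largest inner product (cosine similarity)."""
    t = test_vec / np.linalg.norm(test_vec)
    scores = {label: float(np.dot(t, v / np.linalg.norm(v)))
              for label, v in class_vectors.items()}
    return max(scores, key=scores.get)
```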
Three-dimensional reconstruction from serial sections using triakis
Author(s):
Kendall Preston Jr.;
Richard Siderits
Show Abstract
Various methods for serial-section reconstruction from images of tissue sections have been investigated using the Apple Macintosh-based program Triakis. Triakis is a three-dimensional mathematical morphology software package that processes 3-D binary data in the FCC (face-centered cubic) tessellation by means of LUT (look-up table) manipulations. Tissue sections obtained from a breast tumor and traced using PC3-D [Jandel Scientific, Corte Madera, Calif.] were transferred via a software interface to Triakis. Triakis uses the Apple Macintosh polygon-filling routine to fill each tracing with binary ones. Next, ranking transforms in the binary FCC tessellation were used in order to demonstrate their capability of smoothly interpolating from one section to the other. The optimum transform is based on the placement of the binarized tissue sections in hexagonal planes and uses three cycles of a rank-three transform for the initial interpolation. Irregularities are then removed from the resulting solid by seven cycles of a compound transform. Thereafter, the filled polygons were eroded using Triakis and, from the annular histogram, the spatial interrelationships of malignant cells within the tumor were determined.
Detection of osteoporosis by morphological granulometries
Author(s):
Edward R. Dougherty;
Yidong Chen;
Saara Totterman;
Joseph P. Hornak
Show Abstract
Local morphological granulometries are generated by opening an image successively by an increasing family of structuring elements and, at each pixel, keeping an image area count in a fixed-size window about the pixel. After normalization there is at each pixel a probability density, called a `local pattern spectrum,' and the moments of this density are used to classify the pixel according to surrounding texture. The method having been developed for binary images, the present paper applies a gray-scale version of the methodology to detect osteoporosis in magnetic resonance (MR) images of the wrist. Maximum-likelihood classification is used to apply the local-pattern-spectra moment information. Owing to the presence of a continuous intertwined network of bone fibers called trabeculae, when imaged by an MR imaging system a normal region of bone tissue possesses a coarse, grainy texture resulting in characteristic granulometric features. Osteoporosis is a metabolic bone disease typified by a gradual loss of trabecular bone, and this loss is revealed by significant changes in the granulometric features, thereby leading to detection.
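A minimal sketch of the granulometric idea described above, assuming scipy's gray-scale opening and a global (rather than windowed, per-pixel) computation for brevity: open with structuring elements of increasing size, record the image "volume" removed at each scale, normalize to a pattern spectrum, and summarize it by its moments. Structuring-element shape, maximum size, and names are assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening

def pattern_spectrum(image, max_radius=8):
    """Global gray-scale pattern spectrum: the normalized decrease in
    image 'volume' (sum of gray values) as the opening size grows.
    The paper computes this locally in a window about each pixel; this
    global version is a simplified sketch of the same idea."""
    img = image.astype(float)
    volumes = [img.sum()]
    for r in range(1, max_radius + 1):
        size = 2 * r + 1                      # square structuring element
        volumes.append(grey_opening(img, size=(size, size)).sum())
    drops = -np.diff(volumes)                 # volume removed at each scale
    spectrum = drops / drops.sum()
    scales = np.arange(1, max_radius + 1)
    mean = (spectrum * scales).sum()          # first moment
    var = (spectrum * (scales - mean) ** 2).sum()  # second central moment
    return spectrum, mean, var
```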
VIDA: an environment for multidimensional image display and analysis
Author(s):
Eric A. Hoffman;
Daniel Gnanaprakasam;
Krishanu B. Gupta;
John D. Hoford;
Steven D. Kugelmass;
Richard S. Kulawiec
Show Abstract
Since the first dynamic volumetric studies were done in the early 1980s on the dynamic spatial reconstructor (DSR), there has been a surge of interest in volumetric and dynamic imaging using a number of tomographic techniques. Knowledge gained in handling DSR image data has readily transferred to the current use of a number of other volumetric and dynamic imaging modalities, including cine and spiral CT, MR, and PET. This in turn has led to our development of a new image display and quantitation package which we have named VIDA (volumetric image display and analysis). VIDA is written in C, runs under the UNIX operating system, and uses the XView toolkit to conform to the Open Look graphical user interface specification. A shared memory structure has been designed which allows the manipulation of multiple volumes simultaneously. VIDA utilizes a windowing environment and allows execution of multiple processes simultaneously. Available programs include oblique sectioning, volume rendering, region-of-interest analysis, interactive image segmentation/editing, algebraic image manipulation, conventional cardiac mechanics analysis, homogeneous strain analysis, and tissue blood flow evaluation. VIDA is built modularly, allowing new programs to be developed and integrated easily. An emphasis has been placed upon image quantitation for the purpose of physiological evaluation.
Real-time 3D volume rendering technique on a massively parallel supercomputer
Author(s):
Rachael Brady;
Clinton S. Potter
Show Abstract
We present a simple 3-D true-color volume rendering technique which we have implemented on a 32,768-processor Connection Machine 2 (CM2). This technique weights the data along the line of sight to produce depth-cuing information and to control the opacity of the volume. We then perform an orthographic projection in which the maximum value or the sum is taken along each ray to create the image. Rendering identical sight angles with different opacity settings for the red, green, and blue planes of a true-color display device creates a unique color image that is easy to interpret. Rotating the volume for each rendered frame enables the scientist to comprehend the 3-D spatial structure of the data. Our implementation on a 32k CM2 can rotate, render, and display a 64³ volume at a rate of five frames per second. This frame rate can be maintained while the rendering parameters and rotation angles vary. Rendering at high frame rates allows the scientist to draw substantial structural information from large data sets. Our rendering technique has been applied to data volumes from 3-D NMR microscopy, CT scans, electron microscope serial sections, confocal microscopy, and 3-D astrophysical simulations. It has also been used to visualize 3-D data sets that evolve over time.
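A minimal serial numpy sketch of the projection step described above (not the data-parallel Connection Machine implementation): weight the samples along the viewing axis with a simple exponential opacity term for depth cuing, then take the maximum or the sum along each ray. The opacity model and parameter values are assumptions.

```python
import numpy as np

def render(volume, opacity=0.02, mode="max"):
    """Orthographic projection along axis 0 with exponential depth
    weighting; 'max' gives a maximum-intensity projection, 'sum' an
    additive one."""
    depth = np.arange(volume.shape[0], dtype=float)
    weights = np.exp(-opacity * depth)[:, None, None]   # fade with depth
    weighted = volume * weights
    return weighted.max(axis=0) if mode == "max" else weighted.sum(axis=0)

# Different opacities per colour channel give a depth-cued RGB image:
# rgb = np.stack([render(vol, o) for o in (0.01, 0.03, 0.09)], axis=-1)
```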
Maximizing image variance in rendering of volumetric data sets
Author(s):
Bjoern Olstad
Show Abstract
An algorithm is presented for the rendering of volumetric data sets. The aim of the algorithm is to maximize the image variance in a volumetric rendering where a three-dimensional data set is projected onto a view plane through a perspective mapping. The pixel values in the rendered image are associated with a variable-sized attribute vector extracted along a line in the volumetric data set. Several algorithms are presented for transforming this variable-sized attribute vector into a fixed-sized attribute vector. The fixed-sized attribute vectors provide a multispectral image representation which is processed with the Karhunen-Loeve transformation in order to separate the information content into orthogonal components ordered according to the associated eigenvalues. The components of the Karhunen-Loeve transform can be displayed individually as intensity images, or three components can be selected and mapped into a coloring scheme such as the HSV color model.
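A minimal sketch of the Karhunen-Loeve step, assuming the fixed-sized attribute vectors have already been assembled into a multispectral image: mean-center the vectors, diagonalize their covariance, and project onto the eigenvectors ordered by decreasing eigenvalue. Names are illustrative.

```python
import numpy as np

def karhunen_loeve(attribute_image):
    """attribute_image: (rows, cols, n_attributes). Returns the component
    images ordered by decreasing eigenvalue, plus the eigenvalues."""
    r, c, n = attribute_image.shape
    X = attribute_image.reshape(-1, n).astype(float)
    X -= X.mean(axis=0)                       # mean-center each attribute
    eigvals, eigvecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]         # largest eigenvalue first
    components = X @ eigvecs[:, order]
    return components.reshape(r, c, n), eigvals[order]
```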
Three-dimensional reconstruction of paramecium primaurelia oral apparatus through confocal laser scanning optical microscopy
Author(s):
Francesco Beltrame;
Paola Ramoino;
Marco Fato;
Maria Umberta Delmonte Corrado;
Giampiero Marcenaro;
Tina Crippa Franceschi
Show Abstract
Studies on the complementary mating types of Paramecium primaurelia (Protozoa, Ciliates) have shown that cell lines which differ from each other in mating type expression are characterized by different cell contents, organization, and physiology. In view of these differences and of the differential rates of food vacuole formation, the oral apparatuses of the two mating-type cells are assumed to possibly differ from each other in some traits, such as, for instance, their lengths. In our work, the highly organized oral structures are analyzed by means of a laser scanning confocal optical microscope (CLSM), which provides their 3-D visualization and measurement. The extraction of the 3-D intrinsic information related to the biological objects under investigation can in turn be related to their functional state, according to the classical paradigm of identifying structure-to-function relationships. In our experiments, we acquired different data sets. These are optical slices of the biological sample under investigation, acquired in a confocal configuration through epi-illumination in reflection, and, for comparison with conventional microscopy, 2-D images acquired via a standard TV camera coupled to the microscope itself. Our CLSM system is equipped with a laser beam at 488 and 514 nm, and the data have been acquired with various optical slicing steps ranging from 0.04 to 0.25 micrometers. The volumes obtained by piling up the slices are rendered through different techniques, some implemented directly on the workstation controlling the CLSM system and some on a Sun SPARCstation 1, to which the original data were transferred via an Ethernet link. In the latter case, original software has been developed for the visualization and animation of the 3-D structures, running under UNIX and X Window, according to a ray-tracing algorithm.
Holographic imaging of the sub-micron world
Author(s):
Jeffrey H. Kulick;
Mel Ball;
Christine Gallerneault;
Julie Parker
Show Abstract
This paper describes how to produce synthetic holograms of sub-micron still-lifes using scanning electron microscope imagery. This includes a discussion of the problems encountered in acquiring and processing imagery for this study, such as creating artistically pleasing still-lifes and preparing specimens in the sub-micron domain. Also discussed is the preparation of specimens for long-term exposure to SEM scanning beams. Additional problems encountered include lighting, illumination uniformity, and camera and subject motion. The paper also includes a discussion of the photographic and holographic processing of the holographic materials.
Reconstruction of surfaces from optical sections
Author(s):
Jill Gemmill;
Kenneth R. Sloan Jr.;
Christine A. Curcio
Show Abstract
We are working on several projects which have in common the reconstruction of smooth surfaces based on data gathered using optical sections. Data collection is done using a great many techniques, most of which require manual tracing or labelling by a trained human observer. These include camera lucida drawings or EM photomicrographs traced on a digitizing tablet, and contours traced directly on a frame-grabbed image. Optical sections are acquired using both confocal microscopy and Nomarski optics. Objects of interest range from individual cells (photoreceptors, ganglion cells), through small collections of cells, to extended regions. Some of the objects are fairly simple (e.g., individual cones in human retina, blob- or sheet-like regions of interest); the information is contained in precise measurements of the geometry. Other objects (e.g., dendritic trees) are more complicated, with most of the information contained in the shape of a branching structure. All of these types of data are processed into a common representation which produces as its final product a smooth 3-D surface with arbitrary branching structure. Common tasks such as filtering hand-drawn features, automatic alignment of multiple 2-D sections to create a 3-D volume, surface reconstruction, display, and analysis are supported.
Voxel image processing and analysis based on modeling of convex hulls
Author(s):
Aarne E. Rantala;
Ari M. Vepsalainen
Show Abstract
An image decomposition that utilizes coding of generalized convex hulls is used to process and analyze voxel (3-D) images. The generalized convex hulls are computed corresponding to some appropriate set of order statistic filters. Part of the process is parallelized for a transputer network.
Viewit: a software system for multi-dimensional biomedical image processing, analysis, and visualization
Author(s):
Clinton S. Potter;
Patrick J. Moran
Show Abstract
We have developed a general purpose multidimensional image processing, analysis, and visualization software system for biomedical applications. This system, called Viewit, is an interpreter for an image processing language based upon a stack calculator paradigm where each stack element is a multidimensional array. Over two hundred primary functions are available for general purpose multidimensional image processing applications including Fourier transforms, back-projection, n-dimensional neighborhood operations, volume rendering, and animation. Viewit is available for most Unix based computers and includes specialized code for Cray vector architectures and the Connection Machine. Distributed image display interfaces are available using several different protocols, including X Windows. Viewit can be downloaded through Internet network access to NCSA. Viewit has evolved over a three year period through collaboration with researchers in the fields of 3-D nuclear magnetic resonance (NMR) microscopy, confocal microscopy, and other biomedical imaging areas. It can be used as a flexible research tool both for the development of analysis methods and for production application processing. In this article we discuss some of the features of the system and report on application research results.
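To make the stack-calculator paradigm concrete, here is a toy interpreter in that spirit: every stack element is an n-dimensional array, and each word pops its operands and pushes its result. This is purely illustrative and is not Viewit's language or API.

```python
import numpy as np

class ImageStack:
    """Toy stack-calculator interpreter: each element is an n-D array."""
    def __init__(self):
        self.stack = []
    def push(self, array):
        self.stack.append(np.asarray(array, dtype=float))
    def pop(self):
        return self.stack.pop()
    def fft_magnitude(self):
        self.push(np.abs(np.fft.fftn(self.pop())))   # n-D Fourier transform
    def add(self):
        b, a = self.pop(), self.pop()
        self.push(a + b)
    def threshold(self, level):
        self.push(self.pop() > level)

calc = ImageStack()
calc.push(np.random.rand(32, 32, 32))   # a small 3-D test volume
calc.fft_magnitude()
calc.threshold(5.0)
result = calc.pop()
```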
Confocal microscopy and 3-D distribution of dead cells in cryopreserved pancreatic islets
Author(s):
Fatima A. Merchant;
Shanti J. Aggarwal;
Kenneth R. Diller;
Keith A. Bartels;
Alan Conrad Bovik
Show Abstract
Our laboratory is involved in studies of changes in shape and size of biological specimens under osmotic stress at ambient and sub-zero temperatures. This paper describes confocal microscopy, image processing, and analysis of the 3-D distribution of cells in acridine orange/propidium iodide (AO/PI) fluorescently stained frozen-thawed islets of Langerhans. Isolated and cultured rat pancreatic islets were frozen and thawed in 2 M dimethylsulfoxide and examined under a Zeiss laser scanning confocal microscope. Two- to five-micrometer serial sections of the islets were obtained and processed to obtain high-contrast images, which were then processed in two steps. The first step consisted of the isolation of the region of interest by template masking, followed by gray-level thresholding to obtain a binary image. A three-dimensional blob coloring algorithm was applied, and the number of voxels in each region and the number of regions were counted. The volumetric distribution of the dead cells in the islets was computed by calculating the distance from the center of each blob to the centroid of the 3-D image. An increase in the number of blobs moving from the center toward the periphery of the islet was observed, indicating that the freeze damage was more concentrated at the outer edges of the islet.
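A minimal sketch of the blob-coloring and radial-distance steps described above, assuming scipy's connected-component labeling stands in for the authors' 3-D blob coloring algorithm; names are illustrative.

```python
import numpy as np
from scipy import ndimage

def blob_radial_distribution(binary_volume):
    """Label 3-D connected regions ('blob coloring'), count voxels in
    each region, and measure the distance from each blob centre to the
    centroid of the whole binary volume."""
    labels, n_blobs = ndimage.label(binary_volume)
    index = range(1, n_blobs + 1)
    sizes = ndimage.sum(binary_volume, labels, index=index)
    centres = ndimage.center_of_mass(binary_volume, labels, index=index)
    centroid = np.array(ndimage.center_of_mass(binary_volume))
    distances = [np.linalg.norm(np.array(c) - centroid) for c in centres]
    return n_blobs, sizes, distances
```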
PRIISM: an integrated system for display and analysis of 3D microscope images
Author(s):
Hans Chen;
Warren K. Clyborne;
John W. Sedat;
David A. Agard
Show Abstract
Recent advances in a number of technologies, including digital data acquisition and image processing, have caused a revolution in light and electron microscopy. One important result of this revolution has been that it is now possible to investigate directly the structure of cells and cell components in three dimensions. For some time our laboratory has been developing the hardware and software required for high resolution three-dimensional imaging, utilizing both light and electron microscopy. This development effort has been driven by our desire to understand the three-dimensional structure and organization of chromosomes within the cell nucleus. Due to the vast amount of information contained in the three-dimensional data sets (typically 10-50 Mbytes each) and the complexity of the biological samples, data visualization plays a critical role. Without powerful display and analysis tools, it would be essentially impossible to extract useful information. Toward this end, we have spent considerable effort in developing software to aid the biologist in comprehending his three-dimensional data. A key aspect of this has been the utilization of multiple display windows that readily allow the comparison of different data or simultaneous viewing of data from multiple directions (PRISM, Chen et al., 1990). With the advancement of our research project, we require a parallel evolution in the visualization methods used to examine and understand the volumetric data. Consequently, a project was begun one year ago to develop a new image visualization system which will meet the future data visualization and image analysis requirements of our research.
Aspects of confocal image analysis
Author(s):
Jagath K. Samarabandu;
Raj S. Acharya;
Ping Chin Cheng
Show Abstract
In this paper, we discuss various aspects of an image analysis system for images obtained from a laser scanning confocal light microscope, with particular emphasis on segmentation techniques. The main areas of interest of the image analysis system are: pre-processing, segmentation, data structures used to represent objects, morphometry, and visualization of three-dimensional microscopic structures. Pre-processing techniques are discussed, both for the reduction of spurious intensity variations using digital filters and for intensity correction to account for the loss of signal due to photobleaching and to the field curvature of the objective lens. The segmentation techniques discussed in this paper include data-driven approaches such as region segmentation based on intensity variations as well as model-driven approaches such as boundary refinement based on object models. The segmented image is then used to extract morphometric parameters of microscopic structures such as surface area, volume, center of gravity, eccentricity, and skeleton. Several algorithms are presented to extract this information, and methods of presenting the results are also discussed. Finally, various methods and implementation aspects of visualizing both raw data and segmented images are evaluated, together with techniques for interactive manipulation of the images.
Integrated package for interactive analysis and interpretation of nuclear medicine images
Author(s):
Augusto Ferreira da Silva;
Antonio Sousa Pereira;
M. F. Botelho;
J. J.P. de Lima
Show Abstract
This paper describes a software package based on a set of integrated tools intended to be used in nuclear medicine imaging environments. These tools, following a functionally consistent and open architecture, aim to provide an efficient and user-friendly way of handling the analysis and interpretation of nuclear medicine images in a broad range of applications. The Image, Graphics, and Colors tools are the basic building blocks. Besides basic image handling facilities, the Image tool was designed to accomplish both conventional and special-purpose processing tasks. Among these, the interactive definition of organ-shaped regions of interest, functional imaging (e.g., mean transit time images in ventilatory lung studies), and activity quantitation should be pointed out as the most intensively used facilities. The Graphics tool is used mainly to display and analyze the activity/time curves resulting from parametric studies. As intensity color coding has gained wide acceptance in nuclear medicine, it was thought convenient to implement a Colors tool intended to provide interactive intensity manipulation. The X Window graphics interface system is the basis for the implementation of this set of independent but intercommunicating tools, which are intended to run on all UNIX workstations provided with at least an 8-bit frame buffer.
Visualizing 3D microscopic specimens
Author(s):
Per-Ola Forsgren;
Lars L. Majlof
Show Abstract
The confocal microscope can be used in a vast number of fields and applications to gather more information than is possible with a regular light microscope, in particular about depth. Compared to other three-dimensional imaging devices such as CAT, NMR, and PET, the variations of the objects studied are larger and not known from macroscopic dissections. It is therefore important to have several complementary ways of displaying the gathered information. We present a system where the user can choose display techniques such as extended focus, depth coding, solid surface modeling, maximum intensity, and other techniques, some of which may be combined. A graphical user interface provides easy and direct control of all input parameters. Motion and stereo are available options. Many three-dimensional imaging devices give recordings in which one dimension has a different resolution and sampling than the other two, which requires interpolation to obtain correct geometry. We have evaluated algorithms with interpolation in object space and in projection space. There are many ways to simplify the geometrical transformations to gain performance. We present results of some ways to simplify the calculations.
Real-time digital disk applications
Author(s):
Clyde A. Sapp;
David E. Pitts;
Saganti B. Premkumar;
A. Glen Houston
Show Abstract
Software and image processing techniques have been developed which make use of a real-time digital disk to capture video frames at video rates and which allow either the transfer of these data to standard speed disk drives or to conduct analysis directly from the real-time digital disk. This capability can be extremely useful in a number of applications which have their original data in a video format. An overview of this general capability, along with three specific application examples are presented here.
Lipid and protein distribution in epithelial cells assessed with confocal microscopy
Author(s):
Kajsa Holmgren Peterson;
Michael Randen;
Richard M. Hays;
Karl-Eric Magnusson
Show Abstract
Confocal laser scanning microscopy, image processing, and volume visualization were used to characterize the 3-D distribution of lectin receptors, lipid probes, and the actin cytoskeleton in epithelial cells. Small intestine-like cells were grown on glass or filter supports and apically labelled with different fluorescent lipid and lectin probes. The restriction of the probes by the tight junctions was studied in living cells. Series of confocal x-y sections were transferred to an image processing system for analysis. The fluorescence intensity within a specified area of all x-y sections was plotted as a function of the vertical position of the sections. The curve inclination was used to describe the degree of restriction of the probes. It was found that lectins were more confined to the apical part than the lipids, which showed varying degrees of redistribution to the basolateral membrane. Volume rendering, and specifically animated sequences with varying viewpoint and opacity mapping, were used to visualize the structure of the actin cytoskeleton and the distribution of lipid and lectin probes. In toad bladder epithelial cells, actin was labelled before and after treatment with the antidiuretic hormone vasopressin. The hormone-induced redistribution of actin in the apical and lateral portions of the cells was measured on x-z scanned images, and ratios of apical-to-lateral intensity were calculated. It was found that the decrease in the ratios after vasopressin treatment was around 30%, due to a loss of actin apically. This loss is thought to facilitate the apical fusion of vesicles containing the water-channel-forming proteins, an important step in water homeostasis.
Three-dimensional image analysis as a tool for embryology
Author(s):
Andre Verweij
Show Abstract
In the study of cell fate, cell lineage, and morphogenetic transformation it is necessary to obtain 3-D data. Serial sections of glutaraldehyde-fixed and glycol methacrylate-embedded material provide high resolution data. Clonal spread during germ layer formation in the mouse embryo has been followed by labeling a progenitor epiblast cell with horseradish peroxidase and staining its descendants one or two days later, followed by histological processing. Reconstruction of a 3-D image from histological sections must provide a solution for the alignment problem. As we want to study images at different magnification levels, we have chosen a method in which the sections are aligned under the microscope. Positioning is possible through a translation and a rotation stage. The first step of the reconstruction is a coarse alignment on the basis of the moments in a binary, low-magnification image of the embedding block. Thereafter, images at higher magnification levels are aligned by optimizing a similarity measure between the images. For the analysis, a global 3-D second-order surface is first fitted to the image to obtain the orientation of the embryo. The coefficients of this fit are used to normalize the size of the different embryos. Thereafter, the image is resampled with respect to the surface to create a 2-D mapping of the embryo and to guide the segmentation of the different cell layers which make up the embryo.
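A minimal sketch of the moment-based coarse alignment step, assuming binary images of the embedding block: compute the centroid from first-order moments and the principal-axis angle from second-order central moments; aligning two sections then amounts to translating the centroids onto each other and rotating by the angle difference before the finer similarity-based optimization. Names are illustrative.

```python
import numpy as np

def moments_pose(binary):
    """Centroid and principal-axis angle of a binary section image,
    computed from its first- and second-order moments."""
    ys, xs = np.nonzero(binary)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)   # orientation of major axis
    return (cx, cy), theta

# Coarse alignment of section B to section A: translate so the centroids
# coincide, then rotate by theta_A - theta_B.
```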
Analysis of three-dimensional images in quantitative microscopy
Author(s):
Steven S. S. Poon;
Rabab K. Ward;
Branko Palcic
Show Abstract
In bright field microscopy, quantitative analysis of acquired images is customarily performed using the `best' image. Since an image with sufficient detail and clarity is required for consistent classification and discrimination of objects in the image, the image with higher magnification is commonly chosen for the analysis, which generally corresponds to a smaller depth of focus. The objectively determined `best' focus level, although optimal for the extraction of some features of the chosen objects, may not correspond to the best focal level for the extraction of other features. To obtain tighter distributions of all features, we have been searching for a method which employs analysis of images acquired at different focal planes. In this work, we analyzed images of stained cervical cells using three different approaches. In the first approach, different features were extracted from images taken at different focal planes. In the second approach, we used all the in-focus and out-of-focus information from the images simultaneously to reconstruct the focused images at various focal planes. In the third approach, the in-focus three-dimensional scene was compressed to two dimensions to simulate an image taken from a system with a very large depth of focus. The latter method reduces the data storage size and simplifies subsequent scene analysis. The advantages and disadvantages of these approaches are discussed.
New extended fan-beam reconstruction formula
Author(s):
Ge Wang;
T. H. Lin;
Ping Chin Cheng
Show Abstract
Because of its high imaging efficiency, the fan-beam reconstruction approach has been very popular in recent years. However, the fan-beam reconstruction formula had been restricted to a circular scanning locus until an extended fan-beam reconstruction formula was presented by D. B. Smith. In fact, non-circular scanning loci are not only academically interesting but also practically significant. Smith's extended fan-beam reconstruction formula requires a scanning locus satisfying some nontrivial conditions, contains the derivative of the scanning locus, and in general requires a spatially variant convolution. In addition, Smith's formula must be specifically discretized for each kind of scanning locus. In this paper, we propose a new extended fan-beam reconstruction formula, which is obtained from geometrical intuition and is then validated by a strict mathematical proof. The new fan-beam reconstruction formula is the same as the conventional equispatial one, except that the source-to-origin distance depends on the rotation angle; it thus avoids the derivative of the locus and always requires only a spatially invariant convolution. Furthermore, due to its simple form, the new formula naturally results in a unified discrete version. The new formula requires that the source-to-origin distance be differentiable almost everywhere with respect to the rotation angle and be symmetric with respect to the origin of the reconstruction coordinate system. Numerical simulation results for the new fan-beam formula are also presented.
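The abstract does not reproduce the formula itself. As a rough orientation only, the conventional equispatial fan-beam filtered backprojection has the structure below: weight the projections, convolve with a spatially invariant ramp kernel g, and backproject with a distance weight. The modification described here replaces the fixed source-to-origin distance D by an angle-dependent D(beta); the precise weight w_beta and the detector-coordinate mapping s'(x, y, beta) depend on the geometric convention and are not specified in the abstract, and the stated symmetry is presumably D(beta) = D(beta + pi).

f(x,y) \approx \int_{0}^{2\pi} \frac{1}{U(x,y,\beta)^{2}} \Big[\big(R_{\beta}(s)\,w_{\beta}(s)\big) * g(s)\Big]_{s=s'(x,y,\beta)}\, d\beta, \qquad D \rightarrow D(\beta), \quad D(\beta)=D(\beta+\pi).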
Preliminary error analysis of the general cone-beam reconstruction algorithm
Author(s):
Ge Wang;
T. H. Lin;
Ping Chin Cheng;
D. M. Shinozaki
Show Abstract
An x-ray microscope system for microtomography is under development at SUNY/Buffalo, New York. Considering the characteristics of the x-ray microscope system and the limitations of current cone-beam reconstruction algorithms, a general cone-beam image reconstruction algorithm has been developed at AMIL-ARTS. In order to study the reconstruction error characteristics of the general cone-beam algorithm, a preliminary error analysis on the algorithm is performed in this paper. The most important error source in cone-beam reconstruction is the theoretical precision limitation. Like many cone-beam reconstruction algorithms, the general cone-beam algorithm is not exact in nature. Thus, an analytic reconstruction error formula is derived which relates the error to the specimen structure and various imaging parameters. Approximately, the reconstruction error is proportional to either the distance from a voxel to the midplane or the pitch of a helix-like scanning locus, and inversely proportional to the size of the scanning locus. The reconstruction error also depends on the specimen structure. The faster the structure varies along the z direction, the larger the reconstruction error will be. Specimens are modeled as stochastic fields. Typical simulation results are then depicted and discussed.
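Schematically, the qualitative behaviour stated above can be summarized by the proportionalities below, where z_mid is the midplane coordinate, p the pitch of the helix-like locus, R the size (radius) of the scanning locus, and the partial derivative measures how quickly the specimen varies along z. This is only a reading of the abstract's statement, not the authors' derived error formula.

\varepsilon \;\propto\; \frac{|z - z_{\mathrm{mid}}|}{R}\,\Big|\frac{\partial f}{\partial z}\Big| \quad \text{(circular locus)}, \qquad \varepsilon \;\propto\; \frac{p}{R}\,\Big|\frac{\partial f}{\partial z}\Big| \quad \text{(helix-like locus)}.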
Real-time 3D digitization system for speech research
Author(s):
Tim P. Monks;
John N. Carter;
C. H. Shadle
Show Abstract
Our goal is to provide speech researchers with millimetre-accuracy measurements of lip and mouth shape as a function of time. These data are fundamentally important in helping to understand the mechanism of speech production. This paper describes a technique currently under development which is capable of making a full three-dimensional (3D) measurement every video frame. Our measurement technique consists of projecting a series of colour-coded stripes of light onto the subject and measuring a full field of 3D data from the distortion in the pattern visible when viewed from a different angle. A low sampling-rate time-sequence analysis is shown for the mouth of a speaker pronouncing the word `power'.
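The underlying triangulation can be sketched as follows, assuming a simplified geometry in which the stripes are projected at a known angle to the camera axis: a lateral stripe displacement delta_x on the reference plane corresponds to a surface height of roughly delta_x / tan(theta). Calibration and the colour-coding of stripes are omitted; names and the simplified geometry are assumptions.

```python
import numpy as np

def height_from_stripe_shift(delta_x_mm, projection_angle_deg):
    """Simple triangulation for a stripe projected at angle theta to the
    camera axis: height = lateral stripe displacement / tan(theta)."""
    theta = np.radians(projection_angle_deg)
    return np.asarray(delta_x_mm) / np.tan(theta)
```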
Measurement of intracellular calcium gradients in single living cells using optical sectioning microscopy
Author(s):
Rao V. Yelamarty;
Joseph Y. Cheung
Show Abstract
Intracellular free calcium has been recognized as a regulator of many cellular processes and plays a key role in mediating the actions of many drugs. To elucidate subcellular spatial calcium changes throughout the cell in three dimensions (3-D), optical sectioning microscopy was applied using digital-imaging-coupled fluorescence microscopy. The cell was loaded with a fluorescent indicator, fura-2, and a stack of sectional fluorescent images was acquired, digitized, and stored on-line for subsequent image analysis. Each sectional image was then deconvolved, to remove contaminating light signals from adjacent planes, using the Nearest Neighboring Deconvolution Algorithm (NNDA) and the overall imaging system's empirical point spread function (PSF), measured with a 0.25-micrometer fluorescent bead. Using this technique, we measured that the addition of growth factors caused a 2- to 3-fold increase (1) in nuclear calcium compared to cytosolic calcium in blood cells and (2) in both nuclear and cytosolic calcium in liver cells. Such spatial information, which is important in understanding subcellular processes, would not be possible to measure with other methods.
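A minimal sketch of one common nearest-neighbour deblurring formulation (in the spirit of the NNDA named above, not necessarily the authors' exact algorithm): subtract a defocus-blurred average of the two neighbouring optical sections from each section. The weighting constant, boundary handling, and names are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def nearest_neighbor_deconvolve(stack, defocus_psf, c=0.45):
    """stack: (n_planes, rows, cols); defocus_psf: 2-D PSF one plane out
    of focus; c: empirical weighting constant (value is illustrative).
    Each plane has the blurred contribution of its neighbours removed."""
    out = np.empty_like(stack, dtype=float)
    n = stack.shape[0]
    for j in range(n):
        below = stack[max(j - 1, 0)]          # edge planes reuse themselves
        above = stack[min(j + 1, n - 1)]
        blur = fftconvolve(0.5 * (below + above), defocus_psf, mode="same")
        out[j] = np.clip(stack[j] - c * blur, 0.0, None)
    return out
```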
Scanning confocal microscope for accurate dimensional measurement
Author(s):
Steven E. Mechels;
Matthew Young
Show Abstract
We have improved and evaluated a scanning confocal microscope for the precise measurement of optical fiber cladding diameter. In particular, we have studied the systematic error that results from a finite detector aperture and concluded that the diameter of that aperture must be less than one-half the radius of the Airy disk in the detector plane. We compared our measurements with a chrome-on-glass standard reference material provided by NIST-Gaithersburg and with optical fibers that were measured with a contact micrometer. We estimate the overall uncertainty of our measurements to be approximately ±50 nm.
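For reference, the Airy disk radius referred to the object plane is the standard r_Airy = 0.61 lambda / NA; in the detector plane it is scaled by the system magnification M (the magnification scaling is an assumption here, not stated in the abstract). The aperture criterion quoted above then reads:

d_{\mathrm{aperture}} \;<\; \tfrac{1}{2}\, r_{\mathrm{Airy}} \;=\; \tfrac{1}{2}\,\frac{0.61\,\lambda\,M}{\mathrm{NA}}.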