Constrained non-rigid registration for whole body image registration: method and validation
Author(s):
Xia Li;
Thomas E. Yankeelov;
Todd E. Peterson;
John C. Gore;
Benoit M. Dawant
3D intra- and inter-subject registration of image volumes is important for tasks that include measurements and
quantification of temporal/longitudinal changes, atlas-based segmentation, deriving population averages, or voxel and
tensor-based morphometry. A number of methods have been proposed to tackle this problem but few of them have
focused on the problem of registering whole body image volumes acquired either from humans or small animals. These
image volumes typically contain a large number of articulated structures, which makes registration more difficult than
the registration of head images, to which the vast majority of registration algorithms have been applied. To solve this
problem, we have previously proposed an approach that initializes an intensity-based non-rigid registration algorithm
with a point-based registration technique [1, 2]. In this paper, we introduce new constraints into our non-rigid registration
algorithm to prevent the bones from being deformed inaccurately. Results we have obtained show that the new
constrained algorithm leads to better registration results than the previous one.
Large deformation registration of contrast-enhanced images with volume-preserving constraint
Author(s):
Kinda Anna Saddi;
Christophe Chefd'hotel;
Farida Cheriet
We propose a registration method for the alignment of contrast-enhanced CT liver images. It consists of a fluid-based registration algorithm designed to incorporate a volume-preserving constraint. More specifically, our objective is to recover, in a perfusion study in the presence of contrast-enhanced structures, an accurate non-rigid transformation that preserves the incompressibility of liver tissues. This transformation is obtained by integrating a smooth divergence-free vector field derived from the gradient of a statistical similarity measure. This gradient is regularized with a fast recursive low-pass filter and is projected onto the space of divergence-free vector fields using a multigrid solver. Both 2D and 3D versions of the algorithm have been implemented. Simulations and experiments show that our approach improves the registration capture range, enforces the incompressibility constraint with a good level of accuracy, and is computationally efficient. On perfusion studies, this method prevents the shrinkage of contrast-enhanced regions typically observed with standard fluid methods.
Spline-based elastic image registration using solutions of the Navier equation
Author(s):
Stefan Wörz;
Karl Rohr
We introduce a new hybrid approach for spline-based elastic image registration using both landmarks and intensity
information. As underlying deformation model we use Gaussian elastic body splines (GEBS), which
are solutions of the Navier equation of linear elasticity under Gaussian forces. We also incorporate landmark
localization uncertainties, which are characterized by weight matrices representing anisotropic errors. To combine
landmarks and intensity information, we formulate an energy-minimizing functional that simultaneously
minimizes w.r.t. both the landmark and intensity information. The resulting functional can be efficiently minimized
using the method of Levenberg/Marquardt. Since the approach is based on a physical deformation model,
cross-effects in elastic deformations can be taken into account. We demonstrate the applicability of our scheme
based on 3D synthetic images, 2D MR images of the brain, as well as 2D gel electrophoresis images. It turns out
that the new scheme achieves more accurate results compared to a pure landmark-based approach.
A field map guided approach to non-rigid registration of brain EPI to structural MRI
Author(s):
Ali Gholipour;
Nasser Kehtarnavaz;
Richard W. Briggs;
Kaundinya S. Gopinath
Magnetic field inhomogeneity is known to cause significant spatial distortions along the phase encoding direction
in fast functional MRI Echo Planar Imaging (EPI). In this work, our previously developed distortion
correction technique via a non-rigid registration of EPI to anatomical MRI is improved by adding information from field
maps to achieve a more accurate and efficient registration. Local deformation models are used in regions of distortion
artifacts instead of using a global non-rigid transformation. The use of local deformations not only enhances the
efficiency of the non-rigid registration by reducing the number of deformation model parameters, but also provides
constraints to avoid physically incorrect deformations in undistorted regions. The accuracy and reliability of the non-rigid
registration technique is improved by using an additional high-resolution gradient echo EPI scan. In-vivo validation
is performed by comparing the similarity of the low-resolution EPI to various structural MRI scans before and after
applying the computed deformation models. Visual inspection of the images, as well as Mutual Information (MI) and
Normalized Cross Correlation (NCC) comparisons, reveal improvements within the sub-voxel range in the moderately
distorted areas but not in the signal loss regions.
Evaluation of a new optimisation algorithm for rigid registration of MRI data
Author(s):
Nicolas Wiest-Daesslé;
Pierre Yger;
Sylvain Prima;
Christian Barillot
We propose to use a recently introduced optimisation method in the context of rigid registration of medical
images. This optimisation method, introduced by Powell and called NEWUOA, is compared with two other
widely used algorithms: Powell's direction set and Nelder-Mead's downhill simplex method. This paper performs
a comparative evaluation of the performance of these algorithms in optimising different image similarity measures
for different mono- and multi-modal registrations. Images from the BrainWeb project are used as a gold standard
for validation purposes. The results show that the proposed optimisation algorithm is more robust, more
accurate and faster than the two other methods.
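For readers who want to reproduce a comparison of this kind, two of the three optimisers (Powell's direction set and Nelder-Mead's downhill simplex) are available in SciPy; NEWUOA is not. A toy sketch on a synthetic dissimilarity function (the function and its parameters are illustrative, not the paper's similarity measures):

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic rigid-registration surrogate: a smooth dissimilarity whose
# optimum sits at known "true" parameters (tx, ty, rotation).
true_params = np.array([2.0, -1.0, 0.3])

def dissimilarity(p):
    d = p - true_params
    return d @ d + 0.1 * d[0] * d[1]  # mildly coupled quadratic bowl

for method in ("Powell", "Nelder-Mead"):
    res = minimize(dissimilarity, np.zeros(3), method=method)
    print(method, res.nfev, np.round(res.x, 3))
```

Comparing `res.nfev` (number of function evaluations) across methods gives a rough speed comparison of the kind reported in the paper.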
Level set motion assisted non-rigid 3D image registration
Author(s):
Deshan Yang;
Joseph O. Deasy;
Daniel A. Low;
Issam El Naqa
Medical imaging applications of rigid and non-rigid elastic deformable image registration are undergoing wide scale
development. Our approach determines image deformation maps through a hierarchical process, from global to local
scales. Vemuri (2000) reported a registration method, based on level set evolution theory, that morphs an image along the
motion gradient until it deforms to the reference image. We have applied this level set motion method as a basis to
iteratively compute incremental motion fields, which we then approximate using a higher-level affine and
non-rigid motion model. In this way, we combine sequentially the global affine motion, local affine motion and local
non-rigid motion. Our method is fully automated, computationally efficient, and is able to detect large deformations if
used together with multi-grid approaches, potentially yielding greater registration accuracy.
Robust initialization for 2D/3D registration of knee implant models to single-plane fluoroscopy
Author(s):
J. Hermans;
P. Claes;
J. Bellemans;
D. Vandermeulen;
P. Suetens
A fully automated initialization method is proposed for the 2D/3D registration of 3D CAD models of knee
implant components to a single-plane calibrated fluoroscopy. The algorithm matches edge segments, detected
in the fluoroscopy image, with pre-computed libraries of expected 2D silhouettes of the implant components.
Each library entry represents a different combination of out-of-plane registration transformation parameters.
Library matching is performed by computing point-based 2D/2D registrations between each library entry
and each detected edge segment in the fluoroscopy image, resulting in an estimate of the in-plane registration
transformation parameters. Point correspondences for registration are established by template matching of the
bending patterns on the contours. A matching score for each individual 2D/2D registration is computed by
evaluating the transformed library entry in an edge-encoded (characteristic) image, which is derived from the
original fluoroscopy image. A matching-score accumulator is introduced to select and suggest one or more initial
pose estimates. The proposed method is robust against occlusions and partial segmentations. Validation results
are shown on simulated fluoroscopy images. In all cases a library match is found for each implant component
which is very similar to the shape information in the fluoroscopy. The feasibility of the proposed method is
demonstrated by initializing an intensity-based 2D/3D registration method with the automatically obtained
estimation of the registration transformation parameters.
Registration of 2D to 3D joint images using phase-based mutual information
Author(s):
Rupin Dalvi;
Rafeef Abugharbieh;
Mark Pickering;
Jennie Scarvell;
Paul Smith
Registration of two dimensional to three dimensional orthopaedic medical image data has important applications
particularly in the area of image guided surgery and sports medicine. Fluoroscopy to computer tomography (CT)
registration is an important case, wherein digitally reconstructed radiographs derived from the CT data are registered to
the fluoroscopy data. Traditional registration metrics such as intensity-based mutual information (MI) typically work
well but often suffer from gross misregistration errors when the image to be registered contains a partial view of the
anatomy visible in the target image. Phase-based MI provides a robust alternative similarity measure which, in addition
to possessing the general robustness and noise immunity that MI provides, also employs local phase information in the
registration process which makes it less susceptible to the aforementioned errors. In this paper, we propose using the
complex wavelet transform for computing image phase information and incorporating that into a phase-based MI
measure for image registration. Tests on a CT volume and 6 fluoroscopy images of the knee are presented. The femur
and the tibia in the CT volume were individually registered to the fluoroscopy images using intensity-based MI,
gradient-based MI and phase-based MI. Errors in the coordinates of fiducials present in the bone structures were used to
assess the accuracy of the different registration schemes. Quantitative results demonstrate that the performance of
intensity-based MI was the worst. Gradient-based MI performed slightly better, while phase-based MI results were the
best, consistently producing the lowest errors.
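The intensity-based MI baseline used in this comparison can be sketched from a joint histogram; the phase-based variant would feed local phase maps rather than raw intensities into the same estimator. The bin count and images below are illustrative:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two equally shaped images,
    estimated from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)          # marginal of a
    py = pxy.sum(axis=0, keepdims=True)          # marginal of b
    nz = pxy > 0                                 # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An image is maximally informative about itself, so `mutual_information(img, img)` should always exceed the MI between the image and a scrambled copy.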
Automatic intensity-based 3D-to-2D registration of CT volume and dual-energy digital radiography for the detection of cardiac calcification
Author(s):
Xiang Chen;
Robert Gilkeson M.D.;
Baowei Fei
We are investigating three-dimensional (3D) to two-dimensional (2D) registration methods for computed tomography
(CT) and dual-energy digital radiography (DR) for the detection of coronary artery calcification. CT is an established
tool for the diagnosis of coronary artery diseases (CADs). Dual-energy digital radiography could be a cost-effective
alternative for screening coronary artery calcification. In order to utilize CT as the "gold standard" to evaluate the
ability of DR images for the detection and localization of calcium, we developed an automatic intensity-based 3D-to-2D
registration method for 3D CT volumes and 2D DR images. To generate digitally reconstructed radiographs (DRRs) from the
CT volumes, we developed three projection methods, i.e. Gaussian-weighted projection, threshold-based projection, and
average-based projection. We tested normalized cross correlation (NCC) and normalized mutual information (NMI) as
similarity measures. We used the Downhill Simplex method as the search strategy. Simulated projection images
from CT were fused with the corresponding DR images to evaluate the localization of cardiac calcification. The
registration method was evaluated by digital phantoms, physical phantoms, and clinical data sets. The results from the
digital phantoms show that the success rate is 100% with mean errors of less than 0.8 mm and 0.2 degrees for both NCC and
NMI. The registration accuracy of the physical phantoms is 0.34 ± 0.27 mm. Color overlay and 3D visualization of the
clinical data show that the two images are registered well. This is consistent with the improvement of the NMI values
from 0.20 ± 0.03 to 0.25 ± 0.03 after registration. The automatic 3D-to-2D registration method is accurate and robust and
may provide a useful tool to evaluate the dual-energy DR images for the detection of coronary artery calcification.
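Two of the ingredients described above, the average-based projection and the NCC similarity measure, can be sketched as follows (toy data; not the authors' implementation):

```python
import numpy as np

def average_projection(volume, axis=0):
    """Simple DRR surrogate: mean intensity along the projection axis."""
    return volume.mean(axis=axis)

def ncc(a, b):
    """Normalized cross correlation between two equally shaped images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))
```

In a registration loop, the search strategy (Downhill Simplex in the paper) would repeatedly reproject the CT volume under candidate poses and maximise `ncc` (or NMI) against the DR image.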
Projection-slice theorem based 2D-3D registration
Author(s):
M. J. van der Bom;
J. P. W. Pluim;
R. Homan;
J. Timmer;
L. W. Bartels
In X-ray guided procedures, the surgeon or interventionalist is dependent on his or her knowledge of the patient's
specific anatomy and the projection images acquired during the procedure by a rotational X-ray source.
Unfortunately, these X-ray projections fail to give information on the patient's anatomy in the dimension along
the projection axis. It would be very profitable to provide the surgeon or interventionalist with a 3D insight of
the patient's anatomy that is directly linked to the X-ray images acquired during the procedure.
In this paper we present a new robust 2D-3D registration method based on the Projection-Slice Theorem. This
theorem gives us a relation between the pre-operative 3D data set and the interventional projection images.
Registration is performed by minimizing a translation invariant similarity measure that is applied to the Fourier
transforms of the images. The method was tested by performing multiple exhaustive searches on phantom data
of the Circle of Willis and on a post-mortem human skull. Validation was performed visually by comparing the
test projections to the ones that corresponded to the minimal value of the similarity measure.
The Projection-Slice Theorem Based method was shown to be very effective and robust, and provides capture
ranges up to 62 degrees. Experiments have shown that the method is capable of retrieving similar results when
translations are applied to the projection images.
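The discrete form of the Projection-Slice Theorem that underpins this method is easy to verify numerically: the 1D FFT of a parallel projection equals the central line of the image's 2D FFT.

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

projection = image.sum(axis=0)           # parallel projection along y
slice_1d = np.fft.fft(projection)        # 1D Fourier transform of it
central_line = np.fft.fft2(image)[0, :]  # ky = 0 line of the 2D transform

print(np.allclose(slice_1d, central_line))  # True
```

Because each projection fixes one line of the 3D data set's Fourier transform, comparing projections in the Fourier domain relates the interventional images directly to the pre-operative volume, as described above.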
Automatic detection of abrupt patient motion in SPECT data acquisition
Author(s):
Elisabeth Röhl;
Hanno Schumacher;
Bernd Fischer
Due to the long imaging times in SPECT, patient motion is inevitable and constitutes a serious problem for any
reconstruction algorithm. The measured inconsistent projection data leads to reconstruction artefacts which can
significantly affect the diagnostic accuracy of SPECT, if not corrected. Among the most promising attempts
for addressing this cause of artefacts, is the so-called data-driven motion correction methodology. To use this
approach it is necessary to automatically detect patient motion and to subdivide the acquired data in projection
sets accordingly. In this note, we propose three different schemes for automatically detecting patient motion. All
methods were tested on 3D academic examples with different rigid motions, motion times, and camera systems.
On the whole, every method was tested with approximately 400 to 600 test cases. One of the proposed new
methods does show promising results.
Nonrigid registration of dynamic breast F-18-FDG PET/CT images using deformable FEM model and CT image warping
Author(s):
Alphonso Magri;
Andrzej Krol;
Mehmet Unlu;
Edward Lipson;
James Mandel;
Wendy McGraw;
Wei Lee;
Ioana Coman;
David Feiglin
This study was undertaken to correct for motion artifacts in dynamic breast F-18-FDG PET/CT images, to improve
differential-image quality, and to increase accuracy of time-activity curves. Dynamic PET studies, with subjects prone,
and breast suspended freely employed a protocol with 50 frames, each 1-minute long. A 30 s long CT scan was acquired
immediately before the first PET frame. F-18-FDG was administered during the first PET time frame. Fiducial skin
markers (FSMs), each containing ~0.5 µCi of Ge-68, were taped to each breast. In our PET/PET registration method we
utilized CT data. For corresponding FSMs visible on the 1st and nth frames, the geometrical centroids of FSMs were
found and their displacement vectors were estimated and used to deform the finite element method (FEM) mesh of the
CT image (registered with 1st PET frame) to match the consecutive dynamic PET time frames. Each mesh was then
deformed to match the 1st PET frame using known FSM displacement vectors as FEM loads, and the warped PET timeframe
volume was created. All PET time frames were thus nonrigidly registered with the first frame. An analogy
between orthogonal components of the displacement field and the temperature distribution in steady-state heat transfer in
solids is used, via standard heat-conduction FEM software with "conductivity" of surface elements set arbitrarily
significantly higher than that of volume elements. Consequently, the surface reaches steady state before the volume. This
prevents the creation of concentrated FEM loads at the locations of FSMs and the resulting incorrect FEM solution. We observe
improved similarity between the 1st and nth frames. The contrast and the spatial definition of metabolically hyperactive
regions are superior in the registered 3D images compared to unregistered 3D images. Additional work is needed to
eliminate small image artifacts due to FSMs.
Indirect PET-PET image registration to monitor lung cancer tumor
Author(s):
Z. Ouksili;
C. Tauber;
J. Nalis;
H. Batatia;
O. Caselles;
F. Courbon
This paper deals with registering 3D PET images in order to monitor lung tumor evolution. Registering directly
two PET images, taken at different stages of a cancer therapy, leads to deforming the tumor of the moving image
to take the shape of the fixed image, losing the tumor evolution information. This results in aberrant medical
diagnosis. In order to solve this problem, we propose an indirect registration method that consists of processing
pairs of CT-PET images. The CT images acquired at each stage are first registered to estimate anatomical
transformations. The free-form deformation obtained is then applied to the corresponding PET images. The
reconstructed PET images can be compared and used to monitor the tumor. The volume ratio and radiation
density are calculated to assess the evolution of the tumor and evaluate the effectiveness of a therapy. The
proposed iconic registration method is based on a B-Spline deformable model and mutual information. Two
approaches have been used to validate the proposed method. First, we used phantoms to simulate the evolution
of a tumor. The second approach consisted of simulating a tumor within real images. Quantitative measures
show that our registration method keeps the volume and density distribution ratios of the tumor invariant within
PET images. This leads to improved tumor localisation and better evaluation of the efficiency of therapies.
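The transform-transfer step, applying a deformation estimated on the CT pair to the corresponding PET frame, can be sketched with a dense displacement field; `map_coordinates` below is a minimal stand-in for the paper's B-spline model:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_deformation(image, dy, dx):
    """Warp `image` by a dense displacement field (dy, dx): the output
    at (y, x) samples the input at (y + dy, x + dx), bilinearly."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx].astype(float)
    return map_coordinates(image, [yy + dy, xx + dx], order=1, mode="nearest")
```

The same `(dy, dx)` field estimated from the CT-CT registration would be re-applied, unchanged, to the PET image, which is exactly what keeps the tumor's own shape out of the similarity optimisation.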
Ischemic segment detection using the support vector domain description
Author(s):
Michael S. Hansen;
Hildur Ólafsdóttir;
Karl Sjöstrand;
Søren G. Erbou;
Mikkel B. Stegmann;
Henrik B. W. Larsson;
Rasmus Larsen
Myocardial perfusion Magnetic Resonance (MR) imaging has proven to be a powerful method to assess coronary artery diseases. The current work presents a novel approach to the analysis of registered sequences of myocardial perfusion MR images. A previously reported active appearance model (AAM) based segmentation and registration of the myocardium provided pixel-wise signal intensity curves that were analyzed using the Support Vector Domain Description (SVDD). In contrast to normal SVDD, the entire regularization path was calculated and used to calculate a generalized distance, which is used to discriminate between ischemic and healthy tissue. The results corresponded well to the ischemic segments found by assessment of the three common perfusion parameters; maximum upslope, peak and time-to-peak obtained pixel-wise.
Automatic whole heart segmentation in CT images: method and validation
Author(s):
Olivier Ecabert;
Jochen Peters;
Matthew J. Walker;
Jens von Berg;
Cristian Lorenz;
Mani Vembar;
Mark E. Olszewski;
Jürgen Weese
Deformable models have already been successfully applied to the semi-automatic segmentation of organs from
medical images. We present an approach which enables the fully automatic segmentation of the heart from multi-slice
computed tomography images. Compared to other approaches, we address the complete segmentation chain
comprising both model initialization and adaptation.
A multi-compartment mesh describing both atria, both ventricles, the myocardium around the left ventricle
and the trunks of the great vessels is adapted to an image volume. The adaptation is performed in a coarse-to-
fine manner by progressively relaxing constraints on the degrees of freedom of the allowed deformations. First,
the mesh is translated to a rough estimate of the heart's center of mass. Then, the mesh is deformed under the
action of image forces. We first constrain the space of deformations to parametric transformations, compensating
for global misalignment of the model chambers. Finally, a deformable adaptation is performed to account for
more local and subtle variations of the patient's anatomy.
The whole heart segmentation was quantitatively evaluated on 25 volume images and qualitatively validated
on 42 clinical cases. Our approach was found to work fully automatically in 90% of cases with a mean surface-
to-surface error clearly below 1.0 mm. Qualitatively, expert reviewers rated the overall segmentation quality as
4.2±0.7 on a 5-point scale.
Discriminative boundary detection for model-based heart segmentation in CT images
Author(s):
Jochen Peters;
Olivier Ecabert;
Hauke Schramm;
Jürgen Weese
Segmentation of organs in medical images can be successfully performed with deformable models. Most approaches
combine a boundary detection step with some smoothness or shape constraint. An objective function
for the model deformation is thus established from two terms: the first one attracts the surface model to the
detected boundaries while the second one keeps the surface smooth or close to expected shapes.
In this work, we assign locally varying boundary detection functions to all parts of the surface model. These
functions combine an edge detector with local image analysis in order to accept or reject possible edge candidates.
The goal is to optimize the discrimination between the wanted and misleading boundaries. We present a method
to automatically learn from a representative set of 3D training images which features are optimal at each position
of the surface model. The basic idea is to simulate the boundary detection for the given 3D images and to select
those features that minimize the distance between the detected position and the desired object boundary.
The approach is experimentally evaluated for the complex task of full-heart segmentation in CT images. A
cyclic cross-evaluation on 25 cardiac CT images shows that the optimized feature training and selection enables
robust, fully automatic heart segmentation with a mean error well below 1 mm. Comparing this approach to
simpler training schemes that use the same basic formalism to accept or reject edges shows the importance of
the discriminative optimization.
Automatic segmentation of the internal carotid arteries through the skull base
Author(s):
Rashindra Manniesing;
Wiro J. Niessen
An automatic method is presented to segment the internal carotid arteries through the difficult part of the skull
base in CT angiography. The method uses the entropy per slice to select a cross sectional plane below the skull
base. In this plane 2D circular structures are detected by the Hough transform. The center points are used to
initialize a level set which evolves with a prior shape constraint on its topology. In contrast with some related
vessel segmentation methods, our approach does not require the acquisition of an additional CT scan for bone
masking. Experiments on twenty internal carotids in ten patients show that 19 seed points are correctly identified
(95%) and 18 carotids (90%) are successfully segmented without any human interaction.
Method for extracting the aorta from 3D CT images
Author(s):
Pinyo Taeprasartsit;
William E. Higgins
Bronchoscopic biopsy of the central-chest lymph nodes is vital in the staging of lung cancer. Three-dimensional
multi-detector CT (MDCT) images provide vivid anatomical detail for planning bronchoscopy. Unfortunately,
many lymph nodes are situated close to the aorta, and an inadvertent needle biopsy could puncture the aorta,
causing serious harm. As an eventual aid for more complete planning of lymph-node biopsy, it is important to
define the aorta. This paper proposes a method for extracting the aorta from a 3D MDCT chest image. The
method has two main phases: (1) Off-Line Model Construction, which provides a set of training cases for fitting
new images, and (2) On-Line Aorta Construction, which is used for new incoming 3D MDCT images. Off-Line
Model Construction is done once using several representative human MDCT images and consists of the following
steps: construct a likelihood image, select control points of the medial axis of the aortic arch, and recompute
the control points to obtain a constant-interval medial-axis model. On-Line Aorta Construction consists of the
following operations: construct a likelihood image, perform global fitting of the precomputed models to the
current case's likelihood image to find the best fitting model, perform local fitting to adjust the medial axis
to local data variations, and employ a region recovery method to arrive at the complete constructed 3D aorta.
The region recovery method consists of two steps: model-based and region-growing steps. This region growing
method can recover regions outside the model coverage and non-circular tube structures. In our experiments,
we used three models and achieved satisfactory results on twelve of thirteen test cases.
Asymmetric bias in user guided segmentations of brain structures
Author(s):
Martin Styner;
Rachel G. Smith;
Michael M. Graves;
Matthew W. Mosconi;
Sarah Peterson;
Scott White;
Joe Blocher;
Mohammed El-Sayed;
Heather C. Hazlett
Brain morphometric studies often incorporate comparative asymmetry analyses of left and right hemispheric
brain structures. In this work we show evidence that common methods of user guided structural segmentation
exhibit strong left-right asymmetric biases and thus fundamentally influence any left-right asymmetry analyses.
We studied several structural segmentation methods with varying degree of user interaction from pure manual
outlining to nearly fully automatic procedures. The methods were applied to MR images and their corresponding
left-right mirrored images from an adult and a pediatric study. Several expert raters performed the segmentations
of all structures. The asymmetric segmentation bias is assessed by comparing the left-right volumetric asymmetry
in the original and mirrored datasets, as well as by testing each side's volumetric differences against a zero mean
using standard t-tests.
The structural segmentations of caudate, putamen, globus pallidus, amygdala and hippocampus showed a
highly significant asymmetric bias using methods with considerable manual outlining or landmark placement.
Only the lateral ventricle segmentation revealed no asymmetric bias due to the high degree of automation and a
high intensity contrast on its boundary. Our segmentation methods have accordingly been adapted so that they are applied
to only one of the hemispheres in an image and in its left-right mirrored image. Our work suggests that existing
studies of hemispheric asymmetry without similar precautions should be interpreted in a new, skeptical light.
Evidence of an asymmetric segmentation bias is novel and unknown to the imaging community. This result
seems less surprising to the visual perception community and its likely cause is differences in perception of
oppositely curved 3D structures.
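The mirrored-dataset control described above can be illustrated on a toy label volume: left-right mirroring swaps the sides but must leave each structure's volume, and hence the asymmetry index, unchanged. The index formula below is the standard (L-R)/mean form, assumed rather than quoted from the paper:

```python
import numpy as np

def lr_mirror(volume, lr_axis=0):
    """Left-right mirror a volume along the given axis."""
    return np.flip(volume, axis=lr_axis)

def asymmetry_index(vol_left, vol_right):
    """Volumetric asymmetry: (L - R) / ((L + R) / 2)."""
    return (vol_left - vol_right) / ((vol_left + vol_right) / 2.0)

# Toy label volume: 1 = "left" structure, 2 = "right" structure.
labels = np.zeros((8, 8, 8), dtype=int)
labels[0:3, 2:5, 2:5] = 1   # 27 voxels
labels[5:8, 2:5, 2:4] = 2   # 18 voxels

mirrored = lr_mirror(labels)
# Mirroring swaps sides but must not change either structure's volume.
print((mirrored == 1).sum() == (labels == 1).sum())  # True
```

Any systematic difference between the asymmetry indices measured on the original and on the mirrored data is, by construction, a segmentation bias rather than anatomy.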
Early detection of AD using cortical thickness measurements
Author(s):
M. Spjuth;
F. Gravesen;
S. F. Eskildsen;
L. R. Østergaard
Alzheimer's disease (AD) is a neurodegenerative disorder that causes cortical atrophy and impaired cognitive
functions. The diagnosis is difficult to make and is often made over a longer period of time using a combination of
neuropsychological tests, and structural and functional imaging. Due to the impact of early intervention the
challenge of distinguishing early AD from normal ageing has received increasing attention. This study uses cortical
thickness measurements to characterize the atrophy in nine mild AD patients (mean MMSE-score 23.3 (std: 2.6))
compared to five healthy middle-aged subjects. A fully automated method based on deformable models is used for
delineation of the inner and outer boundaries of the cerebral cortex from Magnetic Resonance Images. This allows
observer independent high-resolution quantification of the cortical thickness. The cortex analysis facilitates
detection of alterations throughout the entire cortical mantle. To perform inter-subject thickness comparison in
which the spatial information is retained, a feature-based registration algorithm is developed which uses local
cortical curvature, normal vector, and a distance measure. A comparison of the two study groups reveals that the
lateral side of the hemispheres shows diffusely thinner areas in the mild AD group, while the medial side in particular
shows a pronounced thinner area, which can be explained by early limbic changes in AD. For classification, principal
component analysis is applied to reduce the high number of thickness measurements (>200,000) into fewer features.
All mild AD and healthy middle-aged subjects are classified correctly (sensitivity and specificity 100%).
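The PCA step that reduces the many thickness measurements to a few features can be sketched with an SVD of the centered data matrix (the subject and feature counts below are illustrative):

```python
import numpy as np

def pca_features(X, n_components):
    """Project rows of X (subjects x measurements) onto the top
    principal components, computed via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

# 14 subjects, 1000 thickness samples (a stand-in for the >200,000).
rng = np.random.default_rng(0)
X = rng.standard_normal((14, 1000))
scores, components = pca_features(X, n_components=3)
print(scores.shape)  # (14, 3)
```

The low-dimensional `scores` are then what a classifier separates into the AD and control groups.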
Expectation maximization classification and Laplacian based thickness measurement for cerebral cortex thickness estimation
Author(s):
Mark Holden;
Rafael Moreno-Vallecillo;
Anthony Harris M.D.;
Lavier J. Gomes;
Than-Mei Diep;
Pierrick T. Bourgeat;
Sébastien Ourselin
We describe a new framework for measuring cortical thickness from MR human brain images. This involves the
integration of a method of tissue classification with one to estimate thickness in 3D. We have introduced an additional
boundary detection step to facilitate this. The classification stage utilizes the Expectation Maximisation
(EM) algorithm to classify voxels associated with the tissue types that interface with cortical grey matter (GM,
WM and CSF). This uses a Gaussian mixture and the EM algorithm to estimate the position and width
of the Gaussians that model the intensity distributions of the GM, WM and CSF tissue classes. The boundary
detection stage uses the GM, WM and CSF classifications and finds connected components, fills holes and then
applies a geodesic distance transform to determine the GM/WM interface. Finally the thickness of the cortical
grey matter is estimated by solving Laplace's equation and determining the streamlines that connect the inner
and outer boundaries. The contribution of this work is the adaptation of the classification and thickness measurement
steps, neither requiring manual initialisation, and also the validation strategy. The resultant algorithm
is fully automatic and avoids the computational expense associated with preserving the cortical surface topology.
We have devised a validation strategy which indicates that the cortical segmentation of a gold-standard brain atlas
has a similarity index of 0.91, that thickness estimation has subvoxel accuracy when evaluated on a synthetic image, and
that the precision of the combined segmentation and thickness measurement is 1.54 mm on three clinical images.
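The Laplace-equation thickness step can be illustrated with a small 2D sketch (our own minimal Jacobi solver, not the authors' implementation; in practice the stencil would be restricted to the grey-matter ribbon and thickness obtained by integrating along streamlines of the resulting potential):

```python
import numpy as np

def solve_laplace(gm, inner, outer, n_iter=2000):
    """Jacobi iteration of Laplace's equation over the grey-matter ribbon.

    gm    : boolean mask of grey-matter pixels
    inner : boolean mask of the GM/WM boundary, held at potential 0
    outer : boolean mask of the GM/CSF boundary, held at potential 1
    Streamlines of the resulting potential connect the two boundaries;
    their lengths give the cortical thickness.
    """
    u = np.zeros(gm.shape, dtype=float)
    u[outer] = 1.0
    # only pixels strictly between the two boundaries are updated
    interior = gm & ~inner & ~outer
    for _ in range(n_iter):
        # average of the four neighbours (np.roll wraps at the array edges,
        # which is benign here; a full implementation restricts the stencil)
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u[interior] = avg[interior]
    return u
```

On a flat strip the potential is linear between the boundaries, so the streamline length reduces to the strip width.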
Comparing 3D Gyrification Index and area-independent curvature-based measures in quantifying neonatal brain folding
Author(s):
Claudia E. Rodriguez-Carranza;
P. Mukherjee;
Daniel Vigneron;
James Barkovich;
Colin Studholme
Show Abstract
In this work we compare 3D Gyrification Index and our recently proposed area-independent curvature-based
surface measures [26] for the in-vivo quantification of brain surface folding in clinically acquired neonatal MR
image data. A meaningful comparison of gyrification across brains of different sizes and their subregions will only
be possible through the quantification of folding with measures that are independent of the area of the region of
analysis. This work uses a 3D implementation of the classical Gyrification Index, a 2D measure that quantifies
folding based on the ratio of the inner and outer contours of the brain and which has been used to study gyral
patterns in adults with schizophrenia, among other conditions. The new surface curvature-based measures and
the 3D Gyrification Index were calculated on twelve premature infants (age 28-37 weeks) from which surfaces of
cerebrospinal fluid/gray matter (CSF/GM) interface and gray matter/white matter (GM/WM) interface were
extracted. Experimental results show that our measures better quantify folding on the CSF/GM interface than
Gyrification Index, and perform similarly on the GM/WM interface.
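In 2D, the Gyrification Index is the ratio of the folded inner contour length to that of a smooth enclosing outer contour. A minimal sketch, using a convex hull as the outer contour (our simplification; the classical measure traces a tighter enclosing contour):

```python
import numpy as np

def perimeter(pts):
    """Length of a closed polygonal contour given as an (n, 2) array."""
    d = np.diff(np.vstack([pts, pts[:1]]), axis=0)
    return float(np.hypot(d[:, 0], d[:, 1]).sum())

def convex_hull(pts):
    """Andrew's monotone-chain convex hull, used here as the outer contour."""
    pts = sorted(map(tuple, np.asarray(pts, float)))
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2:
                cross = ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                         - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0]))
                if cross <= 0:   # drop points that make a non-left turn
                    h.pop()
                else:
                    break
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return np.array(lower[:-1] + upper[:-1])

def gyrification_index(contour):
    """GI = folded inner contour length / outer contour length (>= 1)."""
    return perimeter(contour) / perimeter(convex_hull(contour))
```

A convex contour gives GI = 1; deeper folding drives the ratio up.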
Quantifying brain development in early childhood using segmentation and registration
Author(s):
P. Aljabar;
K. K. Bhatia;
M. Murgasova;
J. V. Hajnal;
J. P. Boardman;
L. Srinivasan;
M. A. Rutherford;
L. E. Dyet;
A. D. Edwards M.D.;
D. Rueckert
Show Abstract
In this work we obtain estimates of tissue growth using longitudinal data comprising MR brain images
of 25 preterm children scanned at one and two years. The growth estimates are obtained using segmentation
and registration based methods. The segmentation approach used an expectation maximisation
(EM) method to classify tissue types and the registration approach used tensor based morphometry
(TBM) applied to a free form deformation (FFD) model. The two methods show very good agreement
indicating that the registration and segmentation approaches can be used interchangeably. The advantage
of the registration based method, however, is that it can provide more local estimates of tissue
growth. This is the first longitudinal study of growth in early childhood; previous longitudinal studies
have focused on later periods during childhood.
Automatic brain cropping and atlas slice matching using a PCNN and a generalized invariant Hough transform
Author(s):
M. M. Swathanthira Kumar;
John M. Sullivan Jr.
Show Abstract
Medical research is dominated by animal models, especially rats and mice. Within a species most laboratory subjects
exhibit little variation in brain anatomy. This uniformity of features is used to crop regions of interest based upon a
known, cropped brain atlas. For any study involving N subjects, image registration or alignment to an atlas is required to
construct a composite result. A highly resolved stack of T2 weighted MRI anatomy images of a Sprague-Dawley rat was
registered and cropped to a known segmented atlas. This registered MRI volume was used as the reference atlas. A Pulse
Coupled Neural Network (PCNN) was used to separate brain tissue from surrounding structures, such as cranium and
muscle. Each iteration of the PCNN produces binary images of increasing area as the intensity spectrum is increased. A
rapid filtering algorithm is applied that breaks narrow passages connecting larger segmented areas. A Generalized
Invariant Hough Transform is applied subsequently to each PCNN segmented area to identify which segmented
reference slice it matches. This process is repeated for multiple slices within each subject. Since we have a priori
knowledge of the image ordering and fields of view, this information provides initial estimates for subsequent
registration codes. This process of subject slice extraction, PCNN mask creation, and GIHT matching with known
atlas locations is fully automatic.
Texture classification of normal tissues in computed tomography using Gabor filters
Author(s):
Lucia Dettori;
Alia Bashir;
Julie Hasemann
Show Abstract
The research presented in this article is aimed at developing an automated imaging system for classification of normal
tissues in medical images obtained from Computed Tomography (CT) scans. Texture features based on a bank of Gabor
filters are used to classify the following tissues of interest: liver, spleen, kidney, aorta, trabecular bone, lung, muscle, IP
fat, and SQ fat. The approach consists of three steps: convolution of the regions of interest with a bank of 32 Gabor
filters (4 frequencies and 8 orientations), extraction of two Gabor texture features per filter (mean and standard
deviation), and creation of a Classification and Regression Tree-based classifier that automatically identifies the various
tissues. The data set used consists of approximately 1000 DICOM images from normal chest and abdominal CT scans
of five patients. The regions of interest were labeled by expert radiologists. Optimal trees were generated using two
techniques: 10-fold cross-validation and splitting of the data set into a training and a testing set. In both cases, perfect
classification rules were obtained provided enough images were available for training (~65%). All performance
measures (sensitivity, specificity, precision, and accuracy) for all regions of interest were at 100%. This significantly
improves on previous results that used Wavelet, Ridgelet, and Curvelet texture features, which yielded accuracy values in the
85%-98% range. The Gabor filters' ability to isolate features at different frequencies and orientations allows for a
multi-resolution analysis of texture, essential when dealing with the at times very subtle differences in the texture of
tissues in CT scans.
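The feature-extraction step — 32 Gabor filters, with the mean and standard deviation of each response — can be sketched as follows; the kernel size and bandwidth are our own illustrative choices:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=2.0, size=15):
    """Real (even) Gabor kernel: Gaussian envelope times an oriented cosine."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the orientation
    env = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    return env * np.cos(2.0 * np.pi * freq * xr)

def gabor_features(roi, freqs=(0.1, 0.2, 0.3, 0.4), n_orient=8):
    """4 frequencies x 8 orientations -> 32 responses -> 64 (mean, std) features."""
    feats = []
    for f in freqs:
        for k in range(n_orient):
            resp = convolve(roi.astype(float), gabor_kernel(f, k * np.pi / n_orient))
            feats += [resp.mean(), resp.std()]
    return np.array(feats)
```

The 64-element vector per region is what a tree-based classifier would then consume.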
Solid component evaluation in mixed ground glass nodules
Author(s):
Benjamin L. Odry;
Jing Huo;
Li Zhang;
Carol L. Novak;
David P. Naidich M.D.
Show Abstract
Multi-Slice Computed Tomography (MSCT) imaging of the lungs allows for detection and follow-up of very small
lesions, including solid and ground glass nodules (GGNs). However, relatively few computer-based methods have been
implemented for GGN segmentation. GGNs can be divided into pure GGNs and mixed GGNs, which contain both non-solid
and solid components (SC). The latter category is of particular interest since some studies indicate a higher
likelihood of malignancy in GGNs with SC. Due to their characteristically slow growth rate, GGNs are typically
monitored with multiple follow-up scans, making measurement of the volume of both solid and non-solid component
especially desirable. We have developed an automated method to estimate the SC percentage within a segmented GGN.
First, the SC algorithm uses a novel method to segment out the solid structures, while excluding any vessels passing near
or through the nodule. A gradient distribution analysis around solid structures validates the presence or absence of SC.
We tested 50 GGNs, split between three groups: 15 GGNs with SC, 15 GGNs with a solid nodule added to simulate SC,
and 20 GGNs without SC. With three defined satisfaction levels for the segmentation (A: successful, B: acceptable, C:
failed), the first group resulted in 60% with score A, 40% with score B, and 0% with score C. The second group resulted in
66.7% with score A and 33.3% with score B. In testing the first and third groups, the algorithm correctly detected SC in
all cases where it was present (sensitivity of 100%) and correctly determined absence of SC in 15 out of 20 cases
(specificity 75%).
Semantics and image content integration for pulmonary nodule interpretation in thoracic computed tomography
Author(s):
Daniela S. Raicu;
Ekarin Varutbangkul;
Janie G. Cisneros;
Jacob D. Furst;
David S. Channin;
Samuel G. Armato III
Show Abstract
Useful diagnosis of lung lesions in computed tomography (CT) depends on many factors including the ability of
radiologists to detect and correctly interpret the lesions. Computer-aided Diagnosis (CAD) systems can be used to
increase the accuracy of radiologists in this task. CAD systems are, however, trained against ground truth and the
mechanisms employed by the CAD algorithms may be distinctly different from the visual perception and analysis tasks
of the radiologist. In this paper, we present a framework for finding the mappings between human descriptions and
characteristics and computed image features. The data in our study were generated from 29 thoracic CT scans collected
by the Lung Image Database Consortium (LIDC). Every case was annotated by up to 4 radiologists by marking the
contour of nodules and assigning nine semantic terms to each identified nodule; fifty-nine image features were extracted
from each segmented nodule. Correlation analysis and stepwise multiple regression were applied to find correlations
among semantic characteristics and image features and to generate prediction models for each characteristic based on
image features. From our preliminary experimental results, we found high correlations between different semantic terms
(margin, texture), and promising mappings from image features to certain semantic terms (texture, lobulation,
spiculation, malignancy). While the framework is presented with respect to the interpretation of pulmonary nodules in
CT images, it can be easily extended to find mappings for other modalities in other anatomical structures and for other
image features.
Multiscale shape features for classification of bronchovascular anatomy in CT using AdaBoost
Author(s):
Robert A. Ochs;
Jonathan G. Goldin;
Fereidoun Abtin;
Hyun J. Kim;
Kathleen Brown;
Poonam Batra;
Donald Roback;
Michael F. McNitt-Gray;
Matthew S. Brown
Show Abstract
Lung CAD systems require the ability to classify a variety of pulmonary structures as part of the diagnostic process.
The purpose of this work was to develop a methodology for fully automated voxel-by-voxel classification of
airways, fissures, nodules, and vessels from chest CT images using a single feature set and classification method.
Twenty-nine thin section CT scans were obtained from the Lung Image Database Consortium (LIDC). Multiple
radiologists labeled voxels corresponding to the following structures: airways (trachea to 6th generation), major and
minor lobar fissures, nodules, vessels (hilum to peripheral), and normal lung parenchyma. The labeled data was
used in conjunction with a supervised machine learning approach (AdaBoost) to train a set of ensemble classifiers.
Each ensemble classifier was trained to detect voxels part of a specific structure (either airway, fissure, nodule,
vessel, or parenchyma). The feature set consisted of voxel attenuation and a small number of features based on the
eigenvalues of the Hessian matrix (used to differentiate structures by shape) computed at multiple smoothing scales
to improve the detection of both large and small structures. When each ensemble classifier was composed of 20
weak classifiers, the AUC values for the airway, fissure, nodule, vessel, and parenchyma classifiers were 0.984 ±
0.011, 0.949 ± 0.009, 0.945 ± 0.018, 0.953 ± 0.016, and 0.931 ± 0.015 respectively.
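The shape features — Hessian eigenvalues at several smoothing scales alongside the raw attenuation — can be sketched in 2D as follows (the scales and the 2D restriction are our illustrative choices):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_eigenvalues(img, sigma):
    """Per-pixel eigenvalues of the Gaussian-smoothed Hessian at scale sigma."""
    Hyy = gaussian_filter(img, sigma, order=(2, 0))  # d^2/dy^2
    Hxx = gaussian_filter(img, sigma, order=(0, 2))  # d^2/dx^2
    Hxy = gaussian_filter(img, sigma, order=(1, 1))  # mixed derivative
    tr = Hxx + Hyy
    det = Hxx * Hyy - Hxy ** 2
    disc = np.sqrt(np.maximum(tr ** 2 / 4.0 - det, 0.0))
    return tr / 2.0 + disc, tr / 2.0 - disc          # lambda1 >= lambda2

def multiscale_features(img, scales=(1.0, 2.0, 4.0)):
    """Stack attenuation plus eigenvalue pairs at each scale, per pixel."""
    img = img.astype(float)
    feats = [img]                                    # the voxel attenuation itself
    for s in scales:
        l1, l2 = hessian_eigenvalues(img, s)
        feats += [l1, l2]
    return np.stack(feats, axis=-1)
```

Blob-like bright structures yield two negative eigenvalues, tubes one strongly negative and one near zero — the contrast a boosted classifier can exploit.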
Automated arterial input function identification using iterative self organizing maps
Author(s):
Jinesh J. Jain;
John O. Glass;
Wilburn E. Reddick
Show Abstract
Quantification of cerebral blood flow and volume using dynamic-susceptibility contrast MRI relies on deconvolution
with the arterial input function (AIF) - commonly estimated from signal changes in a major artery. Manual selection of
AIF is user-dependent and typical selection in primary arteries leads to errors due to bolus delay and dispersion. An AIF
sampled from the primary as well as the peripheral arteries should minimize these errors. We present a fully automated
technique for the identification of the AIF by classifying the pixels in the imaging set into unique classes using a
Kohonen self organizing map, followed by an iterative refinement of the previous selections. Validation was performed
across 31 pediatric patients by comparison with manually identified AIF and a recently published automated AIF
technique. Our technique consistently yielded higher bolus peak heights and over 50% increase in the area under the first
pass, therefore lowering the values obtained for blood flow and volume. This technique provides a robust and accurate
estimation of the arterial input function and can easily be adapted to extract the AIF locally, regionally or globally as
suitable to the analysis.
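A minimal 1D Kohonen self-organizing map over time-intensity curves, in the spirit of the classification step (the unit count, decay schedules, and deterministic initialization are our own assumptions, not the authors' settings):

```python
import numpy as np

def train_som(curves, n_units=4, n_epochs=50, lr0=0.5, sigma0=1.0, seed=0):
    """Train a 1-D self-organizing map on pixel time-intensity curves."""
    rng = np.random.default_rng(seed)
    # deterministic spread of initial prototypes across the data
    init = np.linspace(0, len(curves) - 1, n_units).astype(int)
    w = curves[init].astype(float).copy()
    units = np.arange(n_units)
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)              # decaying learning rate
        sig = sigma0 * (1.0 - epoch / n_epochs) + 1e-2   # shrinking neighbourhood
        for idx in rng.permutation(len(curves)):
            c = curves[idx]
            bmu = int(np.argmin(((w - c) ** 2).sum(axis=1)))  # best matching unit
            h = np.exp(-((units - bmu) ** 2) / (2.0 * sig ** 2))
            w += lr * h[:, None] * (c - w)
    return w

def classify(curves, w):
    """Assign each curve to its nearest SOM unit."""
    d = ((curves[:, None, :] - w[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)
```

Units whose prototypes show an early, tall first-pass peak would then be the AIF candidates for iterative refinement.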
A probabilistic level set formulation for interactive organ segmentation
Author(s):
Daniel Cremers;
Oliver Fluck;
Mikael Rousson;
Shmuel Aharon
Show Abstract
Level set methods have become increasingly popular as a framework for image segmentation. Yet when used as
a generic segmentation tool, they suffer from an important drawback: Current formulations do not allow much
user interaction. Upon initialization, boundaries propagate to the final segmentation without the user being able
to guide or correct the segmentation. In the present work, we address this limitation by proposing a probabilistic
framework for image segmentation which integrates input intensity information and user interaction on an equal
footing. The resulting algorithm determines the most likely segmentation given the input image and the user
input. In order to allow user interaction in real time during the segmentation, the algorithm is implemented
on a graphics card and in a narrow band formulation.
A general theory of image segmentation: level set segmentation in the fuzzy connectedness framework
Author(s):
Krzysztof Chris Ciesielski;
Jayaram K. Udupa
Show Abstract
In the current vast image segmentation literature, there is a serious lack of methods that would allow theoretical
comparison of the algorithms introduced by using different mathematical methodologies. The main goal of this
article is to introduce a general theoretical framework for image segmentation that would allow such comparison.
The framework is based on the formal definitions designed to answer the following fundamental questions: What
is the relation between an idealized image and its digital representation? What properties must a segmentation
algorithm satisfy to be acknowledged as acceptable? What does it mean to say that a digital image segmentation
algorithm truly approximates an idealized segmentation model? We use the formulated framework to analyze
the front propagation (FP) level set algorithm of Malladi, Sethian, and Vemuri and compare it with the fuzzy
connectedness family of algorithms. In particular, we prove that the FP algorithm is weakly model-equivalent
with the absolute fuzzy connectedness algorithm of Udupa and Samarasekera used with gradient-based affinity.
Experimental evidence of this equivalence is also provided. The presented theoretical framework can be used to
analyze any arbitrary segmentation algorithm. This line of investigation is a subject of our forthcoming work.
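To make the comparison concrete, absolute fuzzy connectedness can be sketched as follows: the affinity between neighbours decays with their intensity difference, the strength of a path is its weakest affinity, and each pixel receives the strongest path strength from the seed (the Gaussian affinity and its width are our illustrative choices):

```python
import heapq
import numpy as np

def fuzzy_connectedness(img, seed, sigma=0.5):
    """Absolute fuzzy connectedness sketch with a gradient-based affinity.

    A Dijkstra-like propagation: pop the pixel with the strongest known
    connectedness and try to improve its 4-neighbours.
    """
    img = img.astype(float)
    conn = np.zeros(img.shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]
    while heap:
        negc, (i, j) = heapq.heappop(heap)
        c = -negc
        if c < conn[i, j]:          # stale heap entry
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1]:
                # affinity decays with the local intensity difference
                aff = np.exp(-(img[i, j] - img[ni, nj]) ** 2 / (2.0 * sigma ** 2))
                new = min(c, aff)   # path strength = weakest link
                if new > conn[ni, nj]:
                    conn[ni, nj] = new
                    heapq.heappush(heap, (-new, (ni, nj)))
    return conn
```

Thresholding the connectedness map then yields the segmented object.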
3D surface parameterization using manifold learning for medial shape representation
Author(s):
Aaron D. Ward;
Ghassan Hamarneh
Show Abstract
The choice of 3D shape representation for anatomical structures determines the effectiveness with which segmentation,
visualization, deformation, and shape statistics are performed. Medial axis-based shape representations
have attracted considerable attention due to their inherent ability to encode information about the natural
geometry of parts of the anatomy. In this paper, we propose a novel approach, based on nonlinear manifold
learning, to the parameterization of medial sheets and object surfaces based on the results of skeletonization.
For each single-sheet figure in an anatomical structure, we skeletonize the figure, and classify its surface points
according to whether they lie on the upper or lower surface, based on their relationship to the skeleton points.
We then perform nonlinear dimensionality reduction on the skeleton, upper, and lower surface points, to find
the intrinsic 2D coordinate system of each. We then center a planar mesh over each of the low-dimensional
representations of the points, and map the meshes back to 3D using the mappings obtained by manifold learning.
Correspondence between mesh vertices, established in their intrinsic 2D coordinate spaces, is used in order
to compute the thickness vectors emanating from the medial sheet. We show results of our algorithm on real
brain and musculoskeletal structures extracted from MRI, as well as an artificial multi-sheet example. The main
advantages to this method are its relative simplicity and noniterative nature, and its ability to correctly compute
nonintersecting thickness vectors for a medial sheet regardless of both the amount of coincident bending and
thickness in the object, and of the incidence of local concavities and convexities in the object's surface.
A learning-based automatic clinical organ segmentation in medical images
Author(s):
Xiaoqing Liu;
Jagath Samarabandu;
Shuo Li;
Ian Ross;
Greg Garvin
Show Abstract
Image segmentation plays an important role in medical image analysis and visualization since it greatly enhances
clinical diagnosis. Although many algorithms have been proposed, it is challenging to achieve automatic
clinical organ segmentation, which requires both speed and robustness. Automatically segmenting cardiac Magnetic
Resonance Imaging (MRI) images is extremely challenging due to cardiac motion artifacts and the characteristics
of MRI. Moreover, many of the existing algorithms are specific to a particular view of cardiac MRI images.
We propose a generic, view-independent, learning-based method to automatically segment cardiac MRI images,
which uses machine learning techniques and geometric shape information. A main feature of our contribution
is that the proposed algorithm can use a training set containing a mix of various views and is able to
successfully segment any given view. The proposed method consists of four stages. First, we partition the input
image into a number of image regions based on their intensity characteristics. Then, we calculate the pre-selected
feature descriptions for each generated region and use a trained classifier to learn the conditional probabilities
for every pixel based on the calculated features. In this paper, we use the Support Vector Machine (SVM) to
train our classifier. The learned conditional probabilities of every pixel are then fed into an energy function
to segment the input image. We optimize our energy function with graph cuts. Finally, domain knowledge is
applied to verify the segmentation. Experimental results show that this method is very efficient and robust with
respect to image views, slices and motion phases. The method also has the potential to be imaging modality
independent as the proposed algorithm is not specific to a particular imaging modality.
HWT - hybrid watershed transform: optimal combination of hierarchical interactive and automated image segmentation
Author(s):
Horst K. Hahn;
Markus T. Wenzel;
Johann Drexl;
Susanne Zentis;
Heinz-Otto Peitgen
Show Abstract
In quantitative medical imaging and therapy planning, the optimal combination of automated and interactively defined information
is crucial for image segmentation methods to be both efficient and effective. We propose to combine an efficient
hierarchical region merging scheme that collects per-region statistics across hierarchy levels with a trainable classification
engine that facilitates automated region labeling based on an arbitrary number of reference segmentations. When applying
the classification engine, we propose to use a corridor of non-classified regions, resulting in a sparse labeling with an
extremely low false-classification rate, and to attribute labels to the remaining basins through successive merging with
ready-labeled basins. The proposed hierarchical region merging scheme also permits the efficient inclusion of interactively
defined labels. We denominate this general approach as the Hybrid Hierarchical Interactive Image Segmentation Scheme (HIS2).
More specifically, we present an extension of the Interactive Watershed Transform, which we combine with a trainable two-class
Support Vector Machine based on Gaussian radial basis functions. Finally, we present a novel asymmetric marker
scheme, which provides a powerful means of regionally correcting remaining inaccuracies while preserving full detail of
the automatic labeling procedure. We denominate the complete algorithm as Hybrid Watershed Transform (HWT), which
we apply to one challenging segmentation problem in clinical imaging, namely efficient bone removal in large computed
tomography angiographic data sets. Efficiency and accuracy of the proposed methodology are evaluated on multi-slice images
from nine different sites. As a result, its ability to rapidly and automatically generate robust and precise segmentation
results, in combination with a versatile manual correction mechanism, was demonstrated without requiring specific anatomical
or geometrical models.
Multi-scale shape prior using wavelet packet representation and independent component analysis
Author(s):
Rami Zewail;
Ahmed Elsafi;
Nelson Durdle
Show Abstract
Statistical shape priors try to faithfully represent the full range of biological variations in anatomical structures. These
priors are now widely used to restrict shapes, obtained in applications like segmentation and registration, to a subspace
of plausible shapes. Principal component analysis (PCA) is commonly used to represent modes of shape variation in a
training set. In an attempt to face some of the limitations in the PCA-based shape model, this paper describes a new
multi-scale shape prior using independent component analysis (ICA) and adaptive wavelet decomposition. Within a
best basis selection framework, the proposed method benefits from the multi-scale nature of wavelet packets, and the
capability of ICA to capture higher order statistics in wavelet subspaces. The proposed approach is evaluated using
contours from digital x-ray images of five vertebrae of human spine. We demonstrate the ability of the proposed shape
prior to capture both local and global shape variations, even with a limited number of training samples. Our results also
show the performance gains of the ICA-based analysis for the wavelet sub-spaces, as compared to PCA-based analysis
approach.
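For contrast, the PCA shape model that such approaches build upon can be sketched as follows (the ±3 standard deviation clamp is a common convention, not taken from this paper):

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.95):
    """PCA shape prior: shapes is (n_samples, 2*n_points) of aligned
    landmark coordinates. Returns the mean shape, the principal modes,
    and the standard deviation along each mode."""
    mean = shapes.mean(axis=0)
    X = shapes - mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / (len(shapes) - 1)
    # keep enough modes to explain var_kept of the total variance
    k = int(np.searchsorted(np.cumsum(var) / var.sum(), var_kept)) + 1
    return mean, Vt[:k], np.sqrt(var[:k])

def constrain(shape, mean, modes, sd, n_sd=3.0):
    """Project a shape onto the model subspace and clamp each mode to +/- n_sd."""
    b = modes @ (shape - mean)
    b = np.clip(b, -n_sd * sd, n_sd * sd)
    return mean + modes.T @ b
```

The ICA/wavelet prior of the paper replaces the global PCA modes with multi-scale, higher-order statistics; this baseline is what it is measured against.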
Automatic detection of diseased regions in knee cartilage
Author(s):
Arish A. Qazi;
Erik B. Dam;
Ole F. Olsen;
Mads Nielsen;
Claus Christiansen M.D.
Show Abstract
Osteoarthritis (OA) is a degenerative joint disease characterized by articular cartilage degradation. A central problem in
clinical trials is quantification of progression and early detection of the disease. The accepted standard for evaluating OA
progression is to measure the joint space width from radiographs; however, the cartilage itself is not visible there. Recently,
cartilage volume and thickness measures from MRI have become popular, but these measures do not account for the
biochemical changes occurring in the cartilage before any cartilage loss has taken place and are therefore not optimal for early
detection of OA. As a first step, we quantify cartilage homogeneity (computed as the entropy of the MR intensities) from
114 automatically segmented medial compartments of tibial cartilage sheets from Turbo 3D T1 sequences, from subjects
with no, mild or severe OA symptoms. We show that homogeneity is a more sensitive technique than volume
quantification for detecting early OA and for separating healthy individuals from diseased ones. During OA, certain areas of
the cartilage are affected more than others, and these are believed to be the load-bearing regions located at the center of the
cartilage. Based on the homogeneity framework, we present an automatic technique that identifies the region of the
cartilage that contributes most to the homogeneity discrimination. These regions, however, lie more towards the
non-central parts of the cartilage. Our observation will provide valuable clues for OA research and may lead to improved
treatment efficacy.
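The homogeneity measure itself is compact to state; a sketch (the bin count is our own choice):

```python
import numpy as np

def cartilage_homogeneity(intensities, n_bins=64):
    """Entropy of the intensity histogram of a segmented cartilage region.
    Lower entropy corresponds to a more homogeneous cartilage signal."""
    hist, _ = np.histogram(np.asarray(intensities).ravel(), bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins (0 * log 0 -> 0)
    return float(-(p * np.log2(p)).sum())
```

A perfectly uniform region scores 0 bits; a region whose intensities spread evenly across all bins approaches log2(n_bins).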
Shape based segmentation of MRIs of the bones in the knee using phase and intensity information
Author(s):
Jurgen Fripp;
Pierrick Bourgeat;
Stuart Crozier;
Sébastien Ourselin
Show Abstract
The segmentation of the bones from MR images is useful for performing subsequent segmentation and quantitative
measurements of cartilage tissue. In this paper, we present a shape based segmentation scheme for the bones
that uses texture features derived from the phase and intensity information in the complex MR image. The
phase can provide additional information about the tissue interfaces, but due to the phase unwrapping problem,
this information is usually discarded. By using a Gabor filter bank on the complex MR image, texture features
(including phase) can be extracted without requiring phase unwrapping. These texture features are then analyzed
using a support vector machine classifier to obtain probability tissue matches. The segmentation of the bone is
fully automatic and performed using a 3D active shape model based approach driven using gradient and texture
information. The 3D active shape model is automatically initialized using a robust affine registration. The
approach is validated using a database of 18 FLASH MR images that are manually segmented, with an average
segmentation overlap (Dice similarity coefficient) of 0.92 compared to 0.9 obtained using the classifier only.
Probabilistic retinal vessel segmentation
Author(s):
Chang-Hua Wu;
Gady Agam
Show Abstract
Optic fundus assessment is widely used for diagnosing vascular and non-vascular pathology. Inspection of the
retinal vasculature may reveal hypertension, diabetes, arteriosclerosis, cardiovascular disease and stroke. Due to
various imaging conditions retinal images may be degraded. Consequently, the enhancement of such images and
vessels in them is an important task with direct clinical applications. We propose a novel technique for vessel
enhancement in retinal images that is capable of enhancing vessel junctions in addition to linear vessel segments.
This is an extension of vessel filters we have previously developed for vessel enhancement in thoracic CT scans.
The proposed approach is based on probabilistic models which can discern vessels and junctions. Evaluation
shows the proposed filter is better than several known techniques and is comparable to the state of the art when
evaluated on a standard dataset. A ridge-based vessel tracking process is applied on the enhanced image to
demonstrate the effectiveness of the enhancement filter.
Automated segmentation of intraretinal layers from macular optical coherence tomography images
Author(s):
Mona Haeker;
Milan Sonka;
Randy Kardon M.D.;
Vinay A. Shah;
Xiaodong Wu;
Michael D. Abràmoff
Show Abstract
Commercially-available optical coherence tomography (OCT) systems (e.g., Stratus OCT-3) only segment and
provide thickness measurements for the total retina on scans of the macula. Since each intraretinal layer may be
affected differently by disease, it is desirable to quantify the properties of each layer separately. Thus, we have
developed an automated segmentation approach for the separation of the retina on (anisotropic) 3-D macular
OCT scans into five layers. Each macular series consisted of six linear radial scans centered at the fovea. Repeated
series (up to six, when available) were acquired for each eye and were first registered and averaged together,
resulting in a composite image for each angular location. The six surfaces defining the five layers were then found
on each 3-D composite image series by transforming the segmentation task into that of finding a minimum-cost
closed set in a geometric graph constructed from edge/regional information and a priori-determined surface
smoothness and interaction constraints. The method was applied to the macular OCT scans of 12 patients with
unilateral anterior ischemic optic neuropathy (corresponding to 24 3-D composite image series). The boundaries
were independently defined by two human experts on one raw scan of each eye. Using the average of the experts'
tracings as a reference standard resulted in an overall mean unsigned border positioning error of 6.7 ± 4.0
μm, with five of the six surfaces showing significantly lower mean errors than those computed between the two
observers (p < 0.05, pixel size of 50 × 2 μm).
Segmentation of the optic nerve head combining pixel classification and graph search
Author(s):
Michael B. Merickel Jr.;
Michael D. Abràmoff;
Milan Sonka;
Xiaodong Wu
Show Abstract
Early detection of glaucoma is essential to minimizing the risk of visual loss. It has been shown that a good
predictor of glaucoma is the cup-to-disc ratio of the optic nerve head. This paper presents an automated method
to segment the optic disc. Our approach utilizes pixel feature selection to train a feature set to recognize the
region of the disc. Soft pixel classification is used to generate a probability map of the disc. A new cost function
is developed for maximizing the probability of the region within the disc. The segmentation of the image is done
using a novel graph search algorithm capable of detecting the border maximizing the probability of the disc.
The combination of graph search and pixel classification enables us to incorporate large feature sets into the cost
function design, which is critical for segmentation of the optic disc. Our results are validated against a reference
standard of 82 datasets and compared to the manual segmentations of 3 glaucoma fellows.
Automatic delineation of the optic nerves and chiasm on CT images
Author(s):
Michael Gensheimer;
Anthony Cmelak;
Kenneth Niermann;
Benoit M. Dawant
Show Abstract
Delineating critical structures for radiotherapy of the brain is required for advanced radiotherapy technologies to
determine if the dose from the proposed treatment will impair the functionality of the structures. Employing an
automatic segmentation computer module in the radiation oncology treatment planning process has the potential to
significantly increase the efficiency, cost-effectiveness, and, ultimately, clinical outcome of patients undergoing
radiation therapy. In earlier work, we have shown that atlas-based segmentation of large structures such as the brainstem
or the cerebellum was an achievable objective. We have also shown that smaller structures such as the optic nerves or
optic chiasm were more difficult to segment automatically. In this work, we present an extension to this approach in
which atlas-based segmentation is followed by a series of additional steps. We show that this new approach substantially
improves our previous results. We also show that we can segment CT images alone when we previously relied on a
combination of MR and CT images.
A new general tumor segmentation framework based on radial basis function energy minimization with a validation study on LIDC lung nodules
Author(s):
Roland Opfer;
Rafael Wiemker
Show Abstract
In this paper we describe a new general tumor segmentation approach, which combines energy minimization
methods with radial basis function surface modelling techniques. A tumor is mathematically described by a
superposition of radial basis functions. In order to find the optimal segmentation we minimize a certain energy
functional. Similar to snake segmentation our energy functional is a weighted sum of an internal and an external
energy. The internal energy is the bending energy of the surface and can be computed from the radial basis
function coefficients directly. Unlike snake segmentation, we do not have to derive and solve Euler-Lagrange
equations. We can solve the minimization problem by standard optimization techniques. Our approach is not
restricted to one single imaging modality and it can be applied to 2D, 3D or even 4D data. In addition, our
segmentation method makes several simple and intuitive user interactions possible. For instance, we can enforce
interpolation of certain user-defined points. We validate our new method with lung nodules on CT data. A validation on clinical data is carried out with the 91 publicly available CT lung images provided by the Lung Image Database Consortium (LIDC). The LIDC also provides ground truth lists from 4 different radiologists. We
discuss the inter-observer variability of the 4 radiologists and compare their segmentations with the segmentation
results of the presented algorithm.
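The core of the minimization described above can be sketched in a 1D toy setting (an illustrative construction, not the authors' implementation): the contour is a superposition of Gaussian radial basis functions, the internal (bending) energy is a quadratic form in the coefficients, and the weighted energy is minimized directly by standard linear algebra, with no Euler-Lagrange equations.

```python
import numpy as np

def gaussian_rbf(r, eps=1.0):
    return np.exp(-(eps * r) ** 2)

def fit_rbf_curve(x, y, centers, lam=0.1, eps=1.0):
    """Minimize ||Phi c - y||^2 + lam * c^T K c, where Phi holds the RBF
    values and K is a discrete bending (second-difference) penalty."""
    Phi = gaussian_rbf(np.abs(x[:, None] - centers[None, :]), eps)
    D = np.diff(np.eye(len(centers)), n=2, axis=0)   # second differences
    K = D.T @ D
    c = np.linalg.solve(Phi.T @ Phi + lam * K, Phi.T @ y)
    return c, Phi @ c
```

Increasing `lam` trades data fidelity (external energy) for smoothness (internal energy), exactly as the weighted sum in the abstract suggests.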
Tissue tracking: applications for brain MRI classification
Author(s):
John Melonakos;
Yi Gao;
Allen Tannenbaum
Show Abstract
Bayesian classification methods have been extensively used in a variety of image processing applications, including
medical image analysis. The basic procedure is to combine data-driven knowledge in the likelihood terms with
clinical knowledge in the prior terms to classify an image into a pre-determined number of classes. In many
applications, it is difficult to construct meaningful priors and, hence, homogeneous priors are assumed. In this
paper, we show how expectation-maximization weights and neighboring posterior probabilities may be combined
to make intuitive use of the Bayesian priors. Drawing upon insights from computer vision tracking algorithms,
we cast the problem in a tissue tracking framework. We show results of our algorithm on the classification of
gray and white matter, along with the surrounding cerebrospinal fluid, in brain MRI scans. We show results of our
algorithm on 20 brain MRI datasets along with validation against expert manual segmentations.
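The prior-from-neighboring-posteriors idea can be illustrated with a minimal 1D EM sketch (a toy construction under simplified assumptions, not the authors' tracking formulation): Gaussian class likelihoods are combined with a spatial prior that mixes a uniform term with the averaged posteriors of each voxel's neighbors.

```python
import numpy as np

def neighborhood_em(image, n_classes=2, n_iter=25, beta=0.5):
    """EM tissue classification in which each voxel's class prior mixes a
    uniform term with the averaged posteriors of its spatial neighbors."""
    flat = image.ravel().astype(float)
    # Initialize class means from intensity quantiles, shared variance
    mu = np.quantile(flat, np.linspace(0.1, 0.9, n_classes))
    var = np.full(n_classes, flat.var() / n_classes)
    prior = np.full((flat.size, n_classes), 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: Gaussian likelihoods weighted by the spatial prior
        lik = np.exp(-0.5 * (flat[:, None] - mu) ** 2 / var) / np.sqrt(var)
        post = lik * prior + 1e-300
        post /= post.sum(axis=1, keepdims=True)
        # M-step: update class means and variances
        w = post.sum(axis=0)
        mu = (post * flat[:, None]).sum(axis=0) / w
        var = (post * (flat[:, None] - mu) ** 2).sum(axis=0) / w + 1e-6
        # Neighborhood prior (1D neighbors here; 3D would use a local window)
        p = post.reshape(image.shape + (n_classes,))
        nb = 0.5 * (np.roll(p, 1, axis=0) + np.roll(p, -1, axis=0))
        prior = (1.0 - beta) / n_classes + beta * nb.reshape(-1, n_classes)
    return post.argmax(axis=1).reshape(image.shape)
```

The `beta` parameter (an illustrative knob) controls how strongly neighboring posteriors act as the prior, replacing the homogeneous prior of plain Bayesian classification.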
Vertebral fracture classification
Author(s):
Marleen de Bruijne;
Paola C. Pettersen;
László B. Tankó;
Mads Nielsen
Show Abstract
A novel method for classification and quantification of vertebral fractures from X-ray images is presented. Using
pairwise conditional shape models trained on a set of healthy spines, the most likely unfractured shape is
estimated for each of the vertebrae in the image. The difference between the true shape and the reconstructed
normal shape is an indicator for the shape abnormality. A statistical classification scheme with the two shapes
as features is applied to detect, classify, and grade various types of deformities.
In contrast with the current (semi-)quantitative grading strategies this method takes the full shape into
account, it uses a patient-specific reference by combining population-based information on biological variation
in vertebra shape and vertebra interrelations, and it provides a continuous measure of deformity.
Good agreement with manual classification and grading is demonstrated on 204 lateral spine radiographs
with in total 89 fractures.
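The "most likely unfractured shape" step is, in essence, the conditional mean of a joint Gaussian fitted to healthy training shapes: given the neighboring vertebrae x2, the expected shape of the vertebra of interest is mu1 + S12 S22^{-1} (x2 - mu2). A small sketch under that Gaussian assumption (variable names are illustrative):

```python
import numpy as np

def fit_conditional_shape_model(X1, X2):
    """Fit a joint Gaussian over (target shape, neighbor shapes); rows are
    healthy training subjects, columns are shape coordinates."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    Xc = np.hstack([X1 - mu1, X2 - mu2])
    S = Xc.T @ Xc / (len(Xc) - 1)
    d1 = X1.shape[1]
    W = S[:d1, d1:] @ np.linalg.inv(S[d1:, d1:])   # S12 S22^{-1}
    return mu1, mu2, W

def predict_normal_shape(x2, mu1, mu2, W):
    """Most likely (conditional-mean) unfractured shape given neighbors x2."""
    return mu1 + W @ (x2 - mu2)
```

The difference between the observed shape and this prediction then serves as the abnormality feature used for classification and grading.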
Discrimination analysis using multi-object statistics of shape and pose
Author(s):
Kevin Gorczowski;
Martin Styner;
Ja Yeon Jeong;
J. S. Marron;
Joseph Piven;
Heather Cody Hazlett;
Stephen M. Pizer;
Guido Gerig
Show Abstract
A main focus of statistical shape analysis is the description of variability of a population of geometric objects. In this paper,
we present work towards modeling the shape and pose variability of sets of multiple objects. Principal geodesic analysis
(PGA) is the extension of the standard technique of principal component analysis (PCA) into the nonlinear Riemannian
symmetric space of pose and our medial m-rep shape description, a space in which use of PCA would be incorrect.
In this paper, we discuss the decoupling of pose and shape in multi-object sets using different normalization settings.
Further, we introduce methods of describing the statistics of object pose and object shape, both separately and simultaneously
using a novel extension of PGA. We demonstrate our methods in an application to a longitudinal pediatric autism
study with object sets of 10 subcortical structures in a population of 47 subjects. The results show that global scale accounts
for most of the major mode of variation across time. Furthermore, the PGA components and the corresponding distribution
of different subject groups vary significantly depending on the choice of normalization, which illustrates the importance
of global and local pose alignment in multi-object shape analysis. Finally, we present results of using distance weighted
discrimination analysis (DWD) in an attempt to use pose and shape features to separate subjects according to diagnosis, as
well as visualize discriminating differences.
Active index model: a unique approach for regional quantitative morphometry in longitudinal and cross-sectional studies
Author(s):
P. K. Saha;
H. Zhang;
M. Sonka;
G. E. Christensen;
C. S. Rajapakse
Show Abstract
Recent advancements in digital medical imaging have opened avenues for quantitative analyses of different volumetric
and morphometric indices in response to a disease or a treatment. However, a major challenge in performing such an analysis is the lack of a technology for building a mean anatomic space (MAS) that allows the data of a given subject to be mapped onto it. Such a mapping leads to a tool for point-by-point regional analysis and comparison of quantitative indices for data coming from a longitudinal or cross-sectional study. Toward this goal, we develop a new computational technique, called the Active Index Model (AIM), which is a unique tool for solving the stated problem. AIM consists of three building blocks: (1) development of the MAS for a particular anatomic site, (2) mapping of subject-specific data onto the MAS, and (3) regional statistical analysis of data from different populations to assess the regional response to disease or treatment progression. The AIM presented here is built in a training phase from two known populations (e.g., normal and diseased) and is then immediately ready for diagnostic purposes in a subject whose clinical status is unknown. AIM will be useful for both cross-sectional and longitudinal studies and for early diagnosis. This technique will be a vital
tool for understanding regional response of a disease or treatment at various stages of its progression. This method has
been applied for analyzing regional trabecular bone structural distribution in rabbit femur via micro-CT imaging and to
localize the affected myocardial regions from cardiac MR data.
Craniofacial statistical deformation models of wild-type mice and Crouzon mice
Author(s):
Hildur Ólafsdóttir;
Tron A. Darvann;
Bjarne K. Ersbøll;
Nuno V. Hermann;
Estanislao Oubel;
Rasmus Larsen;
Alejandro F. Frangi;
Per Larsen;
Chad A. Perlyn;
Gillian M. Morriss-Kay;
Sven Kreiborg
Show Abstract
Crouzon syndrome is characterised by premature fusion of cranial sutures and synchondroses leading to craniofacial
growth disturbances. The gene causing the syndrome was discovered approximately a decade ago and
recently the first mouse model of the syndrome was generated. In this study, a set of Micro CT scans of the heads
of wild-type (normal) mice and Crouzon mice were investigated. Statistical deformation models were built to
assess the anatomical differences between the groups, as well as the within-group anatomical variation. Following
the approach by Rueckert et al. we built an atlas using B-spline-based nonrigid registration and subsequently,
the atlas was nonrigidly registered to the cases being modelled. The parameters of these registrations were then
used as input to a PCA. Using different sets of registration parameters, different models were constructed to
describe (i) the difference between the two groups in anatomical variation and (ii) the within-group variation.
These models confirmed many known traits in the wild-type and Crouzon mouse craniofacial anatomy. However,
they also showed some new traits.
Segmentation of cardiac MR and CT image sequences using model-based registration of a 4D statistical model
Author(s):
Dimitrios Perperidis;
Raad Mohiaddin;
Philip Edwards;
Daniel Rueckert
Show Abstract
In this paper we present a novel approach to the problem of fitting a 4D statistical shape model of the myocardium to
cardiac MR and CT image sequences. The 4D statistical model has been constructed from 25 cardiac MR image sequences
from normal volunteers. The model is controlled by two sets of shape parameters. The first set of shape parameters
describes shape changes due to inter-subject variability while the second set of shape parameters describes shape changes
due to intra-subject variability, i.e. the cardiac contraction and relaxation. A novel fitting approach is used to estimate the
optimal parameters of the cardiac shape model. The fitting of the model is performed simultaneously for the entire image
sequences. The method has been tested on 5 cardiac MR image sequences. Furthermore, we have also tested the method
using a cardiac CT image sequence. The results demonstrate that the method is not only able to fit the 4D model to cardiac
MR image sequences, but also to cardiac image sequences from a different modality (CT).
Oncological image analysis: medical and molecular image analysis
Author(s):
Michael Brady
Show Abstract
This paper summarises the work we have been doing on joint projects with GE
Healthcare on colorectal and liver cancer, and with Siemens Molecular Imaging on
dynamic PET. First, we recall the salient facts about cancer and oncological image
analysis. Then we introduce some of the work that we have done on analysing clinical
MRI images of colorectal and liver cancer, specifically the detection of lymph nodes and
segmentation of the circumferential resection margin. In the second part of the paper, we
shift attention to the complementary aspect of molecular image analysis, illustrating our
approach with some recent work on: tumour acidosis, tumour hypoxia, and multiply drug
resistant tumours.
Evaluation of four mammographic density measures on HRT data
Author(s):
Jakob Raundahl;
Marco Loog;
Paola Pettersen;
Mads Nielsen
Show Abstract
Numerous studies have investigated the relation between mammographic density and breast cancer risk. These
studies indicate that women with dense breasts have a four to six fold risk increase. There is currently no gold
standard for automatic assessment of mammographic density.
In previous work two different automated methods for measuring the effect of HRT w.r.t. changes in breast
density have been presented. One is a percentage density based on an adaptive global threshold, and the other is
an intensity invariant measure, which provides structural information orthogonal to intensity-based methods. In
this article we investigate the ability to detect density changes induced by HRT for these measures and compare
to a radiologist's BI-RADS rating and interactive threshold percentage density.
In the experiments, two sets of mammograms of 80 patients from a double blind, placebo controlled HRT
experiment are used. The p-values for the statistical significance of the separation of density means, for the HRT
group and the placebo group at end of study, are 0.2, 0.1, 0.02 and 0.02 for the automatic threshold, BI-RADS,
the stripiness and the interactive threshold, respectively.
Validation of voxel-based morphometry (VBM) based on MRI
Author(s):
Xueyu Yang;
Kewei Chen;
Xiaojuan Guo;
Li Yao
Show Abstract
Voxel-based morphometry (VBM) is an automated and objective image analysis technique for detecting differences in
regional concentration or volume of brain tissue composition based on structural magnetic resonance (MR) images.
VBM has been widely used to evaluate brain morphometric differences between populations, but until now there has been no evaluation system for its validation. In this study, a quantitative and objective evaluation system was
established in order to assess VBM performance. We recruited twenty normal volunteers (10 males and 10 females, age
range 20-26 years, mean age 22.6 years). Firstly, several focal lesions (hippocampus, frontal lobe, anterior cingulate,
back of hippocampus, back of anterior cingulate) were simulated in selected brain regions using real MRI data. Secondly,
optimized VBM was performed to detect structural differences between groups. Thirdly, one-way ANOVA and post-hoc
test were used to assess the accuracy and sensitivity of the VBM analysis. The results revealed that VBM was a good detection tool in the majority of brain regions, even in regions that have been controversial in VBM studies, such as the hippocampus. Generally speaking, the more severe the focal lesion, the better the VBM performance; however, the size of the focal lesion had little effect on the VBM analysis.
Non-rigid registration methods assessment of 3D CT images for head-neck radiotherapy
Author(s):
Adriane Parraga;
Johanna Pettersson;
Altamiro Susin;
Mathieu De Craene;
Benoît Macq
Show Abstract
Intensity Modulated Radiotherapy is a new technique enabling the sculpting of the 3D radiation dose. It makes it possible to modulate the delivery of the dose inside the malignant areas and to constrain the radiation plan to protect important functional areas. It also raises the issues of adequacy and accuracy of the selection and delineation
of the target volumes. The delineation in the patient image of the tumor volume is highly time-consuming and
requires considerable expertise. In this paper we focus on atlas based automatic segmentation of head and neck
patients and compare two non-rigid registration methods: B-Spline and Morphons. To assess the quality of each
method, we took a set of four 3D CT patient images in which the organs at risk had previously been segmented by a doctor. After a preliminary affine registration, both non-rigid registration algorithms were applied to match the
patient and atlas images. Each deformation field resulting from the non-rigid registration was applied to the
masks corresponding to segmented regions in the atlas. The atlas based segmentation masks were compared to
manual segmentations performed by an expert. We conclude that the Morphons method performed better for matching all of the structures considered, improving the segmentation by 11% on average.
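The overlap between the propagated atlas masks and the manual segmentations is typically scored with a volume-overlap measure such as the Dice coefficient (the abstract does not name the exact metric, so this is an assumption):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```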
Computation of the mid-sagittal plane in diffusion tensor MR brain images
Author(s):
Sylvain Prima;
Nicolas Wiest-Daesslé
Show Abstract
We propose a method for the automated computation of the mid-sagittal plane of the brain in diffusion tensor
MR images. We estimate this plane as the one that best superposes the two hemispheres of the brain by reflection
symmetry. This is done via the automated minimisation of a correlation-type global criterion over the tensor
image. The minimisation is performed using the NEWUOA algorithm in a multiresolution framework. We
validate our algorithm on synthetic diffusion tensor MR images. We quantitatively compare this computed plane
with similar planes obtained from scalar diffusion images (such as FA and ADC maps) and from the B0 image
(that is, without diffusion sensitisation). Finally, we show some results on real diffusion tensor MR images.
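In 2D the same idea reduces to finding the best vertical symmetry axis. The sketch below replaces the NEWUOA-based continuous minimisation of the paper with a coarse integer-shift search over a correlation criterion, purely for illustration:

```python
import numpy as np

def symmetry_axis(img, max_offset=10):
    """Estimate the column position of the best vertical symmetry axis by a
    coarse search over shifts of the image's reflection (the paper instead
    minimises a correlation-type criterion with NEWUOA, to subpixel accuracy)."""
    W = img.shape[1]
    flipped = np.flip(img, axis=1)
    best_s, best_corr = 0, -np.inf
    for s in range(-max_offset, max_offset + 1):
        refl = np.roll(flipped, s, axis=1)   # reflection about a shifted axis
        corr = np.corrcoef(img.ravel(), refl.ravel())[0, 1]
        if corr > best_corr:
            best_corr, best_s = corr, s
    # Reflection about column c followed by shift s aligns when s = 2c - (W - 1)
    return (best_s + W - 1) / 2.0
```

For tensor images, the correlation would be computed over reoriented tensors rather than scalar intensities, which is the additional difficulty the paper addresses.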
Diffusion tensor sharpening improves white matter tractography
Author(s):
Maxime Descoteaux;
Christophe Lenglet;
Rachid Deriche
Show Abstract
Diffusion Tensor Imaging (DTI) is currently a widespread technique to infer white matter architecture in the
human brain. An important application of DTI is to understand the anatomical coupling between functional
cortical regions of the brain. To solve this problem, anisotropy maps are insufficient and fiber tracking methods
are used to obtain the main fibers. While the diffusion tensor (DT) is important to obtain anisotropy maps
and apparent diffusivity of the underlying tissue, fiber tractography using the full DT may result in diffusive
tracking that leaks into unexpected regions. Sharpening is thus of utmost importance to obtain complete and
accurate tracts. In the tracking literature, only heuristic methods have been proposed to deal with this problem.
We propose a new tensor sharpening transform. Analogously to the general issue with the diffusion and fiber Orientation Distribution Functions (ODF) encountered when working with High Angular Resolution Diffusion
Imaging (HARDI), we show how to transform the diffusion tensors into so-called fiber tensors. We demonstrate
that this tensor transform is a natural pre-processing task when one is interested in fiber tracking. It also leads
to a dramatic improvement of the tractography results obtained by front propagation techniques on the full
diffusion tensor. We compare and validate sharpening and tracking results on synthetic data and on known fiber
bundles in the human brain.
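A simple way to realise a sharpening transform, used here purely as an illustrative stand-in for the fiber-tensor map of the paper, is to raise the tensor's eigenvalues to a power greater than one, which increases anisotropy while preserving the eigenvectors and (after rescaling) the trace:

```python
import numpy as np

def sharpen_tensor(D, k=2.0):
    """Raise the eigenvalues of an SPD diffusion tensor to the power k,
    then rescale to preserve the trace; eigenvectors are unchanged."""
    w, V = np.linalg.eigh(D)
    w = np.clip(w, 1e-12, None) ** k
    S = V @ np.diag(w) @ V.T
    return S * (np.trace(D) / np.trace(S))

def fractional_anisotropy(D):
    w = np.linalg.eigvalsh(D)
    md = w.mean()
    return np.sqrt(1.5 * ((w - md) ** 2).sum() / (w ** 2).sum())
```

Feeding such sharpened tensors to a front-propagation tracker reduces the diffusive leakage described above, since the propagation speed becomes more sharply peaked along the principal fiber direction.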
DT-MRI segmentation using graph cuts
Author(s):
Yonas T. Weldeselassie;
Ghassan Hamarneh
Show Abstract
An important problem in medical image analysis is the segmentation of anatomical regions of interest. Once
regions of interest are segmented, one can extract shape, appearance, and structural features that can be analyzed
for disease diagnosis or treatment evaluation. Diffusion tensor magnetic resonance imaging (DT-MRI) is
a relatively new medical imaging modality that captures unique water diffusion properties and fiber orientation
information of the imaged tissues. In this paper, we extend the interactive multidimensional graph cuts segmentation
technique to operate on DT-MRI data by utilizing latest advances in tensor calculus and diffusion tensor
dissimilarity metrics. The user interactively selects certain tensors as object ("obj") or background ("bkg") to
provide hard constraints for the segmentation. Additional soft constraints incorporate information about both
regional tissue diffusion as well as boundaries between tissues of different diffusion properties. Graph cuts are
used to find globally optimal segmentation of the underlying 3D DT-MR image among all segmentations satisfying
the constraints. We develop a graph structure from the underlying DT-MR image with the tensor voxels
corresponding to the graph vertices and with graph edge weights computed using either Log-Euclidean or the
J-divergence tensor dissimilarity metric. The topology of our segmentation is unrestricted and both obj and bkg
segments may consist of several isolated parts. We test our method on synthetic DT data and apply it to real
2D and 3D MRI, providing segmentations of the corpus callosum in the brain and the ventricles of the heart.
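The edge weights can be sketched with the Log-Euclidean tensor distance, one of the two dissimilarity metrics named above; an n-link weight is then a Gaussian of that distance (`sigma` is an illustrative parameter):

```python
import numpy as np

def logm_spd(D):
    """Matrix logarithm of a symmetric positive-definite tensor."""
    w, V = np.linalg.eigh(D)
    return V @ np.diag(np.log(np.clip(w, 1e-12, None))) @ V.T

def log_euclidean_dist(D1, D2):
    diff = logm_spd(D1) - logm_spd(D2)
    return np.sqrt((diff ** 2).sum())

def edge_weight(D1, D2, sigma=1.0):
    """Boundary (n-link) weight: large for similar tensors, small across
    boundaries between tissues of different diffusion properties."""
    return np.exp(-log_euclidean_dist(D1, D2) ** 2 / (2.0 * sigma ** 2))
```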
Restoration of MRI data for field nonuniformities using high order neighborhood statistics
Author(s):
Stathis Hadjidemetriou;
Colin Studholme;
Susanne Mueller;
Mike Weiner;
Norbert Schuff
Show Abstract
MRI at high magnetic fields (> 3.0 T) is complicated by strong inhomogeneous radio-frequency fields, sometimes
termed the "bias field". These lead to nonuniformity of image intensity, greatly complicating further analysis
such as registration and segmentation. Existing methods for bias field correction are effective for 1.5 T or
3.0 T MRI, but are not completely satisfactory for higher field data. This paper develops an effective bias field
correction for high field MRI based on the assumption that the nonuniformity is smoothly varying in space. Also,
nonuniformity is quantified and unmixed using high order neighborhood statistics of intensity cooccurrences.
They are computed within spherical windows of limited size over the entire image. The restoration is iterative
and makes use of a novel stable stopping criterion that depends on the scaled entropy of the cooccurrence statistics, that is, the Shannon entropy of the cooccurrence statistics normalized to the effective dynamic range of the image; this quantity is a non-monotonic function of the iterations. The algorithm restores whole head data, is robust to
intense nonuniformities present in high field acquisitions, and is robust to variations in anatomy. This algorithm
significantly improves bias field correction in comparison to N3 on phantom 1.5 T head data and high field 4 T
human head data.
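The quantity driving the stopping criterion can be sketched as the normalised Shannon entropy of an intensity cooccurrence histogram; a bias field spreads the intensities of neighboring voxels across more bins and raises this entropy (a toy version: the spherical windows and the exact dynamic-range normalisation of the paper are simplified away):

```python
import numpy as np

def cooccurrence_entropy(img, bins=32):
    """Shannon entropy of the horizontal-neighbor intensity cooccurrence
    histogram, normalized by the maximum attainable entropy."""
    a, b = img[:, :-1].ravel(), img[:, 1:].ravel()
    h, _, _ = np.histogram2d(a, b, bins=bins)
    p = h / h.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum() / (2 * np.log2(bins)))
```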
Dynamic field mapping and distortion correction for fMRI
Author(s):
Ning Xu;
J. Michael Fitzpatrick
Show Abstract
Echo planar images (EPI) suffer from geometric distortions caused by static-field inhomogeneity. Correction techniques
have been suggested based on field maps obtained before or after the EPI acquisition. However, when a relatively long
time series of images is required, as in fMRI studies, the inhomogeneity varies from image to image because of gross
motion and physiological activity such as respiration and cardiac motion. It is not ideal to approximate the varying maps
of field inhomogeneity by means of one map. To overcome this limitation, multiple field maps are desirable for
correcting the distortions that are dynamically changing. Some groups have explored the possibility of acquiring
multiple field maps, but either the increased scan time is not affordable for most fMRI studies or the field map
acquisition is embedded in EPI pulse sequence, which produces a map of insufficient resolution to support a complete
distortion correction. In this paper, we propose a dynamic field mapping technique that uses a single reference image and
a single corresponding acquired field map and the phase information extracted from the complex image data of each EPI
image in the time series. From this information, a separate field map is then derived individually for each EPI image.
The derived field maps are then used for distortion correction. This approach, which is particularly suitable for fMRI
studies, can correct for image distortion that varies dynamically without sacrificing temporal resolution. We validate this
technique using simulated data, and the experimental results show improved performance in comparison to correction
using a single field map.
Analysis of free breathing motion using artifact reduced 4D CT image data
Author(s):
Jan Ehrhardt;
Rene Werner;
Thorsten Frenzel;
Wei Lu;
Daniel Low;
Heinz Handels
Show Abstract
The mobility of lung tumors during the respiratory cycle is a source of error in radiotherapy treatment planning.
Spatiotemporal CT data sets can be used for studying the motion of lung tumors and inner organs during the
breathing cycle.
We present methods for the analysis of respiratory motion using 4D CT data in high temporal resolution. An
optical flow based reconstruction method was used to generate artifact-reduced 4D CT data sets of lung cancer
patients. The reconstructed 4D CT data sets were segmented and the respiratory motion of tumors and inner
organs was analyzed.
A non-linear registration algorithm is used to calculate the velocity field between consecutive time frames of
the 4D data. The resulting velocity field is used to analyze trajectories of landmarks and surface points. By
this technique, the maximum displacement of any surface point is calculated, and regions with large respiratory
motion are marked. To describe the tumor mobility the motion of the lung tumor center in three orthogonal
directions is displayed. Estimated 3D appearance probabilities visualize the movement of the tumor during the
respiratory cycle in one static image. Furthermore, correlations between trajectories of the skin surface and the
trajectory of the tumor center are determined and skin regions are identified which are suitable for prediction of
the internal tumor motion.
The results of the motion analysis indicate that the described methods are suitable to gain insight into the
spatiotemporal behavior of anatomical and pathological structures during the respiratory cycle.
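The landmark-trajectory step can be sketched as repeated sampling of the inter-frame displacement fields (nearest-neighbor sampling for brevity; the paper obtains the fields by non-linear registration between consecutive frames):

```python
import numpy as np

def track_points(points, velocity_fields):
    """Propagate landmark points through a sequence of dense 2D velocity
    fields of shape (H, W, 2); returns the trajectory (T+1, n_points, 2)."""
    traj = [np.asarray(points, dtype=float)]
    for v in velocity_fields:
        p = traj[-1]
        idx = np.clip(np.round(p).astype(int), 0, np.array(v.shape[:2]) - 1)
        traj.append(p + v[idx[:, 0], idx[:, 1]])
    return np.stack(traj)

def max_displacement(traj):
    """Maximum displacement of each point relative to its start position."""
    return np.linalg.norm(traj - traj[0], axis=-1).max(axis=0)
```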
An algorithm to stabilize a sequence of thermal brain images
Author(s):
Boris Kovalerchuk;
Joseph Lemley;
Alexander M. Gorbach
Show Abstract
Complex challenges of optical imaging in diagnostics and surgical treatment require accurate image
registration/stabilization methods that remove only unwanted motions. An algorithm, SIAROI, is proposed for real-time subpixel registration of sequences of intraoperatively acquired infrared (thermal) brain images. The SIAROI algorithm is based
upon automatic, localized Subpixel Image Autocorrelation and a user-selected Region of Interest (ROI). Human
expertise about unwanted motions is added through a user-outlined ROI, using a low-accuracy free-hand paintbrush.
SIAROI includes: (a) propagating the user-outlined ROI by selecting pixels in the second image of the sequence, using
the same ROI; (b) producing SROI (sub-pixel ROI) by converting each pixel to k=NxN subpixels; (c) producing new
SROI in the second image by shifting SROI within plus or minus 6k subpixels; (d) finding an optimal autocorrelation
shift (x,y) within 12N that minimizes the Standard Deviation of Differences of Pixel Intensities (SDDPI) between
corresponding ROI pixels in both images, (e) shifting the second image by (x,y), repeating (a)-(e) for successive images
(t, t+1). In experiments, a user quickly outlined a non-deformable ROI (such as bone) in the first image of a sequence.
Alignment of 100 brain images (25600x25600 pixel search, after every pixel was converted to 100 sub-pixels), took ~3
minutes, which is 200 times faster (with an ROI/image ratio of 0.1) than global auto-correlation. SIAROI improved frame alignment by a factor of two, relative to global auto-correlation and tie-point-based registration methods, as measured by reductions in the SDDPI.
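Steps (b)-(d) amount to an exhaustive SDDPI search on an N-times upsampled grid; a compact sketch (np.kron stands in for the pixel-to-subpixel conversion, and wrap-around shifts are used for brevity):

```python
import numpy as np

def sddpi(a, b):
    """Standard Deviation of Differences of Pixel Intensities."""
    return np.std(a - b)

def best_subpixel_shift(ref, mov, N=4, max_shift=6):
    """Exhaustive search for the shift (in pixels, on a 1/N subpixel grid)
    that minimizes the SDDPI between the reference and the moved image."""
    up = lambda im: np.kron(im, np.ones((N, N)))   # each pixel -> NxN subpixels
    R, M = up(ref), up(mov)
    best, best_val = (0.0, 0.0), np.inf
    for dy in range(-max_shift * N, max_shift * N + 1):
        for dx in range(-max_shift * N, max_shift * N + 1):
            v = sddpi(R, np.roll(np.roll(M, dy, axis=0), dx, axis=1))
            if v < best_val:
                best_val, best = v, (dy / N, dx / N)
    return best
```

Restricting the search to a user-outlined ROI, as SIAROI does, is what makes this exhaustive search fast enough for intraoperative use.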
3D reconstruction of highly fragmented bone fractures
Author(s):
Andrew Willis;
Donald Anderson;
Thad Thomas;
Thomas Brown;
J. Lawrence Marsh
Show Abstract
A system for the semi-automatic reconstruction of highly fragmented bone fractures, developed to aid in treatment
planning, is presented. The system aligns bone fragment surfaces derived from segmentation of volumetric
CT scan data. Each fragment surface is partitioned into intact- and fracture-surfaces, corresponding more or less
to cortical and cancellous bone, respectively. A user then interactively selects fracture-surface patches in pairs
that coarsely correspond. A final optimization step is performed automatically to solve the N-body rigid alignment
problem. The work represents the first example of a 3D bone fracture reconstruction system and addresses
two new problems unique to the reconstruction of fractured bones: (1) non-stationary noise inherent in surfaces
generated from a difficult segmentation problem and (2) the possibility that a single fracture surface on a
fragment may correspond to many other fragments.
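The pairwise building block of the N-body rigid alignment is the least-squares rigid transform between two corresponded fracture-surface point sets, obtainable in closed form via the Kabsch algorithm; the full problem then optimizes all fragment poses jointly:

```python
import numpy as np

def rigid_align(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q,
    i.e. minimizing sum ||R p_i + t - q_i||^2 (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```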
Determination of 3D location and rotation of lumbar vertebrae in CT images by symmetry-based auto-registration
Author(s):
Tomaž Vrtovec;
Boštjan Likar;
Franjo Pernuš
Show Abstract
Quantitative measurement of vertebral rotation is important in surgical planning, analysis of surgical results, and
monitoring of the progression of spinal deformities. However, many established and newly developed techniques
for measuring axial vertebral rotation do not exploit three-dimensional (3D) information, which may result in
virtual axial rotation because of the sagittal and coronal rotation of vertebrae. We propose a novel automatic
approach to the measurement of the location and rotation of vertebrae in 3D without prior volume reformation,
identification of appropriate cross-sections or aid by statistical models. The vertebra under investigation is
encompassed by a mask in the form of an elliptical cylinder in 3D, defined by its center of rotation and the
rotation angles. We exploit the natural symmetry of the vertebral body, vertebral column and vertebral canal by
dividing the vertebral mask by its mid-axial, mid-sagittal and mid-coronal plane, so that the obtained volume
pairs contain symmetrical parts of the observed anatomy. Mirror volume pairs are then simultaneously registered
to each other by robust rigid auto-registration, using the weighted sum of absolute differences between the
intensities of the corresponding volume pairs as the similarity measure. The method was evaluated on 50 lumbar
vertebrae from normal and scoliotic computed tomography (CT) spinal scans, showing relatively large capture
ranges and distinctive maxima at the correct locations and rotation angles. The proposed method may aid the
measurement of the dimensions of vertebral pedicles, foraminae and canal, and may be a valuable tool for clinical
evaluation of the spinal deformities in 3D.
Compensation of global movement for improved tracking of cells in time-lapse confocal microscopy image sequences
Author(s):
Il-Han Kim;
William J. Godinez;
Nathalie Harder;
Felipe Mora-Bermúdez;
Jan Ellenberg;
Roland Eils;
Karl Rohr
Show Abstract
A bottleneck for high-throughput screening of live cells is the automated analysis of the generated image data.
An important application in this context is the evaluation of the duration of cell cycle phases from confocal time-lapse
microscopy image sequences, which typically involves a tracking step. The tracking step is an important
part since it relates segmented cells from one time frame to the next. However, a main problem is that often the
movement of single cells is superimposed with a global movement. The reason for the global movement lies in
the high-throughput acquisition of the images and the repositioning of the microscope. If a tracking algorithm
is applied to these images then only a superposition of the microscope movement and the cell movement is
determined but not the real movement of the cells. In addition, since the displacements are generally larger, it
is more difficult to determine the correspondences between cells. We have developed a phase-correlation based
approach to compensate for the global movement of the microscope by registering each image of a sequence to a
reference coordinate system. Our approach uses a windowing function in the spatial domain of the cross-power
spectrum. This allows the global movement to be determined by direct evaluation of the phase gradient, avoiding phase unwrapping. We present experimental results of applying our approach to synthetic and real image sequences. It turns out that the global movement can be compensated well, successfully decoupling it from the individual movement of the cells.
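The classical phase-correlation estimate of a global translation looks as follows (for brevity this sketch locates the inverse-FFT peak rather than the windowed phase-gradient evaluation described in the abstract):

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer global translation taking image b to image a
    from the phase of the cross-power spectrum."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    F /= np.abs(F) + 1e-12                      # keep only the phase
    corr = np.real(np.fft.ifft2(F))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak coordinates to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)
```

Subtracting the estimated microscope shift from each frame then leaves only the individual cell movement for the tracking step.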
Multimodal image registration of ex vivo 4 Tesla MRI with whole mount histology for prostate cancer detection
Author(s):
Jonathan Chappelow;
Anant Madabhushi;
Mark Rosen;
John Tomaszeweski;
Michael Feldman
Show Abstract
In this paper we present novel methods for registration and subsequent evaluation of whole mount prostate histological
sections to corresponding 4 Tesla ex vivo magnetic resonance imaging (MRI) slices to complement our
existing computer-aided diagnosis (CAD) system for detection of prostatic adenocarcinoma from high resolution
MRI. The CAD system is trained using voxels labeled as cancer on MRI by experts who visually aligned histology
with MRI. To address voxel labeling errors on account of manual alignment and delineation, we have developed
a registration method called combined feature ensemble mutual information (COFEMI) to automatically map
spatial extent of prostate cancer from histology onto corresponding MRI for prostatectomy specimens. Our
method improves over intensity-based similarity metrics (mutual information) by incorporating unique information
from feature spaces that are relatively robust to intensity artifacts and which accentuate the structural
details in the target and template images to be registered. Our registration algorithm accounts for linear gland
deformations in the histological sections resulting from gland fixing and serial sectioning. Following automatic
registration of MRI and histology, cancer extent from histological sections is mapped to the corresponding
registered MRI slices. The manually delineated cancer areas on MRI obtained via manual alignment of histological
sections and MRI are compared with corresponding cancer extent obtained via COFEMI by a novel
registration evaluation technique based on use of non-linear dimensionality reduction (locally linear embedding
(LLE)). The cancer map on MRI determined by COFEMI was found to be significantly more accurate than
the manually determined cancer mask. The performance of COFEMI was also found to be superior to
image intensity-based mutual information registration.
Analysis of point-to-point lung motion with full inspiration and expiration CT data using non-linear optimization method: optimal geometric assumption model for the effective registration algorithm
Author(s):
Namkug Kim;
Joon Beom Seo;
Jeong Nam Heo;
Suk-Ho Kang
Show Abstract
The study was conducted to develop a simple model for more robust lung registration of volumetric CT data, which is
essential for various clinical lung analysis applications, including lung nodule matching in follow-up CT studies and
semi-quantitative assessment of lung perfusion. The purpose of this study is to find the most effective reference
point and geometric model based on lung motion analysis from CT data sets obtained at full inspiration (In.) and
expiration (Ex.). Ten pairs of CT data sets from normal subjects obtained at full In. and Ex. were used in this study. Two
radiologists were asked to mark 20 points representing the subpleural point of the central axis in each segment. The
apex, hilar point, and center of inertia (COI) of each unilateral lung were proposed as reference points. To find the
optimal expansion point, non-linear optimization without constraints was employed. The objective function is the sum of
distances from the optimal point x to the lines formed by the corresponding points between In. and Ex. Using the non-linear
optimization, the optimal point was estimated and compared among the reference points. The average distance
between the optimal point and each line segment revealed that the balloon model was more suitable for explaining lung
expansion. This lung motion analysis, based on vector analysis and non-linear optimization, shows that a balloon
model centered on the center of inertia of the lung is the most effective geometric model for explaining lung expansion during breathing.
Computerized method for measurement of displacement vectors of target positions on EPID cine images in stereotactic radiotherapy
Author(s):
Hidetaka Arimura;
Shigeo Anai;
Satoshi Yoshidome;
Katsumasa Nakamura;
Yoshiyuki Shioyama;
Satoshi Nomoto;
Hiroshi Honda;
Yoshihiko Onizuka;
Hiromi Terashima
Show Abstract
The purpose of this study was to develop a computerized method for measuring displacement vectors of the target
position on electronic portal imaging device (EPID) cine images in treatments without implanted markers. Our
proposed method is based on a template matching technique using the cross-correlation coefficient between a reference
portal (RP) image and each consecutive portal (CP) image acquired by the EPID. EPID images with 512×384 pixels
(pixel size: 0.56 mm) were acquired in cine mode at a sampling rate of 0.5 frames/sec using an energy of 4, 6, or
10 MV on linear accelerators. The displacement vector of the target on each cine image was determined as the
position at which the cross-correlation value between the RP image and the CP image was maximal. We applied our
method to EPID cine images of a lung phantom with a tumor model simulating respiratory motion, as well as to 5 cases of
non-small cell lung cancer and one case of metastasis. To validate our proposed method, displacement vectors of the
target position calculated by our method were compared with those determined manually by two radiation oncologists.
For the lung phantom images, target displacements obtained by our method correlated well with those of the oncologists
(r = 0.972 - 0.994). Correlation values for 4 cases ranged from 0.854 to 0.991, but the values for the other two cases
were 0.609 and 0.644. These preliminary results suggest that our method may be useful for monitoring displacement
vectors of target positions without implanted markers in stereotactic radiotherapy.
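The template-matching step above can be sketched as a brute-force search for the offset maximizing the normalized cross-correlation coefficient. This is a minimal illustration, not the authors' implementation; function and variable names are our own.

```python
import numpy as np

def match_template_ncc(reference, frame):
    """Locate a reference template inside a larger frame by sliding it
    over every offset and keeping the one with the highest normalized
    cross-correlation coefficient."""
    th, tw = reference.shape
    fh, fw = frame.shape
    ref = reference - reference.mean()
    ref_norm = np.sqrt((ref ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            win = frame[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = np.sqrt((w ** 2).sum()) * ref_norm
            if denom == 0:
                continue  # flat window: correlation undefined
            score = (ref * w).sum() / denom
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```

The displacement vector is then the difference between the matched position in each CP image and the target position in the RP image.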
Motion detection and pattern tracking in microscopical images using phase correlation approach
Author(s):
Evgeny Gladilin;
Constantin Kappel;
Roland Eils
Show Abstract
High-throughput live-cell imaging is one of the important tools for investigating cellular structure and
function in modern experimental biology. Automatic processing of time series of microscopic images is hampered
by a number of technical and natural factors, such as permanent movements of cells in the optical field, changes
in optical cell appearance, and high levels of noise. Detection and compensation of the global motion of groups of cells,
or the relocation of a single cell within a dynamic multi-cell environment, is the first indispensable step in the image
analysis chain. This article presents an approach for detecting global image motion and tracking single cells in
time series of confocal laser scanning microscopy images using an extended Fourier-phase correlation technique,
which allows for the analysis of non-uniform multi-body motion in partially similar images. Our experimental results
show that the developed approach is capable of performing cell tracking and registration in dynamic and
noisy scenes, and provides a robust tool for fully automatic registration of time series of microscopic images.
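The basic (single-motion) phase correlation step that the extended technique builds on can be sketched as follows; the multi-body extension described in the paper is not reproduced here.

```python
import numpy as np

def phase_correlation(img_a, img_b):
    """Estimate the integer translation between two equally sized images
    from the peak of the inverse FFT of their normalized cross-power
    spectrum."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase, drop magnitude
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak coordinates to signed shifts
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx
```

Because only the phase of the cross-power spectrum is kept, the correlation peak is sharp and robust against uniform illumination changes and noise.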
Accelerated 3D image registration
Author(s):
Martin Vester-Christensen;
Søren G. Erbou;
Sune Darkner;
Rasmus Larsen
Show Abstract
Image registration is an important task in most medical imaging applications. Numerous algorithms have been
proposed and some are widely used. However, due to the vast amount of data collected by, e.g., a computed
tomography (CT) scanner, most registration algorithms are very slow and memory consuming. This is a huge
problem especially in atlas building, where potentially hundreds of registrations are performed. This paper
describes an approach for accelerated image registration. A grid-based warp function proposed by Cootes and
Twining, parameterized by the displacement of the grid nodes, is used. Using a coarse-to-fine approach, the
composition of small diffeomorphic warps results in a final diffeomorphic warp. Normally the registration is
done using a standard gradient-based optimizer, but to obtain a fast algorithm the optimization is formulated in
the inverse compositional framework proposed by Baker and Matthews. By switching the roles of the target and
the input volume, the Jacobian and the Hessian can be pre-calculated resulting in a very efficient optimization
algorithm. By exploiting the local nature of the grid-based warp, the storage requirements of the Jacobian and
the Hessian can be minimized. Furthermore, it is shown that additional constraints on the registration, such
as the location of markers, are easily embedded in the optimization. The method is applied to volumes built
from CT scans of pig carcasses, and results show a two-fold increase in speed using the inverse compositional
approach versus the traditional gradient-based method.
Local mismatch location and spatial scale detection in image registration
Author(s):
R. Narayanan;
J. A. Fessler;
B. Ma;
H. Park;
C. R. Meyer
Show Abstract
Image registration is now a well understood problem, and several techniques using a combination of cost functions,
transformation models and optimizers have been reported in the medical imaging literature. Parametric methods
often rely on the efficient placement of control points in the images, that is, placement guided by the location and scale
at which the images are mismatched. A poor choice of parameterization results in deformations not being modeled
accurately, or in over-parameterization, where control points may lie in homogeneous regions with low sensitivity to
the cost. This lowers computational efficiency due to the high complexity of the search space and might also produce
transformations that are not physically meaningful, and possibly folded.
Adaptive methods that parameterize based on the mismatch between images have been proposed. In such methods, the
cost measure must be normalized, and heuristics such as how many points to pick, the resolution of the grids, the choice of
gradient thresholds and when to refine the scale must be ascertained, in addition to the limitation of working
only at a few discrete scales.
In this paper we identify mismatch by searching the entire image and a wide range of smooth spatial scales.
The mismatch vector, containing the location and scale of mismatch, is computed from peaks in the local joint
entropy. Results show that this method can quickly and effectively locate mismatched regions where control
points can be placed in preference to other regions, speeding up registration.
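The idea of flagging mismatch through local joint entropy can be illustrated on a fixed grid of windows: where two aligned images agree, the joint intensity histogram is nearly diagonal and its entropy is low; where they disagree, the joint entropy rises. This is a simplified single-scale sketch (window size and bin count are illustrative), not the paper's multi-scale search.

```python
import numpy as np

def local_joint_entropy(img_a, img_b, win=16, bins=16):
    """Joint entropy of two aligned images inside non-overlapping
    windows; high values flag locally mismatched regions."""
    h, w = img_a.shape
    out = np.zeros((h // win, w // win))
    for i in range(h // win):
        for j in range(w // win):
            a = img_a[i * win:(i + 1) * win, j * win:(j + 1) * win].ravel()
            b = img_b[i * win:(i + 1) * win, j * win:(j + 1) * win].ravel()
            hist, _, _ = np.histogram2d(a, b, bins=bins)
            p = hist / hist.sum()
            p = p[p > 0]
            out[i, j] = -(p * np.log(p)).sum()  # joint entropy in nats
    return out
```

Peaks of such a map indicate where additional control points would be most useful.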
Mahalanobis distance based iterative closest point
Author(s):
Mads Fogtmann Hansen;
Morten Rufus Blas;
Rasmus Larsen
Show Abstract
This paper proposes an extension to the standard iterative closest point method (ICP). In contrast to ICP,
our approach (ICP-M) uses the Mahalanobis distance to align a set of shapes thus assigning an anisotropic
independent Gaussian noise to each point in the reference shape.
The paper introduces the notion of a Mahalanobis distance map over a point set with associated covariance
matrices, which, in addition to providing a correlation-weighted distance, implicitly provides a method for assigning
correspondence during alignment. This distance map yields an easy formulation of the ICP problem that
permits a fast optimization.
Initially, the covariance matrices are set to the identity matrix, and all shapes are aligned to a randomly selected
shape (equivalent to standard ICP). From this point the algorithm iterates between the steps: (a) obtain mean
shape and new estimates of the covariance matrices from the aligned shapes, (b) align shapes to the mean shape.
Three different methods for estimating the mean shape with associated covariance matrices are explored in the
paper.
The proposed methods are validated experimentally on two separate datasets (the IMM face dataset and femur bones).
The superiority of ICP-M over ICP in recovering the underlying correspondences in the face
dataset is demonstrated.
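The anisotropic correspondence step at the heart of ICP-M can be sketched as follows: each reference point carries its own covariance, and the moving point is matched to the reference point minimizing the Mahalanobis distance. Names are illustrative; standard ICP is recovered when every covariance is the identity.

```python
import numpy as np

def mahalanobis_correspondences(points, ref_points, ref_covs):
    """For each moving point, pick the reference point minimizing the
    Mahalanobis distance under that reference point's covariance."""
    inv_covs = np.linalg.inv(ref_covs)          # (m, d, d)
    matches = np.empty(len(points), dtype=int)
    for i, p in enumerate(points):
        diff = ref_points - p                    # (m, d)
        # d2_j = diff_j^T C_j^{-1} diff_j for every reference point j
        d2 = np.einsum("md,mde,me->m", diff, inv_covs, diff)
        matches[i] = np.argmin(d2)
    return matches
```

A reference point with large variance along some direction "attracts" correspondences from further away along that direction, which is exactly the anisotropic weighting the paper exploits.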
Fast interactive elastic registration of 12-bit multi-spectral images with subvoxel accuracy using display hardware
Author(s):
Herke Jan Noordmans;
Rowland de Roode;
Rudolf Verdaasdonk
Show Abstract
Multi-spectral images of human tissue taken in vivo often contain image alignment problems, as patients have difficulty
retaining their posture during the 20-second acquisition time. Previous attempts to correct motion
errors with image registration software developed for MR or CT data showed that these algorithms are too
slow and error-prone for practical use with multi-spectral images. A new software package has been developed which
allows the user to play a decisive role in the registration process, as the user can monitor the progress of the registration
continuously and force it in the right direction when it starts to fail. The software efficiently exploits video card hardware
to gain speed and to provide a perfect subvoxel correspondence between the registration field and the display. An 8-bit graphics
card was used to efficiently register and resample 12-bit images using the hardware interpolation modes present on the
graphics card. To show the feasibility of this new registration process, the software was applied in clinical practice to
evaluate the dosimetry of psoriasis and KTP laser treatment. The microscopic differences between images of normal
skin and skin exposed to UV light showed that an affine registration step including zooming and slanting is critical for a
subsequent elastic match to succeed. The combination of user-interactive registration software with optimal use of the
potential of PC video card hardware greatly improves the speed of multi-spectral image registration.
Feature-based pairwise retinal image registration by radial distortion correction
Author(s):
Sangyeol Lee;
Michael D. Abràmoff;
Joseph M. Reinhardt
Show Abstract
Fundus camera imaging is widely used to document disorders such as diabetic retinopathy and macular degeneration.
Multiple retinal images can be combined together through a procedure known as mosaicing to form
an image with a larger field of view. Mosaicing typically requires multiple pairwise registrations of partially
overlapped images. We describe a new method for pairwise retinal image registration. The proposed method is
unique in that the radial distortion due to image acquisition is corrected prior to the geometric transformation.
Vessel lines are detected using the Hessian operator and are used as input features to the registration. Since
the overlapping region is typically small in a retinal image pair, only a few correspondences are available, thus
limiting the applicable model to an affine transform at best. To recover the distortion due to the curved surface of the
retina and the lens optics, a combined approach of an affine model with a radial distortion correction is proposed.
The parameters of the image acquisition and radial distortion models are estimated during an optimization step
that uses Powell's method driven by the vessel line distance. Experimental results using 20 pairs of green channel
images acquired from three subjects with a fundus camera confirmed that the affine model with distortion
correction could register retinal image pairs to within 1.88±0.35 pixels accuracy (mean ± standard deviation)
as assessed by vessel line error, which is 17% better than the affine-only approach. Because the proposed method
needs only two correspondences, it can be applied to obtain good registration accuracy even in the case of small
overlap between retinal image pairs.
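A radial distortion model of the kind discussed above can be sketched with a single polynomial coefficient; the specific form p' = c + (p - c)(1 + k r²) and the parameter k are illustrative, not the paper's calibrated camera model.

```python
import numpy as np

def radial_distort(points, center, k):
    """Apply single-coefficient radial distortion: displace each point
    away from the center by a factor (1 + k * r^2)."""
    d = points - center
    r2 = (d ** 2).sum(axis=1, keepdims=True)
    return center + d * (1.0 + k * r2)

def radial_undistort(points, center, k, iters=10):
    """Invert the distortion by fixed-point iteration: repeatedly refine
    the undistorted offset that maps to the observed one."""
    d = points - center
    und = d.copy()
    for _ in range(iters):
        r2 = (und ** 2).sum(axis=1, keepdims=True)
        und = d / (1.0 + k * r2)
    return center + und
```

Composing such a correction with an affine transform gives the combined model whose parameters the paper estimates with Powell's method.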
Large volume reconstruction from laser scanning microscopy using micro-CT as a template for deformation compensation
Author(s):
A. Subramanian;
A. Krol;
A. H. Poddar;
R. L. Price;
R. Swarnkar;
D. H. Feiglin
Show Abstract
In biomedical research, there is an increased need for reconstruction of large soft tissue volumes (e.g. whole organs) at
the microscopic scale from images obtained using laser scanning microscopy (LSM) with fluorescent dyes targeting
selected cellular features. However, LSM allows reconstruction of volumes not exceeding a few hundred µm in size, and
most LSM procedures require physical sectioning of the soft tissue, resulting in tissue deformation. Micro-CT (µCT) can
provide a deformation-free tomographic image of the whole tissue volume before sectioning. Even though the spatial
resolution of µCT is around 5 µm and its contrast resolution is poor, it can provide information on external and
internal interfaces of the investigated volume and can therefore be used as a template in the volume reconstruction
from a very large number of LSM images. Here we present a method for accurate 3D reconstruction of the murine heart
from a large number of images obtained using confocal LSM. The volume is reconstructed in the following steps: (i)
montage synthesis of individual LSM images to form a set of aligned optical planes within a given physical section; (ii)
image enhancement and segmentation to correct for non-uniform illumination and noise; (iii) volume matching of a
synthesized physical section to a corresponding sub-volume of the µCT; (iv) affine registration of the physical section to
the selected µCT sub-volume. We observe correct gross alignment of the physical sections. However, many sections
still exhibit local misalignment that can only be corrected via local non-rigid registration to the µCT template, which we
plan to do in the future.
Radial subsampling for fast cost function computation in intensity-based 3D image registration
Author(s):
Thomas Boettger;
Ivo Wolf;
Hans-Peter Meinzer;
Juan Carlos Celi
Show Abstract
Image registration is always a trade-off between accuracy and speed. Looking towards clinical scenarios, the
time for bringing two or more images into registration should be only a few seconds. We present a new
scheme for subsampling 3D image data to allow for efficient computation of cost functions in intensity-based
image registration. Starting from an arbitrary center point, voxels are sampled along scan lines which extend radially
from the center point. We analyzed the characteristics of different cost functions computed on the subsampled
data and compared them to known cost functions with respect to local optima. Results show the cost
functions are smooth and give high peaks at the expected optima. Furthermore, we investigated the capture range of
cost functions computed under the new subsampling scheme. The capture range was remarkably better for the new
scheme compared to metrics using all voxels or different subsampling schemes, and high registration accuracy
was achieved as well. The most important result is the improvement in terms of speed, making this scheme
very interesting for clinical scenarios. We conclude that, using the new subsampling scheme, intensity-based 3D image
registration can be performed much faster than with other approaches while maintaining high accuracy. A
variety of extensions of the new approach is conceivable, e.g. a non-regular distribution of the scan lines,
or letting the scan lines start not from a center point but, for example, from the surface of an organ model.
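The radial sampling pattern described above can be sketched in 2D as follows (the 3D version distributes scan-line directions over a sphere); the counts and radius are illustrative parameters, not the paper's settings.

```python
import numpy as np

def radial_samples(center, n_lines=32, n_per_line=20, max_radius=50.0):
    """Generate 2D sample coordinates along scan lines that extend
    radially from a center point."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_lines, endpoint=False)
    radii = np.linspace(0.0, max_radius, n_per_line + 1)[1:]  # skip r = 0
    # outer product: every (angle, radius) pair becomes one sample point
    ys = center[0] + np.outer(np.sin(angles), radii)
    xs = center[1] + np.outer(np.cos(angles), radii)
    return np.stack([ys.ravel(), xs.ravel()], axis=1)
```

A cost function is then evaluated only at these coordinates (with interpolation) instead of over all voxels, which is where the speed-up comes from.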
Registration-based initialization during radiation therapy planning
Author(s):
Girish Gopalakrishnan;
Rakesh Mullick
Show Abstract
An established challenge in the field of image analysis is the registration of images having a large initial
misalignment. For example, in chemotherapy and Radiation Therapy Planning (RTP), there is often a need to register an image
delineating a specific anatomy (usually in the surgery position) with a whole body image (obtained pre-operatively).
In such a scenario, there is room for a large misalignment between the two images that need to be
aligned. Large misalignments are traditionally handled in two ways: 1) semi-automatically, with a user initialization, or 2)
with the help of the origin fields in the image header. The first approach is user dependent, and the second method can be
used only if the two images are obtained from the same scanner with consistent origins. Our methodology extends a
typical registration framework by selecting components that are capable of searching a large parameter space without
settling on local optima. We have used an optimizer based on an evolutionary scheme along with an information-theoretic
similarity metric that can address these needs. The attempt in this study is to convert a large misalignment
problem into a small misalignment problem that can then be handled using application-specific registration algorithms.
Further improvements in local areas can be obtained by subjecting the image to a non-rigid transformation. We have
successfully registered the following pairs of images without any user initialization: CTAC - simCT (neuro, lungs); MR-PET/CT
(neuro, liver); T2-SPGR (neuro).
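An evolutionary optimizer searching a large parameter space without user initialization can be sketched with SciPy's differential evolution. This is a deliberately simplified toy: the paper uses an information-theoretic metric and more degrees of freedom, while this sketch recovers only a 2D translation under a sum-of-squared-differences cost.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import differential_evolution

def register_translation_de(fixed, moving, max_shift=8.0, seed=0):
    """Recover a 2D translation aligning 'moving' to 'fixed' by
    minimizing SSD with an evolutionary (differential evolution)
    optimizer over a wide search range."""
    def cost(params):
        moved = nd_shift(moving, params, order=1, mode="nearest")
        return float(((fixed - moved) ** 2).sum())

    bounds = [(-max_shift, max_shift), (-max_shift, max_shift)]
    result = differential_evolution(cost, bounds, seed=seed, maxiter=40,
                                    tol=1e-6, polish=True)
    return result.x, result.fun
```

The population-based search is what makes the approach tolerant of large initial misalignments; the result can then seed a conventional local registration.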
Optimizing bone extraction in MR images for 3D/2D gradient based registration of MR and x-ray images
Author(s):
Primož Markelj;
Dejan Tomaževič;
Franjo Pernuš;
Boštjan Likar
Show Abstract
A number of intensity and feature based methods have been proposed for 3D to 2D registration. However,
for multimodal 3D/2D registration of MR and X-ray images, only hybrid and reconstruction-based methods
were shown to be feasible. In this paper we optimize the extraction of features in the form of bone edge
gradients, which were proposed for 3D/2D registration of MR and X-ray images. The assumption behind
such multimodal registration is that the extracted gradients in 2D X-ray images match well to the corresponding
gradients extracted in 3D MR images. However, since MRI and X-rays are fundamentally different modalities, the
corresponding bone edge gradients may not appear in the same position, and the above-mentioned assumption
may thus not be valid. To test the validity of this assumption, we optimized the extraction of bone edges
in 3D MR and also in CT images for the registration to 2D X-ray images. The extracted bone edges were
systematically displaced in the direction of their gradients, i.e. in the direction of the normal to the bone
surface, and corresponding effects on the accuracy and convergence of 3D/2D registration were evaluated. The
evaluation was performed on two different sets of MR, CT and X-ray images of spine phantoms with a known gold
standard, one consisting of five vertebrae and the other of eight. The results showed that better registration
can be obtained if bone edges in MR images are optimized for each application-specific MR acquisition protocol.
Evaluating a method for automated rigid registration
Author(s):
Sune Darkner;
Martin Vester-Christensen;
Rasmus Larsen;
Rasmus R. Paulsen
Show Abstract
We evaluate a novel method for fully automated rigid registration of 2D manifolds in 3D space based on distance
maps, the Gibbs sampler and Iterated Conditional Modes (ICM). The method is tested against ICP, considered
the gold standard for automated rigid registration. Furthermore, the influence of different norms and sampling
point densities is evaluated. The performance of the two methods has been evaluated on data consisting of 178
scanned ear impressions taken from the right ear. To quantify the difference between the two methods we calculate
the registration cost and the mean point-to-point distance. A t-test for a common mean is used to determine
the relative performance of the two methods (supported by a Wilcoxon signed rank test). The influence of
sampling density, sampling quantity, and norms on performance is analyzed using a similar method.
Assistance to neurosurgical planning: using a fuzzy spatial graph model of the brain for locating anatomical targets in MRI
Author(s):
Alice Villéger;
Lemlih Ouchchane;
Jean-Jacques Lemaire;
Jean-Yves Boire
Show Abstract
Symptoms of neurodegenerative pathologies such as Parkinson's disease can be relieved through Deep Brain
Stimulation. This neurosurgical technique relies on high precision positioning of electrodes in specific areas of
the basal ganglia and the thalamus. These subcortical anatomical targets must be located at pre-operative stage,
from a set of MRI acquired under stereotactic conditions. In order to assist surgical planning, we designed a
semi-automated image analysis process for extracting anatomical areas of interest.
Complementary information, provided by both patient's data and expert knowledge, is represented as fuzzy
membership maps, which are then fused by means of suitable possibilistic operators in order to achieve the
segmentation of targets. More specifically, theoretical prior knowledge on brain anatomy is modelled within
a 'virtual atlas' organised as a spatial graph: a list of vertices linked by edges, where each vertex represents
an anatomical structure of interest and contains relevant information such as tissue composition, whereas each
edge represents a spatial relationship between two structures, such as their relative directions. The model is
built using heterogeneous sources of information such as qualitative descriptions from the expert, or quantitative
information from prelabelled images.
For each patient, tissue membership maps are extracted from the MR data through a classification step. The prior
model and the patient's data are then matched using a search algorithm (or 'strategy') which simultaneously
computes an estimate of the location of every structure. The method was tested on 10 clinical images,
with promising results. Location and segmentation results were statistically assessed, opening perspectives for
enhancements.
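The spatial-graph atlas described above (vertices for anatomical structures, edges for spatial relationships) can be sketched as a small data structure. Names, attributes and relations here are illustrative placeholders, not the paper's actual model.

```python
from dataclasses import dataclass, field

@dataclass
class Structure:
    """A vertex of the spatial graph: one anatomical structure with
    descriptive attributes such as tissue composition."""
    name: str
    tissue: dict = field(default_factory=dict)  # e.g. fraction per tissue class

@dataclass
class SpatialGraph:
    """Minimal sketch of the 'virtual atlas': vertices are structures,
    edges carry directed spatial relations such as relative direction."""
    vertices: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (from, to, relation)

    def add_structure(self, s):
        self.vertices[s.name] = s

    def relate(self, a, b, relation):
        self.edges.append((a, b, relation))

    def relations_of(self, name):
        return [(b, r) for a, b, r in self.edges if a == name]
```

In the paper, each vertex and relation is turned into a fuzzy membership map before fusion; this sketch only captures the graph organisation itself.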
A robust optimization strategy for intensity-based 2D/3D registration of knee implant models to single-plane fluoroscopy
Author(s):
J. Hermans;
P. Claes;
J. Bellemans;
D. Vandermeulen;
P. Suetens
Show Abstract
During intensity-based 2D/3D registration of 3D CAD models of knee implant components to a calibrated
single-plane fluoroscopy image, a similarity measure between the fluoroscopy and a rendering of the model onto
the image plane is maximized w.r.t. the 3D pose parameters of the model. This work focuses on a robust
strategy for finding this maximum by extending the standard Powell optimization algorithm with problem-specific
knowledge. A combination of feature-based and intensity-based methods is proposed. At each iteration
of the optimization process, feature information is used to compute an additional search direction along which the
image-based similarity measure is maximized. Hence, the advantages of intensity-based registration (accuracy)
and feature-based registration (robustness) are combined. The proposed method is compared with the standard
Powell optimization strategy, using an image-based similarity measure only, on simulated fluoroscopy images.
It is shown that the proposed method generally has higher accuracy, greater robustness and better convergence
behavior. Although introduced for the registration of 3D CAD models of knee implant components to single-plane
fluoroscopy images, the optimization strategy is easily extensible and applicable to other 2D/3D registration
applications.
Non-rigid multi-modal registration on the GPU
Author(s):
Christoph Vetter;
Christoph Guetter;
Chenyang Xu;
Rüdiger Westermann
Show Abstract
Non-rigid multi-modal registration of images/volumes is becoming increasingly necessary in many medical settings.
While efficient registration algorithms have been published, the speed of the solutions is a problem in
clinical applications. Harnessing the computational power of the graphics processing unit (GPU) for general-purpose
computations has become increasingly popular as a way to speed up algorithms further, but the algorithms have
to be adapted to the data-parallel, streaming model of the GPU. This paper describes the implementation of
a non-rigid, multi-modal registration using mutual information and the Kullback-Leibler divergence between
observed and learned joint intensity distributions. The entire registration process is implemented on the GPU,
including a GPU-friendly computation of two-dimensional histograms using vertex texture fetches as well as an
implementation of recursive Gaussian filtering on the GPU. Since the computation is performed on the GPU,
interactive visualization of the registration process can be done without bus transfer between main memory
and video memory. This allows the user to observe the registration process and to evaluate the result more
easily. Two hybrid approaches distributing the computation between the GPU and CPU are discussed. The first
approach uses the CPU for lower resolutions and the GPU for higher resolutions, the second approach uses the
GPU to compute a first approximation to the registration that is used as starting point for registration on the
CPU using double-precision. The results of the CPU implementation are compared to the different approaches
using the GPU regarding speed as well as image quality. The GPU performs up to 5 times faster per iteration
than the CPU implementation.
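The mutual-information computation that the GPU histogram step feeds into can be written as a compact CPU reference; the bin count is an illustrative choice, and this sketch omits the Kullback-Leibler prior term the paper adds.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information estimated from a two-dimensional joint
    intensity histogram of two images of equal size."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

On the GPU, the expensive part is exactly the joint histogram, which the paper builds with vertex texture fetches.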
Joint registration of multiple images using entropic graphs
Author(s):
Bing Ma;
Ramakrishnan Narayanan;
Hyunjin Park;
Alfred O. Hero;
Peyton H. Bland;
Charles R. Meyer
Show Abstract
Registration of medical images (intra- or multi-modality) is the first step before any analysis is performed.
The analysis includes treatment monitoring, diagnosis, volumetric measurements or classification to mention a
few. While pairwise registration, i.e., aligning a floating image to a fixed reference, is straightforward, it is not
immediately clear what cost measures could be exploited for the groupwise alignment of several images (possibly
multimodal) simultaneously. Recently, however, there has been increasing interest in this problem applied to atlas
construction, statistical shape modeling, or simply joint alignment of images to get a consistent correspondence
of voxels across all images based on a single cost measure.
The aim of this paper is twofold: a) to propose a cost function, alpha mutual information computed using
entropic graphs, that is a natural extension of Shannon mutual information for pairwise registration, and b) to
compare its performance with pairwise registration of the image set. We show that this measure can be
reliably used to jointly align several images to a common reference. We also test its robustness by comparing
registration errors for the registration process repeated at varying noise levels.
In our experiments we used simulated data, applying different B-spline based geometric transformations to the
same image and adding independent filtered Gaussian noise to each image. Non-rigid registration was employed
with Thin Plate Splines (TPS) as the geometric interpolant.
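The entropic-graph idea can be sketched with a minimum-spanning-tree estimator of Rényi alpha-entropy: the MST length of a sample cloud grows with its spread, and a power-law transform of that length estimates the entropy. This is a heavily simplified sketch: the normalizing constant is omitted, so values are only comparable between sample sets, and the paper's full alpha-MI construction over joint intensity samples is not reproduced.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_renyi_entropy(samples, alpha=0.5):
    """Unnormalized MST-based estimate of the Renyi alpha-entropy of a
    d-dimensional sample cloud."""
    n, d = samples.shape
    gamma = d * (1.0 - alpha)          # edge-weight exponent
    dist = distance_matrix(samples, samples)
    mst = minimum_spanning_tree(dist)  # sparse matrix of MST edge weights
    length = (mst.data ** gamma).sum()
    return float(np.log(length / n ** alpha) / (1.0 - alpha))
```

The appeal for joint registration is that the estimator works directly on samples in arbitrary dimension, so several images can contribute coordinates to one feature vector per voxel.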
Intensity-based image registration using Earth Mover's Distance
Author(s):
Christophe Chefd'hotel;
Guillaume Bousquet
Show Abstract
We introduce two image alignment measures using Earth Mover's Distance (EMD) as a metric on the space of
joint intensity distributions. Our first approach consists of computing EMD between a joint distribution and the
product of its marginals. This yields a measure of statistical dependence comparable to Mutual Information, a
criterion widely used for multimodal image registration. When a priori knowledge is available, we also propose
to compute EMD between the observed distribution and a joint distribution estimated from pairs of pre-aligned
images. EMD is a cross-bin dissimilarity function and generally offers a generalization ability superior
to previously proposed metrics, such as the Kullback-Leibler divergence. Computing EMD amounts to solving an
optimal mass transport problem whose solution can be obtained very efficiently using an algorithm recently
proposed by Ling and Okada [10]. We performed a preliminary experimental evaluation of this approach with
real and simulated MR images. Our results show that EMD-based measures can be efficiently applied to rigid
registration tasks.
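The optimal mass transport problem underlying EMD can be solved directly as a linear program, which makes a useful brute-force reference for small histograms; the paper instead relies on the much faster tree-based algorithm of Ling and Okada.

```python
import numpy as np
from scipy.optimize import linprog

def earth_movers_distance(p, q, cost):
    """Earth Mover's Distance between discrete distributions p (length n)
    and q (length m), given an n-by-m ground-cost matrix, by solving the
    transport LP: minimize sum_ij c_ij x_ij subject to marginals."""
    n, m = cost.shape
    c = cost.ravel()
    # equality constraints: row sums of x equal p, column sums equal q
    a_eq = np.zeros((n + m, n * m))
    for i in range(n):
        a_eq[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):
        a_eq[n + j, j::m] = 1.0
    b_eq = np.concatenate([p, q])
    res = linprog(c, A_eq=a_eq, b_eq=b_eq, bounds=(0, None), method="highs")
    return float(res.fun)
```

Applying this to a joint intensity histogram versus the product of its marginals yields the cross-bin dependence measure proposed in the paper.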
Improved elastic medical image registration using mutual information
Author(s):
Konstantin Ens;
Hanno Schumacher;
Astrid Franz;
Bernd Fischer
Show Abstract
One of the future-oriented areas of medical image processing is to develop fast and exact algorithms for image
registration. By combining multi-modal images we are able to compensate for the disadvantages of one imaging modality
with the advantages of another. For instance, a Computed Tomography (CT) image containing the
anatomy can be combined with the metabolic information of a Positron Emission Tomography (PET) image. It is
quite conceivable that a patient will not have the same position in both imaging systems. Furthermore, some
regions, for instance in the abdomen, can vary in shape and position due to different filling of the rectum. Thus a
multi-modal image registration is needed to calculate a deformation field for one image in order to maximize the
similarity between the two images, as described by a so-called distance measure.
In this work, we present a method to adapt a multi-modal distance measure, here mutual information (MI),
with weighting masks. These masks are used to enhance relevant image structures and suppress image regions
which otherwise would disturb the registration process. The performance of our method is tested on phantom
data and real medical images.
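Weighting mutual information with a mask can be sketched by building the joint histogram with per-voxel weights, so relevant structures contribute more and disturbing regions are suppressed. A minimal sketch under that assumption; the paper's exact weighting scheme is not reproduced.

```python
import numpy as np

def weighted_mutual_information(img_a, img_b, mask, bins=32):
    """Mutual information from a joint histogram in which each voxel
    pair is weighted by a mask value in [0, 1]."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, weights=mask.ravel())
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Setting the mask to all ones recovers plain MI, which makes it easy to compare masked and unmasked registration runs.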
Gabor feature-based registration: accurate alignment without fiducial markers
Author(s):
Nestor A. Parra;
Carlos A. Parra
Show Abstract
Accurate registration of diagnosis and treatment images is a critical factor for the success of radiotherapy. This
study presents a feature-based image registration algorithm that uses a branch- and-bound method to search the
space of possible transformations, as well as a Hausdorff distance metric to evaluate their quality. This distance
is computed in the space of responses to a circular Gabor filter, in which, for each point of interest in both
reference and subject images, a vector of complex responses to different Gabor kernels is computed. Each kernel
is generated using different frequencies and variances of the Gabor function, which determine corresponding
regions in the images to be registered by virtue of their rotation-invariance characteristics. Responses to circular
Gabor filters have also been reported in the literature as a successful tool for image classification; in this
particular application we utilize them for patient positioning in cranial radiotherapy. For test purposes, we use
2D portal images acquired with an electronic portal imaging device (EPID). Our method achieves EPID-EPID
registration errors under 0.2 mm for translations and 0.05 deg for rotations (subpixel accuracy). We use
fiducial marker registration as the ground truth for comparisons. Registration times average 2.70 seconds based
on 1400 feature points using a 1.4 GHz processor.
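The Hausdorff distance used here to score candidate transformations can be sketched in a few lines (illustrative only; the paper computes it in the space of Gabor filter responses rather than raw coordinates):

```python
import numpy as np

def hausdorff(A, B):
    """Symmetric Hausdorff distance between two point sets of shape
    (n, d) and (m, d): the worst-case nearest-neighbour distance,
    taken in both directions."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return max(d.min(axis=1).max(),   # directed A -> B
               d.min(axis=0).max())   # directed B -> A
```

Because the measure is a max over minima, a branch-and-bound search can prune a transformation as soon as a partial evaluation already exceeds the best distance found so far.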
Method of image fusion for radiotherapy
Author(s):
Shoupu Chen;
Jay S. Schildkraut;
Lawrence A. Ray
Show Abstract
Patient setup error is one of the major causes of tumor position uncertainty in radiotherapy for extracranial targets,
which can result in a decreased radiation dose to the tumor and an increased dose to the normal tissues. Therefore, it
is a common practice to verify the patient setup accuracy by comparing portal images with a digitally reconstructed
radiograph (DRR) reference image. This paper proposes a practical method of portal image and DRR fusion for
patient setup verification. Because of the mean intensity difference between the inside and outside of the actual
radiation region in the portal image, the image fusion in this work is accomplished by applying an image registration
process to the contents inside or outside of the actual radiation region in the portal image and to the corresponding
contents extracted from the DRR image. The image fusion can also be accomplished statistically by applying
two separate image registration processes to the inside and outside of the actual radiation regions. To segment the
image registration contents, automatic or semiautomatic region delineation schemes are employed that aim at
minimizing users' operation burden, while at the same time maximizing the use of human intelligence. To achieve
an accurate and fast delineation, this paper proposes using an adaptive weight in the conventional level-set contour-finding
algorithm for the automatic delineation scheme, as well as adaptive banding in the conventional
Intelligent Scissors algorithm for the semiautomatic delineation scheme.
Three-dimensional histopathology of lung cancer with multimodality image registration
Author(s):
Jessica de Ryk;
Jamie Weydert;
Gary Christensen;
Jacqueline Thiesse;
Eman Namati;
Joseph Reinhardt;
Eric Hoffman;
Geoffrey McLennan
Show Abstract
Identifying the three-dimensional content of non-small cell lung cancer tumors is a vital step in the pursuit of
understanding cancer growth, development and response to treatment. The majority of non-small cell lung cancer
tumors are histologically heterogeneous, and consist of the malignant tumor cells, necrotic tumor cells, fibroblastic
stromal tissue, and inflammation. Geometric and tissue density heterogeneity are utilized in computed tomography (CT)
representations of lung tumors for distinguishing between malignant and benign nodules. However, the correlation
between radiographic heterogeneity and corresponding histological content has been limited. In this study, a
multimodality dataset of human lung cancer is established, enabling the direct comparison between histologically
identified tissue content and micro-CT representation. Registration of these two datasets is achieved through the
incorporation of a large scale, serial microscopy dataset. This dataset serves as the basis for the rigid and non-rigid
registrations required to align the radiological and histological data. The resulting comprehensive, three-dimensional
dataset includes radio-density, color and cellular content of a given lung tumor. Using the registered datasets, neural
network classification is applied to determine a statistical separation between cancerous and non-cancerous tumor
regions in micro-CT.
Oriented active shape models for 3D segmentation in medical images
Author(s):
Jiamin Liu;
Jayaram K. Udupa
Show Abstract
Active Shape Models (ASM) have been applied to various segmentation tasks in medical imaging, most successfully
in 2D segmentation of objects that have a fairly consistent shape. However, several difficulties arise when
extending 2D ASM to 3D: (1) difficulty in 3D labeling, (2) the requirement of a large number of training samples,
(3) the challenging problem of landmark correspondence in 3D, (4) inefficient initialization and optimization in
3D. This paper addresses the 3D segmentation problem by using a small number of effective 2D statistical models
called oriented ASM (OASM). We demonstrate that a small number of 2D OASM models, which are derived from
a chunk of a contiguous set of slices, are sufficient to capture the shape variation between slices and individual
objects. Each model can be matched rapidly to a new slice by using the OASM algorithm1. Our experiments in
segmenting breast and bone of the foot in MR images indicate the following: (1) The accuracy of segmentation
via our method is much better than that of 2DASM-based segmentation methods.2 (2) Far fewer landmarks are
required compared with thousands of landmarks needed in true 3D ASM. Therefore, far fewer training samples
are required to capture details. (3) Our method is computationally slightly more expensive than the 2D method2
owing to its 2 level dynamic programming (2LDP) algorithm.
Morphology-based three-dimensional segmentation of coronary artery tree from CTA scans
Author(s):
Diem Phuc T. Banh;
Iacovos S. Kyprianou;
Sophie Paquerault;
Kyle J. Myers
Show Abstract
We developed an algorithm based on a rule-based threshold framework to segment the coronary arteries from
angiographic computed tomography (CTA) data. Computerized segmentation of the coronary arteries is a
challenging procedure due to the presence of diverse anatomical structures surrounding the heart on cardiac
CTA data. The proposed algorithm incorporates various levels of image processing and organ information
including region, connectivity and morphology operations. It consists of three successive stages. The first stage
involves the extraction of the three-dimensional scaffold of the heart envelope. This stage is semiautomatic
requiring a reader to review the CTA scans and manually select points along the heart envelope in slices. These
points are further processed using a surface spline-fitting technique to automatically generate the heart envelope.
The second stage consists of segmenting the left heart chambers and coronary arteries using grayscale threshold,
size and connectivity criteria. This is followed by applying morphology operations to further detach the left and
right coronary arteries from the aorta. In the final stage, the 3D vessel tree is reconstructed and labeled using
an Isolated Connected Threshold technique. The algorithm was developed and tested on a patient coronary
artery CTA that was graciously shared by the Department of Radiology of the Massachusetts General Hospital.
The test showed that our method consistently segmented the vessels above 79% of the maximum gray-level and
automatically extracted 55 of the 58 coronary segments that can be seen on the CTA scan by a reader. These
results are an encouraging step toward our objective of generating high resolution models of the male and female
heart that will be subsequently used as phantoms for medical imaging system optimization studies.
Subcortical structure segmentation using probabilistic atlas priors
Author(s):
Sylvain Gouttard;
Martin Styner;
Sarang Joshi;
Rachel G. Smith;
Heather Cody Hazlett;
Guido Gerig
Show Abstract
The segmentation of the subcortical structures of the brain is required for many forms of quantitative neuroanatomic
analysis. The volumetric and shape parameters of structures such as lateral ventricles, putamen,
caudate, hippocampus, pallidus and amygdala are employed to characterize a disease or its evolution. This paper
presents a fully automatic segmentation of these structures via non-rigid registration of a probabilistic atlas
prior, alongside a comprehensive validation.
Our approach is based on an unbiased diffeomorphic atlas with probabilistic spatial priors built from a
training set of MR images with corresponding manual segmentations. The atlas building computes an average
image along with transformation fields mapping each training case to the average image. These transformation
fields are applied to the manually segmented structures of each case in order to obtain a probabilistic map
on the atlas. When applying the atlas for automatic structural segmentation, an MR image is first intensity
inhomogeneity corrected, skull stripped and intensity calibrated to the atlas. Then the atlas image is registered
to the image using an affine followed by a deformable registration matching the gray level intensity. Finally, the
registration transformation is applied to the probabilistic maps of each structure, which are then thresholded
at 0.5 probability.
Using manual segmentations for comparison, measures of volumetric differences show high correlation with
our results. Furthermore, the dice coefficient, which quantifies the volumetric overlap, is higher than 62% for all
structures and is close to 80% for basal ganglia. The intraclass correlation coefficient computed on these same
datasets shows a good inter-method correlation of the volumetric measurements. Using a dataset of a single
patient scanned 10 times on 5 different scanners, reliability is shown with a coefficient of variation of less than 2
percent over the whole dataset. Overall, these validation and reliability studies show that our method accurately
and reliably segments almost all structures. Only the hippocampus and amygdala segmentations exhibit relatively
low correlation with the manual segmentation in at least one of the validation studies, whereas they still show
appropriate dice overlap coefficients.
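The two operations at the core of this pipeline's final step and validation, thresholding a registered probabilistic map at 0.5 and scoring the result with the Dice coefficient, can be sketched as follows (an illustration, not the authors' code):

```python
import numpy as np

def segment_from_prob(prob_map, threshold=0.5):
    """Binary structure segmentation from a registered probabilistic
    atlas map, thresholded at 0.5 probability as in the pipeline."""
    return prob_map >= threshold

def dice(seg, ref):
    """Dice coefficient: volumetric overlap between an automatic
    segmentation and a manual reference, in [0, 1]."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())
```

A Dice value above 0.62 for all structures, and near 0.80 for the basal ganglia, is what the abstract reports for this kind of overlap measure.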
Fovea and vessel detection via a multi-resolution parameter transform
Author(s):
Katia Estabridis;
Rui Defigueiredo
Show Abstract
A multi-resolution, parallel approach to retinal blood vessel detection has been introduced that can also be used as a
discriminant for fovea detection. Localized adaptive thresholding and a multi-resolution, multi-window Radon
transform (RT) are utilized to detect the retinal vascular system. Multi-window parameter transforms are intrinsically
parallel and offer increased performance over conventional transforms. Large vessels are extracted in low-resolution
mode, whereas minor vessels are extracted in high-resolution mode further increasing computational efficiency. The
image is adaptively thresholded and then the multi-window RT is applied at the different resolution levels. Results from
each level are combined and morphologically processed to improve final performance.
A systematic approach has been implemented to perform fovea detection. The algorithm relies on a probabilistic
method to perform initial segmentation. The intensity image is re-mapped into probability space to detect areas with
low-probability of occurrence. Intensity and probability information are coupled to produce a binary image that
contains potential fovea candidates. The candidates are discriminated based upon their location within the blood vessel
network.
Automatic brain segmentation in rhesus monkeys
Author(s):
Martin Styner;
Rebecca Knickmeyer;
Sarang Joshi;
Christopher Coe;
Sarah J. Short;
John Gilmore
Show Abstract
Many neuroimaging studies are applied to primates as pathologies and environmental exposures can be studied
in well-controlled settings and environment. In this work, we present a framework for both the semi-automatic
creation of a rhesus monkey atlas and a fully automatic segmentation of brain tissue and lobar parcellation. We
determine the atlas from training images by iterative, joint deformable registration into an unbiased average
image. On this atlas, probabilistic tissue maps and a lobar parcellation are defined. The atlas is then applied via affine,
followed by deformable registration. The affinely transformed atlas is employed for a joint T1/T2 based tissue
classification. The deformed atlas parcellation masks the tissue segmentations to define the parcellation. Other
regional definitions on the atlas can also straightforwardly be used for segmentation.
We successfully built average atlas images for the T1 and T2 datasets using a developmental training dataset
of 18 cases aged 16-34 months. The atlas clearly exhibits an enhanced signal-to-noise ratio compared to the
original images. The results further show that the cortical folding variability in our data is highly limited. Our
segmentation and parcellation procedure was successfully re-applied to all training images, as well as applied
to over 100 additional images. The deformable registration was able to identify corresponding cortical sulcal
borders accurately.
Even though the individual methods used in this segmentation framework have been applied before on
human data, their combination is novel, as is their adaptation and application to rhesus monkey MRI data. The
reduced variability present in the primate data results in a segmentation pipeline that exhibits high stability and
anatomical accuracy.
Segmentation and tracking of human sperm cells using spatio-temporal representation and clustering
Author(s):
Michael Berezansky;
Hayit Greenspan;
Daniel Cohen-Or;
Osnat Eitan
Show Abstract
This work proposes an algorithm for segmentation and tracking of human sperm. The algorithm analyzes video
sequences containing multiple moving sperm cells and produces video segmentation maps and moving-object trajectories.
Sperm trajectory analysis is widely used in computer-aided sperm analysis (CASA) systems. Several studies show
that CASA systems face a problem when dealing with the "actual" or "perceived" collisions of sperm cells. The proposed
algorithm reduces the probability of wrong trajectory construction related to collisions. We represent the video data
using a 4-dimensional model containing spatial and temporal coordinates and the direction of optical flow vectors. The
video sequence is divided into a succession of overlapping subsequences. The video data of each subsequence is
grouped in the feature domain using the mean shift procedure. We identify clusters corresponding to moving objects in
each subsequence. The complete trajectories are reconstructed by matching clusters that are most likely to represent
the same object in adjacent subsequences. The clusters are matched using heuristics based on cluster overlaps,
and by solving a specially formulated linear assignment problem. Tracking results are evaluated for different video
sequences containing different types of motions and collisions.
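The mean shift grouping step named above can be illustrated with a minimal (non-blurring, flat-kernel) sketch; the paper clusters 4-dimensional spatio-temporal feature vectors, but the procedure is the same as this assumed 2D version:

```python
import numpy as np

def mean_shift(points, bandwidth, iters=50):
    """Flat-kernel mean shift: each point's mode estimate moves to the
    mean of the original points within `bandwidth`; converged modes
    closer than bandwidth/2 are merged into one cluster label."""
    modes = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(modes):
            nbr = points[np.linalg.norm(points - p, axis=1) < bandwidth]
            modes[i] = nbr.mean(axis=0)
    labels = -np.ones(len(points), dtype=int)
    centers = []
    for i, m in enumerate(modes):
        for k, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels[i] = k
                break
        else:
            centers.append(m)
            labels[i] = len(centers) - 1
    return labels, np.array(centers)
```

Clusters found per subsequence would then correspond to moving objects, to be matched across adjacent subsequences for trajectory reconstruction.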
Unbiased vessel-diameter quantification based on the FWHM criterion
Author(s):
Henri Bouma;
Javier Oliván Bescós;
Anna Vilanova;
Frans A. Gerritsen
Show Abstract
The full-width at half-max (FWHM) criterion is often used for both manual and automatic quantification of the
vessel diameter in medical images. The FWHM criterion is easy to understand and it can be implemented with low
computational cost. However, it is well known that the FWHM criterion can give an over- and underestimation
of the vessel diameter. In this paper, we propose a simple and original method to create an unbiased estimation
of the vessel diameter based on the FWHM criterion, and we compare the robustness to noise of several edge
detectors. The quantitative results of our experiments show that the proposed method is more accurate and
precise than other (more complex) edge detectors, even for small vessels.
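The baseline FWHM criterion discussed here can be sketched for a 1D intensity profile across a vessel (a simple illustration; the paper's unbiased correction is not reproduced):

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a 1D vessel intensity profile,
    with linear interpolation at the two half-maximum crossings."""
    y = np.asarray(profile, dtype=float)
    y = y - y.min()                      # half-max relative to background
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]
    # interpolate the sub-sample positions of the crossings
    left = i - (y[i] - half) / (y[i] - y[i - 1]) if i > 0 else float(i)
    right = j + (y[j] - half) / (y[j] - y[j + 1]) if j < len(y) - 1 else float(j)
    return (right - left) * spacing
```

On blurred small vessels this estimate is exactly where the over- and underestimation mentioned in the abstract arises, motivating the proposed correction.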
Bronchopulmonary segments approximation using anatomical atlas
Author(s):
Sata Busayarat;
Tatjana Zrimec
Show Abstract
Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally,
determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require
volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for
sparse data by effectively using an anatomical atlas. The atlas is constructed from volumetric data and contains
accurate information about bronchopulmonary segments. A new ray-tracing-based image registration method is used for
transferring the information from the atlas to a query image. Results show that the method is able to approximate the
segments on sparse HRCT data with slice gap up to 25 millimeters.
Improved CSF classification and lesion detection in MR brain images with multiple sclerosis
Author(s):
Yulian Wolff;
Shmuel Miron M.D.;
Anat Achiron M.D.;
Hayit Greenspan
Show Abstract
The study deals with the challenging task of automatic segmentation of MR brain images with multiple sclerosis lesions
(MSL). Multi-Channel data is used, including "fast fluid attenuated inversion recovery" (fast FLAIR or FF), and
statistical modeling tools are developed, in order to improve cerebrospinal fluid (CSF) classification and to detect MSL.
Two new concepts are proposed for use within an EM framework. The first concept is the integration of prior knowledge
as it relates to tissue behavior in different MRI modalities, with special attention given to the FF modality. The second
concept deals with running the algorithm on a subset of the input that is most likely to be noise- and artifact-free data.
This enables a more reliable learning of the Gaussian mixture model (GMM) parameters for brain tissue statistics. The
proposed method focuses on the problematic CSF intensity distribution, which is a key to improved overall segmentation
and lesion detection. A level-set based active contour stage is performed for lesion delineation, using gradient and shape
properties combined with previously learned region intensity statistics. In the proposed scheme there is no need for preregistration of an atlas, a common characteristic in brain segmentation schemes. Experimental results on real data are
presented.
Automatic measuring of quality criteria for heart valves
Author(s):
Alexandru Paul Condurache;
Tobias Hahn;
Ulrich G. Hofmann;
Michael Scharfschwerdt;
Martin Misfeld;
Til Aach
Show Abstract
Patients suffering from a heart valve deficiency are often treated by replacing the valve with an artificial or
biological implant. In case of biological implants, the use of porcine heart valves is common. Quality assessment
and inspection methods are mandatory to supply the patients (and also medical research) with only the best
such xenograft implants, thus reducing the number of follow-up surgeries to replace worn-out valves. We describe
an approach for automatic in-vitro evaluation of prosthetic heart valves in an artificial circulation system. We
show how to measure the orifice area during a heart cycle to obtain an orifice curve. Different quality parameters
are then estimated on such curves.
A knowledge-guided active model method of skull segmentation on T1-weighted MR images
Author(s):
Zuyao Y. Shan;
Chia-Ho Hua;
Qing Ji;
Carlos Parra;
Xiaofei Ying;
Matthew J. Krasin;
Thomas E. Merchant;
Larry E. Kun;
Wilburn E. Reddick
Show Abstract
The skull is the anatomic landmark for patient setup in head radiation therapy. The skull is generally segmented from
CT images because CT provides better definition of the skull than MR imaging. Meanwhile, radiation therapy is
planned on MR images for soft tissue information. This study utilized a knowledge-guided active model (KAM) method
to segment the skull on MR images in order to enable radiation therapy planning with MR images as the primary
planning dataset. KAM utilized age-specific skull mesh models that were segmented from CT images using a conditional
region growing algorithm. Skull models were transformed to given MR images using an affine registration algorithm
based on normalized mutual information. The transformed mesh models actively located skull boundaries by
minimizing their total energy. The preliminary validation was performed on MR and CT images from five patients. The
KAM segmented skulls were compared with those segmented from CT images. The average image similarity (kappa
index) was 0.57. The initial validation showed that it was promising to segment skulls directly on MR images using
KAM.
Automated image segmentation using support vector machines
Author(s):
Stephanie Powell;
Vincent A. Magnotta;
Nancy C. Andreasen
Show Abstract
Neurodegenerative and neurodevelopmental diseases demonstrate problems associated with brain maturation and aging.
Automated methods to delineate brain structures of interest are required to analyze large amounts of imaging data like
that being collected in several ongoing multi-center studies. We have previously reported on using artificial neural
networks (ANN) to define subcortical brain structures including the thalamus (0.88), caudate (0.85) and the putamen
(0.81). In this work, a priori probability information was generated using Thirion's demons registration algorithm. The
input vector consisted of the a priori probability, spherical coordinates, and an iris of surrounding signal intensity values. We
have applied the support vector machine (SVM) machine learning algorithm to automatically segment subcortical and
cerebellar regions using the same input vector information. SVM architecture was derived from the ANN framework.
Training was completed using a radial-basis function kernel with gamma equal to 5.5. Training was performed using
15,000 vectors collected from 15 training images in approximately 10 minutes. The resulting support vectors were
applied to delineate 10 images not part of the training set. Relative overlap calculated for the subcortical structures was
0.87 for the thalamus, 0.84 for the caudate, 0.84 for the putamen, and 0.72 for the hippocampus. Relative overlap for the
cerebellar lobes ranged from 0.76 to 0.86. The reliability of the SVM based algorithm was similar to the inter-rater
reliability between manual raters and can be achieved without rater intervention.
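The radial-basis-function kernel named above (with gamma = 5.5) can be written out as a small sketch; this illustrates only the kernel computation, not the authors' trained voxel classifier:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=5.5):
    """RBF (Gaussian) kernel matrix K[i, j] = exp(-gamma * ||x_i - y_j||^2),
    the kernel reported for the SVM training (gamma = 5.5)."""
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)
```

In an SVM, this Gram matrix over the ~15,000 training feature vectors is what the optimizer works with; at test time only rows against the retained support vectors are needed.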
Parsimonious model selection for tissue classification: a DTI study of zebrafish
Author(s):
Raisa Z. Freidlin;
Michal E. Komlosh;
Murray H. Loew;
Peter J. Basser
Show Abstract
One aim of this work is to investigate the feasibility of using a hierarchy of models to describe diffusion tensor
MRI data. Parsimonious model selection criteria are used to choose among different models of diffusion within
tissue. Second, based on this information, we assess whether we can perform simultaneous tissue segmentation
and classification. The proposed hierarchical framework used for parsimonious model selection is based on the
F-test, adapted from Snedecor.
Diffusion Magnetic Resonance Microscopy (MRM) provides near-microscopic resolution without relying on
a sample's optical transparency for image formation. Diffusion MRM is a noninvasive imaging technique for
quantitative analysis of intrinsic features of tissues. Thus, we propose using Diffusion MRM to characterize
normal tissue structure in adult zebrafish, and possibly subtle anatomical or structural differences between
normals and knockouts.
Both numerical phantoms and diffusion weighted image (DWI) data obtained from adult zebrafish are used
to test this model selection framework.
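The F-test underlying this parsimonious selection between nested diffusion models can be sketched as follows (parameter names are assumptions for illustration, not the paper's notation):

```python
def f_statistic(rss_simple, rss_complex, p_simple, p_complex, n):
    """F statistic for nested model comparison: does the more complex
    model (p_complex parameters) reduce the residual sum of squares
    enough over the simpler one to justify its extra parameters,
    given n data points?"""
    num = (rss_simple - rss_complex) / (p_complex - p_simple)
    den = rss_complex / (n - p_complex)
    return num / den
```

The statistic is compared against an F distribution with (p_complex - p_simple, n - p_complex) degrees of freedom; a large value favors keeping the richer tensor model for that voxel, which is how model selection doubles as tissue classification.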
A dynamic multiple thresholding method for automated breast boundary detection in digitized mammograms
Author(s):
Yi-Ta Wu;
Chuan Zhou;
Lubomir M. Hadjiiski;
Jiazheng Shi;
Jun Wei;
Chintana Paramagul;
Berkman Sahiner;
Heang-Ping Chan
Show Abstract
We have previously developed a breast boundary detection method by using a gradient-based method to search for
the breast boundary (GBB). In this study, we developed a new dynamic multiple thresholding based breast boundary
detection system (MTBB). The initial breast boundary (MTBB-Initial) is obtained based on the analysis of multiple
thresholds on the image. The final breast boundary (MTBB-Final) is obtained based on the initial breast boundary and
the gradient information from horizontal and the vertical Sobel filtering. In this way, it is possible to accurately segment
the breast area from the background region. The accuracy of the breast boundary detection algorithm was evaluated by
comparison with an experienced radiologist's manual segmentation using three performance metrics: the Hausdorff
distance (HDist), the average minimum Euclidean distance (AMinDist), and the area overlap (AOM). It was found
that 68%, 85%, and 90% of images have HDist errors less than 6 mm for GBB, MTBB-Initial, and MTBB-Final,
respectively. Ninety-five percent, 96%, and 97% of the images have AMinDist errors less than 1.5 mm for GBB,
MTBB-Initial, and MTBB-Final, respectively. Ninety-six percent, 97%, and 99% of the images have AOM values
larger than 0.9 for GBB, MTBB-Initial, and MTBB-Final, respectively. It was found that the performance of the
proposed method was improved in comparison to our previous method.
An automatic method for fast and accurate liver segmentation in CT images using a shape detection level set method
Author(s):
Jeongjin Lee;
Namkug Kim;
Ho Lee;
Joon Beom Seo;
Hyung Jin Won;
Yong Moon Shin;
Yeong Gil Shin
Show Abstract
Automatic liver segmentation is still a challenging task due to the ambiguity of liver boundary and the complex context
of nearby organs. In this paper, we propose a faster and more accurate way of liver segmentation in CT images with an
enhanced level set method. The speed image for level-set propagation is smoothly generated by increasing the number of
iterations in anisotropic diffusion filtering. This prevents the level-set propagation from stopping in front of local
minima, which prevail in liver CT images due to irregular intensity distributions of the interior liver region. The
curvature term of the shape-modeling level-set method captures the shape variations of the liver well across slices. Finally,
a rolling-ball algorithm is applied to include enhanced vessels near the liver boundary. Our approach was tested and
compared to manual segmentation results of eight CT scans with 5mm slice distance using the average distance and
volume error. The average distance error between corresponding liver boundaries is 1.58 mm and the average volume
error is 2.2%. The average processing time for the segmentation of each slice is 5.2 seconds, which is much faster than
the conventional ones. Accurate and fast result of our method will expedite the next stage of liver volume quantification
for liver transplantations.
Multiscale fuzzy C-means image classification for multiple weighted MR images for the assessment of photodynamic therapy in mice
Author(s):
Hesheng Wang;
Denise Feyes;
John Mulvihill;
Nancy Oleinick;
Gregory MacLennan;
Baowei Fei
Show Abstract
We are investigating in vivo small animal imaging and analysis methods for the assessment of photodynamic therapy
(PDT), an emerging therapeutic modality for cancer treatment. Multiple weighted MR images were acquired from
tumor-bearing mice pre- and post-PDT and 24-hour after PDT. We developed an automatic image classification method
to differentiate live, necrotic and intermediate tissues within the treated tumor on the MR images. We used a multiscale
diffusion filter to process the MR images before classification. A multiscale fuzzy C-means (FCM) classification method
was applied along the scales. The objective function of the standard FCM was modified to allow multiscale classification
processing where the result from a coarse scale is used to supervise the classification in the next scale. The multiscale
fuzzy C-means (MFCM) method takes noise levels and partial volume effects into account in the classification. The
method was validated by simulated MR images with various noise levels. For simulated data, the classification method
achieved 96.0 ± 1.1% overlap ratio. For real mouse MR images, the classification results of the treated tumors were
validated by histologic images. The overlap ratios were 85.6 ± 5.1%, 82.4 ± 7.8% and 80.5 ± 10.2% for the live, necrotic,
and intermediate tissues, respectively. The MR imaging and the MFCM classification methods may provide a useful tool
for the assessment of the tumor response to photodynamic therapy in vivo.
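The standard FCM iteration that the MFCM method modifies can be sketched for 1D intensities (a minimal illustration with deterministic initialization; the paper's multiscale supervision term is not reproduced):

```python
import numpy as np

def fuzzy_cmeans(x, c=3, m=2.0, iters=100):
    """Standard fuzzy C-means on 1D intensities: alternate membership
    and centroid updates minimising sum_ik u_ik^m * (x_i - v_k)^2."""
    x = np.asarray(x, dtype=float)
    v = np.linspace(x.min(), x.max(), c)           # initial centroids
    for _ in range(iters):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12
        u = 1.0 / (d ** (2.0 / (m - 1.0)))         # unnormalised memberships
        u /= u.sum(axis=1, keepdims=True)
        v = (u ** m).T @ x / (u ** m).sum(axis=0)  # centroid update
    return u, v
```

In the multiscale variant, the memberships obtained at a coarse (heavily diffused) scale supervise the objective at the next finer scale, which is what gives robustness to noise and partial volume effects.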
Fully automatic segmentation of liver from multiphase liver CT
Author(s):
Yalin Zheng;
Xiaoyun Yang;
Xujiong Ye;
Xinyu Lin
Show Abstract
Multidetector row CT, multiphase CT in particular, has been widely accepted as a sensitive imaging modality in
the detection of liver cancer. Segmentation of liver from CT images is of great importance in terms of accurate
detection of tumours, volume measurement, and pre-surgical planning. The segmentation of the liver, however, remains
an unsolved problem due to the complicated nature of liver CT, such as imaging noise, similar intensity to
its adjacent structures and large variations of contrast kinetics and localised geometric features. The purpose
of this paper is to present our newly developed algorithm aiming to tackle this problem. In our method, a CT
image was first smoothed by a geometric diffusion method; the smoothed image was then segmented by thresholding
operators. In order to gain optimal segmentation, a novel method was developed to choose threshold values
based on both the anatomical knowledge and features of liver CT. Then morphological operators were applied
to fill the holes in the generated binary image and to disconnect the liver from other unwanted adjoining
structures. After this process, a so-called "2.5D region overlapping" filter was introduced to further remove
unwanted regions. The resulting 3D region was regarded as the final segmentation of the liver region. This
method was applied to venous phase CT data of 45 subjects (30 patients and 15 asymptomatic subjects). Our
results show good agreement with the annotations delineated manually by radiologists: the volume overlap ratio
is 87.7% on average and the correlation coefficient between them is 98.1%.
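The threshold-then-morphology core of such a pipeline can be sketched as follows (an assumed minimal version using SciPy's ndimage routines, not the authors' full method with its knowledge-based threshold selection and "2.5D region overlapping" filter):

```python
import numpy as np
from scipy import ndimage

def segment_largest_region(image, t_low, t_high):
    """Threshold, fill holes, and keep the largest connected component:
    the basic threshold-plus-morphology step of organ extraction."""
    binary = (image >= t_low) & (image <= t_high)
    binary = ndimage.binary_fill_holes(binary)          # close interior holes
    labels, n = ndimage.label(binary)                   # connected components
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (1 + int(np.argmax(sizes)))        # largest component
```

Real liver segmentation additionally needs opening/erosion to detach adjoining structures of similar intensity, which is where the simple sketch above would fail.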
Segmentation of prostate biopsy needles in transrectal ultrasound images
Author(s):
Dagmar Krefting;
Barbara Haupt;
Thomas Tolxdorff;
Carsten Kempkensteffen;
Kurt Miller
Show Abstract
Prostate cancer is the most common cancer in men. Tissue extraction at different locations (biopsy) is the
gold-standard for diagnosis of prostate cancer. These biopsies are commonly guided by transrectal ultrasound
imaging (TRUS). Exact location of the extracted tissue within the gland is desired for more specific diagnosis
and provides better therapy planning. While the orientation and position of the needle within a clinical TRUS
image are limited, the apparent length and visibility of the needle vary strongly. Marker lines are present, and
tissue inhomogeneities and deflection artefacts may appear. Simple intensity-, gradient-, or edge-detection-based
segmentation methods fail. Therefore a multivariate statistical classifier is implemented. The independent
feature model is built by supervised learning using a set of manually segmented needles. The feature space is
spanned by common binary object features such as size and eccentricity, as well as imaging-system-dependent features
like distance and orientation relative to the marker line. The object extraction is done by multi-step binarization
of the region of interest. The ROI is automatically determined at the beginning of the segmentation and marker
lines are removed from the images. The segmentation itself is realized by scale-invariant classification using
maximum likelihood estimation with the Mahalanobis distance as the discriminator. The technique presented here could
be successfully applied in 94% of 1835 TRUS images from 30 tissue extractions. It provides a robust method for
biopsy needle localization in clinical prostate biopsy TRUS images.
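As an illustration of the classification step described above (not the authors' implementation), a Mahalanobis-distance classifier over object features can be sketched as follows; the (size, eccentricity) feature pair, the training distribution, and the acceptance threshold are all hypothetical:

```python
import numpy as np

def fit_feature_model(training_features):
    """Estimate the mean and inverse covariance of needle features
    (the supervised learning step on manually segmented needles)."""
    mu = training_features.mean(axis=0)
    cov = np.cov(training_features, rowvar=False)
    return mu, np.linalg.inv(cov)

def mahalanobis(x, mu, cov_inv):
    """Mahalanobis distance of a feature vector to the needle model."""
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

def classify(candidates, mu, cov_inv, threshold=3.0):
    """Keep candidate objects whose features lie close to the needle model."""
    return [c for c in candidates if mahalanobis(c, mu, cov_inv) <= threshold]

# Toy training set: hypothetical (size, eccentricity) features of needles.
rng = np.random.default_rng(0)
train = rng.normal([100.0, 0.9], [10.0, 0.05], size=(50, 2))
mu, cov_inv = fit_feature_model(train)

# A needle-like candidate is accepted; a small round blob is rejected.
accepted = classify([np.array([102.0, 0.88]), np.array([20.0, 0.2])], mu, cov_inv)
```

Thresholding the Mahalanobis distance corresponds to a maximum-likelihood decision under a Gaussian feature model, which is the kind of discriminator the abstract describes.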
Improved livewire method for segmentation on low contrast and noisy images
Author(s):
David Chen;
Jianhua Yao
Show Abstract
Fully automatic segmentation of medical images often generates unreliable results, so we must rely on semi-automatic
methods that use both user input and boundary refinement to produce a more accurate result. In this paper, we present an
improved livewire method for noisy regions of interest with low contrast boundaries. The first improvement is an
adaptive search space, which minimizes the area required for graph generation, combined with directional graph
searching, which speeds up shortest path finding. The second improvement is an enhanced cost function that considers only
the local maximum gradient within the search area, which prevents interference from objects that are not of interest.
The third improvement is the on-the-fly training based on gradient histogram to prevent attraction of the contour to
strong edges that are not part of the actual contour. We carried out tests between the original and our improved version
of livewire. The segmentation was validated on phantom images and also against manual segmentation defined by
experts on uterine leiomyomas MRI. Our results show that, on average, our method reduces the time to completion by
96% with improved accuracy up to 63%.
Level sets on non-planar manifolds for ridge detection on isosurfaces
Author(s):
Sovira Tan;
Jianhua Yao;
Michael M. Ward M.D.;
Lawrence Yao M.D.;
Ronald M. Summers M.D.
Show Abstract
We describe an algorithm for evolving a level set on a non-planar manifold like the isosurface of a 3D object. The
surface is represented by a triangular mesh and the feature that guides our level set via a speed function is its curvature.
We overcome the difficulty of computing the gradient and curvature of the level set distance function on a non-planar,
non-Cartesian mesh by performing the calculations locally in neighborhoods small enough to be considered planar.
Moreover we use a least squares estimation of derivatives to replace finite differences and achieve better accuracy. The
algorithm was motivated by our need to detect the ridge lines of vertebral bodies. The advantage of using level sets is
that they are capable of producing a continuous ridge line despite noise and gaps in the ridge. We tested our algorithm on
40 vertebral bodies (80 ridge lines). Using the same set of parameters, 76 ridge lines showed no noticeable mistakes.
For the remaining 4, we had to change the parameters of the speed function sigmoid to correct small under- or over-segmentation.
To further test our algorithm we designed a synthetic surface with large curvature and to which we added
noise and a ridge. The level set was able to evolve on the surface and stop at the ridge. Tests on synthetic cylinders with
a ground truth ridge and to which we added noise indicate that the level set has good accuracy.
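The least-squares derivative estimation mentioned above can be sketched in a simplified planar setting (a hedged illustration, not the authors' mesh code): scattered neighbors of a vertex, assumed already projected onto the local tangent plane, determine the gradient through an ordinary least-squares plane fit:

```python
import numpy as np

def ls_gradient(points, values):
    """Least-squares gradient of a scalar field sampled at scattered 2D
    neighbors (vertices assumed projected onto the local tangent plane).
    Fits f(x, y) ~ f0 + gx*x + gy*y and returns (gx, gy)."""
    A = np.column_stack([np.ones(len(points)), points])
    coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
    return coeffs[1:]

# On a linear field f = 2x - 3y + 1 the gradient is recovered exactly,
# even though the sample points form no regular grid.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])
vals = 2.0 * pts[:, 0] - 3.0 * pts[:, 1] + 1.0
g = ls_gradient(pts, vals)
```

Unlike finite differences, the fit needs no Cartesian neighbor layout, which is what makes this kind of estimate usable on a triangular mesh.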
Segmentation of complex objects with non-spherical topologies from volumetric medical images using 3D livewire
Author(s):
Kelvin Poon;
Ghassan Hamarneh;
Rafeef Abugharbieh
Show Abstract
Segmentation of 3D data is one of the most challenging tasks in medical image analysis. While reliable automatic
methods are typically preferred, their success is often hindered by poor image quality and significant
variations in anatomy. Recent years have thus seen an increasing interest in the development of semi-automated
segmentation methods that combine computational tools with intuitive, minimal user interaction. In an earlier
work, we introduced a highly automated technique for medical image segmentation, where a 3D extension of the
traditional 2D Livewire was proposed. In this paper, we present an enhanced and more powerful 3D Livewire-based
segmentation approach with new features designed to primarily enable the handling of complex object
topologies that are common in biological structures. The point ordering algorithm we proposed earlier, which
automatically pairs up seedpoints in 3D, is improved in this work such that multiple sets of points are allowed
to simultaneously exist. Point sets can now be automatically merged and split to accommodate for the presence
of concavities, protrusions, and non-spherical topologies. The robustness of the method is further improved by
extending the 'turtle algorithm', presented earlier, by using a turtle-path pruning step. Tests on both synthetic
and real medical images demonstrate the efficiency, reproducibility, accuracy, and robustness of the proposed
approach. Among the examples illustrated is the segmentation of the left and right ventricles from a T1-weighted
MRI scan, where an average task time reduction of 84.7% was achieved when compared to a user performing 2D
Livewire segmentation on every slice.
Automated localization of periventricular and subcortical white matter lesions
Author(s):
Fedde van der Lijn;
Meike W. Vernooij;
M. Arfan Ikram;
Henri A. Vrooman;
Daniel Rueckert;
Alexander Hammers;
Monique M. B. Breteler;
Wiro J. Niessen
Show Abstract
It is still unclear whether periventricular and subcortical white matter lesions (WMLs) differ in etiology or clinical
consequences. Studies addressing this issue would benefit from automated segmentation and localization
of WMLs. Several papers have been published on WML segmentation in MR images. Automated localization
however, has not been investigated as much. This work presents and evaluates a novel method to label segmented
WMLs as periventricular and subcortical.
The proposed technique combines tissue classification and registration-based segmentation to outline the ventricles
in MRI brain data. The segmented lesions can then be labeled into periventricular WMLs and subcortical
WMLs by applying region growing and morphological operations.
The technique was tested on scans of 20 elderly subjects in which neuro-anatomy experts manually segmented
WMLs. Localization accuracy was evaluated by comparing the results of the automated method with a manual
localization. Similarity indices and volumetric intraclass correlations between the automated and the manual
localization were 0.89 and 0.95 for periventricular WMLs and 0.64 and 0.89 for subcortical WMLs, respectively.
We conclude that this automated method for WML localization performs well to excellent in comparison to the
gold standard.
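A much-simplified sketch of the labeling idea (the paper's region growing from the ventricles is replaced here by a morphological dilation, and the margin parameter is a made-up illustration value):

```python
import numpy as np
from scipy import ndimage

def label_wmls(lesion_mask, ventricle_mask, margin_voxels=3):
    """Label lesion components as periventricular when they touch a
    dilated ventricle region, otherwise subcortical."""
    near_ventricles = ndimage.binary_dilation(ventricle_mask, iterations=margin_voxels)
    labels, n = ndimage.label(lesion_mask)
    peri = np.zeros_like(lesion_mask)
    for i in range(1, n + 1):
        comp = labels == i
        if (comp & near_ventricles).any():
            peri |= comp
    return peri, lesion_mask & ~peri  # periventricular, subcortical

# Toy 2D example: one lesion adjacent to the ventricles, one far away.
vent = np.zeros((20, 20), dtype=bool)
vent[9:11, 9:11] = True
lesions = np.zeros((20, 20), dtype=bool)
lesions[9:11, 12:14] = True   # near the ventricles -> periventricular
lesions[0:2, 0:2] = True      # far away -> subcortical
peri, sub = label_wmls(lesions, vent)
```

The same code runs unchanged on 3D masks, since `scipy.ndimage` operations are dimension-agnostic.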
Improvements in level set segmentation of 3D small animal imagery
Author(s):
Jeffery R Price;
Deniz Aykac;
Jonathan Wall
Show Abstract
In this paper, we investigate several improvements to region-based level set algorithms in the context of segmenting
x-ray CT data from pre-clinical imaging of small animal models. We incorporate a recently introduced
signed distance preserving term into a region-based level set model and provide formulas for a semi-implicit
finite difference implementation. We illustrate some pitfalls of topology preserving level sets and introduce the
concept of connectivity preservation as a potential alternative. We illustrate the benefits of these improvements
on phantom and real data.
Model-based segmentation and quantification of fluorescent bacteria in 3D microscopy live cell images
Author(s):
Stefan Wörz;
Constantin Kappel;
Roland Eils;
Karl Rohr
Show Abstract
We introduce a new model-based approach for segmenting and quantifying fluorescent bacteria in 3D microscopy
live cell images. The approach is based on a new 3D superellipsoidal parametric intensity model, which is directly
fitted to the image intensities within 3D regions-of-interest. Based on the fitting results, we can directly compute
the total amount of intensity (fluorescence) of each cell. In addition, we introduce a method for automatic
initialization of the model parameters, and we propose a method for simultaneously fitting clustered cells by
using a superposition of 3D superellipsoids for model fitting. We demonstrate the applicability of our approach
on synthetic and real 3D microscopy images.
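As a toy 2D analogue of the 3D superellipsoidal parametric intensity model (an illustrative sketch with assumed shape and edge-smoothness parameters, not the authors' model), one can fit a smooth superellipse plateau directly to image intensities by least squares and then integrate the fitted model to estimate total fluorescence:

```python
import numpy as np
from scipy.optimize import least_squares

def superellipse_intensity(params, X, Y):
    """Smooth 2D superellipse intensity model: a plateau of height fg
    inside |x/a|^n + |y/b|^n <= 1, with a sigmoid edge (zero background)."""
    a, b, fg = params
    n = 4.0                                    # fixed squareness (assumed)
    f = np.abs(X / a) ** n + np.abs(Y / b) ** n
    z = np.clip(5.0 * (f - 1.0), -50.0, 50.0)  # clip to avoid exp overflow
    return fg / (1.0 + np.exp(z))

def fit_cell(image, X, Y, init):
    """Fit the model parameters directly to the image intensities."""
    res = least_squares(lambda p: (superellipse_intensity(p, X, Y) - image).ravel(), init)
    return res.x

# Synthetic 'cell' with semi-axes 4 and 2 and peak fluorescence 100.
ys, xs = np.mgrid[-10:11, -10:11]
truth = superellipse_intensity([4.0, 2.0, 100.0], xs, ys)
params = fit_cell(truth, xs, ys, init=[3.5, 2.5, 90.0])

# Total fluorescence computed from the fitted model, as in the abstract.
total_fluorescence = superellipse_intensity(params, xs, ys).sum()
```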
Segmentation of liver region with tumorous tissues
Author(s):
Xuejun Zhang;
Gobert Lee;
Tetsuji Tajima;
Teruhiko Kitagawa;
Masayuki Kanematsu;
Xiangrong Zhou;
Takeshi Hara;
Hiroshi Fujita;
Ryujiro Yokoyama;
Hiroshi Kondo;
Hiroaki Hoshi;
Shigeru Nawano;
Kenji Shinozaki
Show Abstract
Segmentation of an abnormal liver region based on CT or MR images is a crucial step in surgical planning. However,
precisely carrying out this step remains a challenge due either to connections of the liver to other organs or to the shape,
internal texture, and homogeneity of the liver, which may be extensively affected in cases of liver disease. Here, we propose a
non-density based method for extracting the liver region containing tumor tissues by edge detection processing. False
extracted regions are eliminated by a shape analysis method and thresholding processing. If multi-phase images are
available, the overall outcome of segmentation can be improved by subtracting two phase images, and the
connections can be further eliminated by referring to the intensity on another phase image. Within an edge liver map,
tumor candidates are identified by their different gray values relative to the liver. After elimination of the small and nonspherical
over-extracted regions, the final liver region integrates the tumor region with the liver tissue. In our experiment,
40 cases of MDCT images were used, and the results showed that our fully automatic method for segmentation of the liver
region is effective and robust despite the presence of hepatic tumors within the liver.
Semi-automatic parcellation of the corpus striatum
Author(s):
Ramsey Al-Hakim;
Delphine Nain;
James Levitt;
Martha Shenton;
Allen Tannenbaum
Show Abstract
The striatum is the input component of the basal ganglia from the cerebral cortex. It includes the caudate, putamen,
and nucleus accumbens. Thus, the striatum is an important component in limbic frontal-subcortical circuitry and is
believed to be relevant both for reward-guided behaviors and for the expression of psychosis. The dorsal striatum is
composed of the caudate and putamen, both of which are further subdivided into pre- and post-commissural components.
The ventral striatum (VS) is primarily composed of the nucleus accumbens. The striatum can be functionally divided
into three broad regions: 1) a limbic; 2) a cognitive; and 3) a sensorimotor region. The approximate corresponding
anatomic subregions for these 3 functional regions are: 1) the VS; 2) the pre/post-commissural caudate and the pre-commissural
putamen and 3) the post-commissural putamen.
We believe assessing these subregions, separately, in disorders with limbic and cognitive impairment such as
schizophrenia may yield more informative group differences in comparison with normal controls than prior parcellation
strategies of the striatum such as assessing the caudate and putamen. The manual parcellation of the striatum into these
subregions is currently defined using certain landmark points and geometric rules. Since identification of these areas is
important to clinical research, a reliable and fast parcellation technique is required.
Currently, only full manual parcellation using editing software is available; however, this technique is extremely
time intensive. Previous work has shown successful application of heuristic rules in a semi-automatic platform [1]. We
present here a semi-automatic algorithm which implements the rules currently used for manual parcellation of the
striatum, but requires minimal user input and significantly reduces the time required for parcellation.
Edge-directed inference for microaneurysms detection in digital fundus images
Author(s):
Ke Huang;
Michelle Yan;
Selin Aviyente
Show Abstract
Microaneurysm (MA) detection is a critical step in diabetic retinopathy screening, since MAs are the earliest
visible warning of potential future problems. A variety of algorithms have been proposed for MA detection
in mass screening. The core technology of most existing methods is a directional mathematical morphological
operation called the "Top-Hat" filter, which requires multiple filtering operations at each pixel. Background
structure, uneven illumination and noise often cause confusion between MAs and some non-MA structures and
limit the applicability of the filter. In this paper, a novel detection framework based on edge-directed inference
is proposed for MA detection. Candidate MA regions are first delineated from the edge map of a fundus image.
Features measuring shape, brightness and contrast are extracted for each candidate MA region to better separate
true MAs from false detections. Algorithmic analysis and empirical evaluation reveal that the proposed edge-directed
inference outperforms the "Top-Hat" based algorithm in both detection accuracy and computational speed.
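For reference, the baseline the paper compares against can be sketched as a (non-directional, single-scale) white top-hat on the inverted image, which highlights small dark blobs such as MAs; the spot size and threshold here are arbitrary illustration values:

```python
import numpy as np
from scipy import ndimage

def tophat_ma_candidates(image, spot_size=5, threshold=20):
    """Baseline 'top-hat' detector: microaneurysms appear as small dark
    blobs, so a white top-hat on the inverted image highlights them.
    White top-hat = image minus its grayscale opening."""
    inv = image.max() - image
    opened = ndimage.grey_opening(inv, size=(spot_size, spot_size))
    tophat = inv - opened
    return tophat > threshold

# Synthetic fundus patch: bright background with one small dark spot.
img = np.full((32, 32), 200.0)
img[15:18, 15:18] = 120.0   # 3x3 dark spot (candidate MA)
mask = tophat_ma_candidates(img)
```

The full directional filter applies such operations along multiple orientations per pixel, which is the cost the edge-directed approach avoids.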
Comparative evaluation of a novel 3D segmentation algorithm on in-treatment radiotherapy cone beam CT images
Author(s):
Gareth Price;
Chris Moore
Show Abstract
Image segmentation and delineation is at the heart of modern radiotherapy, where the aim is to deliver as high a radiation
dose as possible to a cancerous target whilst sparing the surrounding healthy tissues. This, of course, requires that a
radiation oncologist dictates both where the tumour and any nearby critical organs are located. As well as in treatment
planning, delineation is of vital importance in image guided radiotherapy (IGRT): organ motion studies demand that
features across image databases are accurately segmented, whilst if on-line adaptive IGRT is to become a reality, speedy
and correct target identification is a necessity.
Recently, much work has been put into the development of automatic and semi-automatic segmentation tools, often
using prior knowledge to constrain some grey level, or derivative thereof, interrogation algorithm. It is hoped that such
techniques can be applied to organ at risk and tumour segmentation in radiotherapy.
In this work, however, we make the assumption that grey levels do not necessarily determine a tumour's extent,
especially in CT where the attenuation coefficient can often vary little between cancerous and normal tissue. In this
context we present an algorithm that generates a discontinuity free delineation surface driven by user placed, evidence
based support points. In regions of sparse user supplied information, prior knowledge, in the form of a statistical shape
model, provides guidance.
A small case study is used to illustrate the method. Multiple observers (between 3 and 7) used both the presented tool
and a commercial manual contouring package to delineate the bladder on a serially imaged (10 cone beam CT volumes)
prostate patient. A previously presented shape analysis technique is used to quantitatively compare the observer
variability.
Toward automated detection and segmentation of aortic calcifications from radiographs
Author(s):
François Lauze;
Marleen de Bruijne
Show Abstract
This paper aims at automatically measuring the extent of calcified plaques in the lumbar aorta from standard
radiographs. Calcifications in the abdominal aorta are an important predictor for future cardiovascular morbidity
and mortality. Accurate and reproducible measurement of the amount of calcified deposit in the aorta is therefore
of great value in disease diagnosis and prognosis, treatment planning, and the study of drug effects. We propose
a two-step approach in which first the calcifications are detected by an iterative statistical pixel classification
scheme combined with aorta shape model optimization. Subsequently, the detected calcified pixels are used as the
initialization for an inpainting based segmentation. We present results on synthetic images from the inpainting
based segmentation as well as results on several X-ray images based on the two-step approach.
Automated segmentation of hepatic vessel trees in non-contrast x-ray CT images
Author(s):
Suguru Kawajiri;
Xiangrong Zhou;
Xuejin Zhang;
Takeshi Hara;
Hiroshi Fujita;
Ryujiro Yokoyama;
Hiroshi Kondo;
Masayuki Kanematsu;
Hiroaki Hoshi
Show Abstract
Hepatic vessel trees are the key structures in the liver. Knowledge of the hepatic vessel trees is important for liver surgery
planning and hepatic disease diagnosis such as portal hypertension. However, hepatic vessels cannot be easily distinguished
from other liver tissues in non-contrast CT images. Automated segmentation of hepatic vessels in non-contrast CT images
is a challenging issue. In this paper, an approach for automated segmentation of hepatic vessel trees in non-contrast X-ray
CT images is proposed. Enhancement of hepatic vessels is performed using two techniques: (1) histogram transformation
based on a Gaussian window function; (2) multi-scale line filtering based on eigenvalues of the Hessian matrix. After the
enhancement of hepatic vessels, candidate hepatic vessels are extracted by thresholding. Small connected regions of
size less than 100 voxels are considered false positives and are removed. This approach is applied to
20 cases of non-contrast CT images. Hepatic vessel trees segmented from the contrast-enhanced CT images of the same
patient are used as the ground truth in evaluating the performance of the proposed segmentation method. Results show that
the proposed method can enhance and segment the hepatic vessel regions in non-contrast CT images correctly.
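The Hessian-eigenvalue line filtering step can be sketched in 2D (a single-scale simplification of the multi-scale 3D filter described above): a bright line has one strongly negative Hessian eigenvalue across the line and one near zero along it:

```python
import numpy as np
from scipy import ndimage

def line_filter_2d(image, sigma=2.0):
    """Single-scale Hessian line filter (2D sketch of the 3D vessel
    filter). Second derivatives are taken via Gaussian derivative
    filters; a strong negative eigenvalue signals a bright line."""
    # Axis 0 = rows (y), axis 1 = columns (x).
    Ixx = ndimage.gaussian_filter(image, sigma, order=(0, 2))
    Iyy = ndimage.gaussian_filter(image, sigma, order=(2, 0))
    Ixy = ndimage.gaussian_filter(image, sigma, order=(1, 1))
    # Closed-form eigenvalues of the 2x2 Hessian [[Ixx, Ixy], [Ixy, Iyy]].
    mean = (Ixx + Iyy) / 2.0
    tmp = np.sqrt(((Ixx - Iyy) / 2.0) ** 2 + Ixy ** 2)
    l2 = mean - tmp                       # the more negative eigenvalue
    return np.where(l2 < 0, -l2, 0.0)    # line response

# Synthetic slice with a bright vertical line at column 32.
img = np.zeros((64, 64))
img[:, 32] = 100.0
resp = line_filter_2d(img)
```

In the multi-scale version, this response is computed for several values of `sigma` and the maximum over scales is kept, so that vessels of different radii are enhanced.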
WHIPPET: a collaborative software environment for medical image processing and analysis
Author(s):
Yangqiu Hu;
David R. Haynor;
Kenneth R. Maravilla
Show Abstract
While there are many publicly available software packages for medical image processing, making them available to end
users in clinical and research labs remains non-trivial. An even more challenging task is to mix these packages to form
pipelines that meet specific needs seamlessly, because each piece of software usually has its own input/output formats,
parameter sets, and so on. To address these issues, we are building WHIPPET (Washington Heterogeneous Image
Processing Pipeline EnvironmenT), a collaborative platform for integrating image analysis tools from different sources.
The central idea is to develop a set of Python scripts which glue the different packages together and make it possible to
connect them in processing pipelines. To achieve this, an analysis is carried out for each candidate package for
WHIPPET, describing input/output formats, parameters, ROI description methods, scripting and extensibility and
classifying its compatibility with other WHIPPET components as image file level, scripting level, function extension
level, or source code level. We then identify components that can be connected in a pipeline directly via image format
conversion. We set up a TWiki server for web-based collaboration so that component analysis and task requests can be
performed online, along with project tracking, knowledge base management, and technical support. Currently WHIPPET
includes the FSL, MIPAV, FreeSurfer, BrainSuite, Measure, DTIQuery, and 3D Slicer software packages, and is
expanding. Users have identified several needed task modules and we report on their implementation.
Automatic measurement of oblique-oriented airway dimension at volumetric CT: effect of imaging parameters and obliquity of airway with FWHM method using a physical phantom
Author(s):
Namkug Kim;
Joon Beom Seo;
Koun Sik Song M.D.;
Suk-Ho Kang
Show Abstract
This study was conducted to assess the influence of various CT imaging parameters (reconstruction kernel, field of
view, and slice thickness) and airway obliquity on the automatic measurement of airway wall thickness with the
FWHM method and a physical phantom. The phantom, consisting of 11 poly-acryl tubes with various inner lumen
diameters and wall thicknesses, was used in this study. The measured density of the wall was 150 HU. The airspace
outside the tubes was filled with polyurethane foam, whose density was -900 HU, similar to that of emphysematous
regions. CT images, obtained with MDCT (Sensation 16, Siemens), were reconstructed with various reconstruction
kernels (B10f, B30f, B50f, B70f and B80f), fields of view (180 mm, 270 mm, 360 mm), and slice thicknesses (0.75,
1, and 2 mm). The phantom was scanned at various oblique angles (0, 30, 45, and 60 degrees). Using in-house
airway measurement software, the central axis of the oblique airway was determined by a 3D thinning algorithm
and a CT image perpendicular to the axis was reconstructed. The luminal area, outer boundary, and wall thickness
were measured by the FWHM method on each image. The actual dimensions of each tube were compared with the
values measured on each CT data set. A sharper reconstruction kernel, thicker slices, and a larger oblique angle of
the airway axis resulted in a decrease of the measured wall thickness. There was interaction between imaging
parameters and airway obliquity with respect to measurement accuracy. There was a threshold of 1-mm wall
thickness below which the measurement failed to represent changes in real thickness. Even with the smaller FOV,
accuracy was not improved. Use of a standard kernel (B50f) and 0.75 mm slice thickness gave the most accurate
measurements, independent of airway obliquity (mean error: 0 degrees 0.067±0.05 mm, 30 degrees 0.076±0.09, 45
degrees 0.074±0.09, 60 degrees 0.091±0.09). With these imaging parameters, there was no significant difference
(paired t-test: p > 0.05) between the actual dimensions and the measurements at each oblique angle. The accuracy
of airway wall measurement was strongly influenced by imaging parameters and airway obliquity. For accurate
measurement, independent of obliquity, we recommend CT images reconstructed with 0.75 mm slice thickness and
the B50f or B30f kernel with a sharpening filter.
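The FWHM measurement itself reduces to locating half-maximum crossings on a 1D intensity profile drawn across the wall; a minimal sketch with linear interpolation between samples (the triangular profile is synthetic, not phantom data):

```python
import numpy as np

def fwhm_edges(profile, xs=None):
    """Full-width-at-half-maximum of a 1D intensity profile across a
    wall: find where the profile crosses half of its peak (relative to
    the minimum) on each side, interpolating between samples."""
    profile = np.asarray(profile, dtype=float)
    if xs is None:
        xs = np.arange(len(profile))
    peak = int(profile.argmax())
    half = (profile[peak] + profile.min()) / 2.0

    def cross(idx_range):
        # Walk outward from the peak to the first half-maximum crossing.
        prev = peak
        for i in idx_range:
            if profile[i] <= half:
                # Linear interpolation between samples i and prev.
                t = (half - profile[i]) / (profile[prev] - profile[i])
                return xs[i] + t * (xs[prev] - xs[i])
            prev = i
        return xs[idx_range[-1]]

    left = cross(range(peak - 1, -1, -1))
    right = cross(range(peak + 1, len(profile)))
    return left, right, right - left

# Triangular 'wall' profile: peak 4 at x=5, half-max crossings at 3 and 7.
prof = np.array([0, 0, 1, 2, 3, 4, 3, 2, 1, 0, 0], dtype=float)
left, right, width = fwhm_edges(prof)
```

The wall-thickness bias reported above arises because smoothing (kernel, slice thickness, obliquity) reshapes this profile, shifting the half-maximum crossings.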
Automatic multiple threshold scheme for segmentation of tomograms
Author(s):
K. J. Batenburg;
J. Sijbers
Show Abstract
Tomographic reconstructions, which are generally gray-scale images, are often segmented so as to extract quantitative
information, such as the shape or volume of image objects. At present, segmentation is usually performed by
thresholding. However, the process of threshold selection is somewhat arbitrary and requires human interaction.
In this paper, we present an algorithmic approach for automatically selecting the segmentation thresholds by
using the available tomographic projection data. Assuming that each material (i.e., tissue type) in the sample
has a characteristic, approximately constant gray value, thresholds are computed for which the segmented image
corresponds optimally with the projection data.
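The idea can be sketched as follows, with axis sums standing in for real tomographic projections and two assumed characteristic gray values: every candidate threshold yields a segmented image, and the threshold whose re-projection best matches the measured projection data is kept:

```python
import numpy as np

def projection_optimal_threshold(image, projections, grays):
    """Choose the threshold for which the segmented image (pixels set to
    one of two characteristic gray values) best reproduces the measured
    projection data (here simplified to row and column sums)."""
    low, high = grays
    best_t, best_err = None, np.inf
    for t in np.unique(image):
        seg = np.where(image >= t, high, low)
        row_p, col_p = seg.sum(axis=1), seg.sum(axis=0)
        err = (np.sum((row_p - projections[0]) ** 2)
               + np.sum((col_p - projections[1]) ** 2))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# Noisy reconstruction of a two-material phantom (true grays 0 and 100).
truth = np.zeros((16, 16))
truth[4:12, 4:12] = 100.0
rng = np.random.default_rng(1)
recon = truth + rng.normal(0, 3, truth.shape)
projs = (truth.sum(axis=1), truth.sum(axis=0))
t = projection_optimal_threshold(recon, projs, grays=(0.0, 100.0))
```

No human-chosen threshold enters: the projection data themselves select it, which is the point of the method.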
Lesion detection using Gabor-based saliency field mapping
Author(s):
Marc Macenko;
Rutao Luo;
Mehmet Celenk;
Limin Ma;
Qiang Zhou
Show Abstract
In this paper, we present a method that detects lesions in two-dimensional (2D) cross-sectional brain images. By
calculating the major and minor axes of the brain, we calculate an estimate of the background, without any a
priori information, to use in inverse filtering. Shape saliency computed by a Gabor filter bank is used to further
refine the results of the inverse filtering. The proposed algorithm was tested on different images of "The Whole
Brain Atlas" database. The experimental results have produced 93% classification accuracy in processing 100
arbitrary images representing different kinds of brain lesions.
Evaluation of internal carotid artery segmentation by InsightSNAP
Author(s):
Emily L. Spangler;
Christopher Brown;
John A. Roberts;
Brian E. Chapman
Show Abstract
Quantification of cervical carotid geometry may facilitate improved clinical decision making and scientific discovery.
We set out to evaluate the ability of InsightSNAP (ITK-SNAP), an open-source segmentation program for 3D medical
images (http://www.itksnap.org, version 1.4), to semi-automatically segment internal carotid arteries. A sample of five
individuals (three normal volunteers and two diseased patients) was imaged with an MR exam consisting of a MOTSA
TOF MRA image volume and multiple black blood images acquired with different contrast weightings. Comparisons
were made to a manual segmentation created during simultaneous evaluation of the MOTSA image and the various
black blood images (typically PD-weighted, T1-weighted, and T2-weighted). These individuals were selected as a
training set to determine acceptable parameters for ITK-SNAP's semi-automatic level sets segmentation method. The
conclusion from this training set was that the initial thresholding (assigning probabilities to the intensities of image
pixels) in the image pre-processing step was most important to obtaining an acceptable segmentation. Unfortunately no
consistent trends emerged in how this threshold should be chosen. Figures of percent over- and under-segmentation
were computed as a means of comparing the hand segmented and semi-automatically segmented internal carotids.
Overall the under-segmentation by ITK-SNAP (voxels included in the manual segmentation but not in the semi-automated
segmentation) was 10.94% ± 6.35%, while the over-segmentation (voxels excluded from the manual
segmentation but included in the semi-automated segmentation) was 8.16% ± 4.40%, defined by reference to the total
number of voxels included in the manual segmentation.
A method for smoothing segmented lung boundary in chest CT images
Author(s):
Yeny Yim;
Helen Hong
Show Abstract
To segment low density lung regions in chest CT images, most methods use differences in the gray-level values of
pixels. However, radiodense pulmonary vessels and pleural nodules that contact the surrounding anatomy are often
excluded from the segmentation result. To smooth the lung boundary segmented by gray-level processing in chest CT
images, we propose a new method using scan line search. Our method consists of three main steps. First, the lung boundary
is extracted by our automatic segmentation method. Second, the segmented lung contour is smoothed in each axial CT slice.
We propose a scan line search to track the points on the lung contour and efficiently find rapidly changing curvature. Finally,
to provide a consistent appearance between lung contours in adjacent axial slices, 2D closing in the coronal plane is applied
within a pre-defined subvolume. Our method has been evaluated in terms of visual inspection, accuracy and processing
time. The results show that the smoothness of the lung contour was considerably increased by compensating for
pulmonary vessels and pleural nodules.
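The final step can be sketched as plane-wise morphological closing (an illustration of the 2D-closing-in-coronal-plane idea; the axis convention and subvolume bounds are assumptions):

```python
import numpy as np
from scipy import ndimage

def smooth_lung_across_slices(mask, subvolume):
    """2D binary closing applied in each coronal plane of a pre-defined
    subvolume, to keep lung contours consistent between adjacent axial
    slices. Axes assumed (z, y, x); coronal planes have fixed y."""
    z0, z1, y0, y1, x0, x1 = subvolume
    out = mask.copy()
    struct = ndimage.generate_binary_structure(2, 1)  # 4-connected
    for y in range(y0, y1):
        plane = out[z0:z1, y, x0:x1]
        out[z0:z1, y, x0:x1] = ndimage.binary_closing(plane, structure=struct)
    return out

# Toy mask with a one-voxel notch across axial slices; closing fills it.
mask = np.zeros((7, 3, 7), dtype=bool)
mask[1:6, 1, 1:6] = True
mask[3, 1, 3] = False   # notch between adjacent axial slices
out = smooth_lung_across_slices(mask, (0, 7, 0, 3, 0, 7))
```

Closing in the coronal plane couples neighboring axial slices, which is why it restores consistency that per-slice smoothing alone cannot.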
A morphing active surface model for automatic re-contouring in 4D radiotherapy
Author(s):
Xiao Han;
Lyndon S. Hibbard;
Scott Brame
Show Abstract
Delineation of tumor and organs at risk on each phase of 4D CT images is an essential step in adaptive radiotherapy
planning. Manual contouring of the large amount of data is time-consuming and impractical. (Semi-) automated methods
typically rely on deformable image registration techniques to automatically map the manual contours drawn in one
image to all the other phases in order to get complete 4D contouring, a procedure known as automatic re-contouring.
Disadvantages of such approaches are that the manual contouring information is not used in the registration process and
the whole volume registration is highly inefficient. In this work, we formulate the automatic re-contouring in a
deformable surface model framework, which effectively restricts the computation to a lower dimensional space. The
proposed framework was inspired by the morphing active contour model proposed by Bertalmio et al. [1], but we
address some limitations of the original method. First, a surface-based regularization is introduced to improve robustness
with respect to noise. Second, we design a multi-resolution approach to further improve computational efficiency and to
account for large deformations. Third, discrete meshes are used to represent the surface model instead of the implicit
level set framework for better computational speed and simpler implementation. Experiment results show that the new
morphing active surface model method performs as accurately as a volume registration based re-contouring method but
is nearly an order of magnitude faster. The new formulation also allows easy combination of registration and
segmentation techniques for further improvement in accuracy and robustness.
Fully automatic lesion boundary detection in ultrasound breast images
Author(s):
M. H. Yap;
E. A. Edirisinghe;
H. E. Bez
Show Abstract
We propose a novel approach to fully automatic lesion boundary detection in ultrasound breast images. The novelty of
the proposed work lies in the complete automation of the manual process of initial Region-of-Interest (ROI) labeling and
in the procedure adopted for the subsequent lesion boundary detection. Histogram equalization is initially used to pre-process
the images, followed by hybrid filtering and multifractal analysis stages. Subsequently, a single-valued
thresholding segmentation stage and a rule-based approach are used for the identification of the lesion ROI and the point
of interest that is used as the seed point. Next, starting from this point, an isotropic Gaussian function is applied to the
inverted, original ultrasound image. The lesion area is then separated from the background by a thresholding
segmentation stage and the initial boundary is detected via edge detection. Finally to further improve and refine the
initial boundary, we make use of a state-of-the-art active contour method (i.e. gradient vector flow (GVF) snake model).
We provide results that include judgments from expert radiologists on 360 ultrasound images proving that the final
boundary detected by the proposed method is highly accurate. We compare the proposed method with two existing
state-of-the-art methods, namely the radial gradient index (RGI) filtering technique of Drukker et al. and the local mean
technique proposed by Yap et al., to demonstrate the proposed method's robustness and accuracy.
Fully automatic estimation of object pose for segmentation initialization: application to cardiac MR and echocardiography images
Author(s):
Meng Ma;
Johan G. Bosch;
Johan H. C. Reiber;
Boudewijn P. F. Lelieveldt
Show Abstract
Automatic image segmentation techniques are essential for medical image interpretation and analysis. Though
numerous methods on image segmentation have been reported, the quality of a segmentation often heavily relies on the
positioning of an accurate initial contour. In this paper, a novel solution is presented for the automated object detection
in medical image data. A shape- and intensity template is generated from a training set, and both the search image and
the template are mapped into a log-polar domain, where rotation and scale are represented by a translation. Orientation
and scale of the object are estimated by determining maximum normalized correlation using a Symmetric Phase Only
Matched Filter (SPOMF) with a peak enhancement filter. The detected orientation and scale are subsequently applied to
the template, and a second pass of the SPOMF using the transformed template yields the actual position of the object in
the search image. Performance tests were carried out on two imaging modalities: a set of cardiac MRI images from 34
patients and 2D echocardiograms from 100 patients.
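The core of SPOMF is phase-only correlation: the cross-power spectrum is normalized to unit magnitude, so its inverse FFT peaks sharply at the displacement between the two images (after the log-polar mapping, that displacement encodes rotation and scale). A minimal sketch on toy data, recovering a plain translation:

```python
import numpy as np

def phase_correlation(a, b):
    """Phase-only matched filtering, reduced to its core: normalize the
    cross-power spectrum to unit magnitude so the inverse FFT yields a
    sharp peak at the translation between the two images."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(corr.argmax(), corr.shape)
    return peak  # (dy, dx), modulo the image size

# A template and a copy shifted by (3, 5): the peak recovers the shift.
rng = np.random.default_rng(2)
template = rng.random((32, 32))
shifted = np.roll(template, (3, 5), axis=(0, 1))
dy, dx = phase_correlation(shifted, template)
```

Applied to log-polar-mapped magnitude spectra, the same peak location reads off rotation and scale, which the method then undoes before a second pass locates the object's position.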
Automatic corpus callosum segmentation for standardized MR brain scanning
Author(s):
Qing Xu;
Hong Chen;
Li Zhang;
Carol L. Novak
Show Abstract
Magnetic Resonance (MR) brain scanning is often planned manually with the goal of aligning the imaging plane with
key anatomic landmarks. The planning is time-consuming and subject to inter- and intra-operator variability. An
automatic and standardized planning of brain scans is highly useful for clinical applications, and for maximum utility
should work on patients of all ages. In this study, we propose a method for fully automatic planning that utilizes the
landmarks from two orthogonal images to define the geometry of the third scanning plane. The corpus callosum (CC) is
segmented in sagittal images by an active shape model (ASM), and the result is further improved by weighting the
boundary movement with confidence scores and incorporating region based refinement. Based on the extracted contour
of the CC, several important landmarks are located and then combined with landmarks from the coronal or transverse
plane to define the geometry of the third plane. Our automatic method is tested on 54 MR images from 24 patients and 3
healthy volunteers, with ages ranging from 4 months to 70 years. The average distances to two
manually labeled points on the CC were 3.54 mm and 4.19 mm, and the plane orientation differed by an average of 2.48 degrees
from that of the line connecting them, demonstrating that our method is sufficiently accurate for clinical use.
Segmentation of magnetic resonance images of the thighs for a new National Institutes of Health initiative
Author(s):
A. Monzon;
P. F. Hemler;
M. Nalls;
T. Manini;
B. C. Clark;
T. B. Harris M.D.;
M. J. McAuliffe
Show Abstract
This paper describes a new system for semi-automatically segmenting the background, subcutaneous fat, interstitial fat,
muscle, bone, and bone marrow from magnetic resonance images (MRIs) of volunteers for a new osteoarthritis study.
Our system first creates separate right and left thigh images from a single MR image containing both legs. The
subcutaneous fat boundary is very difficult to detect in these images and is therefore interactively defined with a single
boundary. The volume within the boundary is then automatically processed with a series of clustering and
morphological operations designed to identify and classify the different tissue types required for this study. Once the
tissues have been identified, the volume of each tissue is determined and a single, false colored, segmented image
results. We quantitatively compare the segmentation in three different ways. In our first method we simply compare
the tissue volumes of the resulting segmentations performed independently on both the left and right thigh. A second
quantification method compares our results temporally with three image sets of the same volunteer made one month
apart including a month of leg disuse. Our final quantification methodology compares the volumes of different tissues
detected with our system to the results of a manual segmentation performed by a trained expert. The segmented image
results of four different volunteers using images acquired at three different times suggest that the system described in
this paper provides more consistent results than the manually segmented set. Furthermore, measurements of the left and
right thigh and temporal results for both segmentation methods follow the anticipated trend of increasing fat and
decreasing muscle over the period of disuse.
CxCxC: compressed connected components labeling algorithm
Author(s):
Nithin Nagaraj;
Shekhar Dwivedi
Show Abstract
We propose Compressed Connected Components (CxCxC), a new fast algorithm for labeling connected components in binary images that makes use of compression. We break the given 3D image into non-overlapping 2x2x2 cubes of voxels (2x2 squares of pixels for 2D) and encode these binary values as the bits of a single decimal integer.
We perform the connected component labeling on the resulting compressed data set. A recursive labeling approach by the use of smart-masks on the encoded decimal values is performed. The output is finally decompressed back to the original size by decimal-to-binary conversion of the cubes to retrieve the connected components in a lossless fashion. We demonstrate the efficacy of such encoding and labeling for large data sets (up to 1392 x 1040 for 2D and
512 x 512 x 336 for 3D). CxCxC reports a speed gain of 4x for 2D and 12x for 3D with memory savings of 75% for 2D and 88% for 3D over the conventional (recursive growing of component labels) connected components algorithm.
We also compare our method with those of VTK and ITK and find that we outperform both with speed gains of 3x and 6x for 3D. These features make CxCxC highly suitable for medical imaging and multi-media applications where the size of data sets and the number of connected components can be very large.
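The encoding step can be sketched in 2D: each non-overlapping 2x2 block of the binary image is packed into one small integer, so an all-zero block can be skipped wholesale during labeling. A minimal sketch of our reading of the scheme (not the authors' code; numpy is assumed):

```python
import numpy as np

def pack_2x2(img):
    """Encode each non-overlapping 2x2 block of a binary image as a
    single 4-bit integer, the 2D analogue of CxCxC's 2x2x2 encoding."""
    h, w = img.shape
    assert h % 2 == 0 and w % 2 == 0, "pad the image to even dimensions first"
    # group pixels into (block row, block col, 2, 2) tiles
    blocks = img.reshape(h // 2, 2, w // 2, 2).transpose(0, 2, 1, 3)
    bits = blocks.reshape(h // 2, w // 2, 4)
    weights = np.array([8, 4, 2, 1])  # bit weight of each pixel in the block
    return (bits * weights).sum(axis=-1)

img = np.array([[1, 0, 1, 1],
                [0, 0, 1, 1],
                [1, 1, 0, 0],
                [1, 1, 0, 0]], dtype=np.uint8)
packed = pack_2x2(img)  # a value of 0 means "all background": skip it
```

Decompression is the reverse: unpack each integer's bits back into a 2x2 tile, which is what makes the labeling lossless.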
Automatic 4D segmentation of the left ventricle in cardiac-CT-data
Author(s):
Dominik Fritz;
Julia Kroll;
Rüdiger Dillmann;
Michael Scheuering
Show Abstract
The manual segmentation and analysis of 4D high resolution multi slice cardiac CT datasets is both labor
intensive and time consuming. Therefore, it is necessary to supply the cardiologist with powerful software tools,
to segment the myocardium and the cardiac cavities in all cardiac phases and to compute the relevant diagnostic
parameters.
In recent years there have been several publications concerning the segmentation and analysis of the left
ventricle (LV) and myocardium for a single phase or for the diagnostically most relevant phases, the end-diastole
(ED) and the end-systole (ES). However, for a complete diagnosis, and especially for wall motion abnormalities, it
is necessary to analyze not only the motion endpoints ED and ES, but also all phases in-between.
In this paper a novel approach for the 4D segmentation of the left ventricle in cardiac-CT-data is presented.
The segmentation of the 4D data is divided into a first part, which segments the motion endpoints of the cardiac
cycle ED and ES and a second part, which segments all phases in-between. The first part is based on a bi-temporal
statistical shape model of the left ventricle. The second part uses a novel approach based on the
individual volume curve for the interpolation between ED and ES and afterwards an active contour algorithm
for the final segmentation.
The volume-curve-based interpolation step allows the subsequent segmentation of the phases
between ED and ES to be constrained to very small search intervals, which makes the segmentation process faster and more robust.
Automated segmentation of mammary gland regions in non-contrast torso CT images based on probabilistic atlas
Author(s):
X. Zhou;
M. Kan;
T. Hara;
H. Fujita;
K. Sugisaki;
R. Yokoyama;
G. Lee;
H. Hoshi
Show Abstract
The identification of mammary gland regions is a necessary processing step during anatomical structure
recognition of the human body and can be expected to provide useful information for breast tumor diagnosis. This paper
proposes a fully-automated scheme for segmenting the mammary gland regions in non-contrast torso CT images. This
scheme calculates the probability for each voxel belonging to the mammary gland or other regions (for example
pectoralis major muscles) in CT images and decides the mammary gland regions automatically. The probability is
estimated from the location of the mammary gland and pectoralis major muscles in CT images. The location (referred to as a
probabilistic atlas) is learned from pre-segmentation results in a number of different CT scans, and the CT
number distribution is approximated using a Gaussian function. We applied this scheme to 66 patient cases (female, age:
40-80) and evaluated the accuracy by using the coincidence rate between the segmented result and gold standard that is
generated manually by a radiologist for each CT case. The mean value of the coincidence rate was 0.82 with the standard
deviation of 0.09 for 66 CT cases.
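The per-voxel decision the abstract describes combines a spatial prior (the atlas) with a Gaussian model of the CT numbers. A minimal sketch of that combination (the mean and width below are illustrative values we chose, not the paper's):

```python
import numpy as np

def mammary_posterior(ct_value, atlas_prob, mu=-50.0, sigma=40.0):
    """Posterior-like score that a voxel belongs to the mammary gland:
    the spatial atlas prior times a Gaussian likelihood on the CT
    number. mu and sigma are illustrative HU values, not from the paper."""
    likelihood = np.exp(-0.5 * ((ct_value - mu) / sigma) ** 2)
    return atlas_prob * likelihood
```

The voxel is then assigned to whichever region (mammary gland, pectoralis major, other) yields the highest such score.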
Three dimensional analysis of the vascular perfusion for anterolateral thigh perforator flaps
Author(s):
Jiaxing Xue;
Jean Gao;
Gary Arbique;
Michel Saint-Cyr;
Dan Hatef;
Spencer Brown
Show Abstract
Quantitative analysis of three-dimensional (3D) blood flow direction and location will benefit and guide the surgical
thinning and dissection process. Toward this goal, this study was performed to reconstruct 3D vascular trees with the
incorporation of temporal information from contrast-agent propagation. A computational technique based on our
previous work to segment the 3D vascular tree structure from the CT scan volume image sets was proposed. This
technique utilizes the deformation method, a moving grid methodology traditionally used to improve
computational accuracy and efficiency in solving differential equations. Compared with our previous work, we
extended the moving grid deformation method to 3D and incorporated a 3D region growing method for initial
segmentation. Finally, a 3D divergence operator was applied to delineate vascular tree structures from the 3D grid
volume plot. Experimental results show the 3D nature of the vascular structure and four-dimensional (4D) vascular tree
evolving process. The proposed computational framework demonstrates its effectiveness and improvement in the
modeling of 3D vascular tree.
Signaling local non-credibility in an automatic segmentation pipeline
Author(s):
Joshua H. Levy;
Robert E. Broadhurst;
Surajit Ray;
Edward L. Chaney;
Stephen M. Pizer
Show Abstract
The advancing technology for automatic segmentation of medical images should be accompanied by techniques
to inform the user of the local credibility of results. To the extent that this technology produces clinically
acceptable segmentations for a significant fraction of cases, there is a risk that the clinician will assume every
result is acceptable. In the less frequent case where segmentation fails, we are concerned that unless the user is
alerted by the computer, she would still put the result to clinical use. By alerting the user to the location of a
likely segmentation failure, we allow her to apply limited validation and editing resources where they are most
needed.
We propose an automated method to signal suspected non-credible regions of the segmentation, triggered by
statistical outliers of the local image match function. We apply this test to m-rep segmentations of the bladder
and prostate in CT images using a local image match computed by PCA on regional intensity quantile functions.
We validate these results by correlating the non-credible regions with regions that have surface distance
greater than 5.5 mm to a reference segmentation for the bladder. A 6 mm surface distance was used to validate
the prostate results. Varying the outlier threshold level produced a receiver operating characteristic with area
under the curve of 0.89 for the bladder and 0.92 for the prostate. Based on this preliminary result, our method has
been able to predict local segmentation failures and shows potential for validation in an automatic segmentation
pipeline.
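The trigger described here is an outlier test on the local image-match score. A minimal sketch of one such test (a z-score threshold; the threshold value is illustrative, not the paper's ROC operating point):

```python
import numpy as np

def flag_non_credible(match_scores, train_scores, k=2.5):
    """Flag regions whose local image-match score is a statistical
    outlier relative to the scores seen on training cases. k is an
    illustrative threshold, not the operating point from the paper."""
    mu, sd = np.mean(train_scores), np.std(train_scores)
    return np.abs((np.asarray(match_scores) - mu) / sd) > k
```

Flagged regions are exactly where the clinician's limited validation and editing effort should be directed.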
Application of 3D geometric tensors for segmenting cylindrical tree structures from volumetric datasets
Author(s):
Walter F. Good;
Xiao Hui Wang;
Carl Fuhrman;
Jules H. Sumkin M.D.;
Glenn S. Maitz;
Joseph K. Leader;
Cynthia Britton M.D.;
David Gur
Show Abstract
Many diagnostic problems involve the assessment of vascular structures or bronchial trees depicted in volumetric
datasets, but previous algorithms for segmenting cylindrical structures are not sufficiently robust for them to be widely
applied clinically. Local geometric information that is of importance in segmentation consists of voxel values and their
first and second derivatives. First derivatives can be generalized to the gradient and more generally the structure tensor,
while the second derivatives can be represented by Hessian matrices. It is desirable to exploit both kinds of information,
at the same time, in any voxel classification process, but few segmentation algorithms have attempted to do this. This
project compares segmentation based on the structure tensor to that based on the Hessian matrix, and attempts to
determine whether some combination of the two can demonstrate better performance than either individually. To
compare performance in a situation where a gold standard exists, the methods were tested on simulated tree structures.
We generated 3D tree structures with varying amounts of added noise, and processed them with algorithms based on the
structure tensor, the Hessian matrix, and a combination of the two. We applied an orientation-sensitive filter to smooth
the tensor fields. The results suggest that the structure tensor by itself is more effective in detecting cylindrical structures
than the Hessian tensor, and the combined tensor is better than either of the other tensors.
Automated definition of mid-sagittal planes for MRI brain scans
Author(s):
Hong Chen;
Qing Xu;
Li Zhang;
Atilla P. Kiraly;
Carol L. Novak
Show Abstract
In most magnetic resonance imaging (MRI) clinical examinations, the orientation and position of diagnostic scans are
manually defined by MRI operators. To accelerate the workflow, algorithms have been proposed to automate the
definition of the MRI scanning planes. A mid-sagittal plane (MSP), which separates the two cerebral hemispheres, is
commonly used to align MRI neurological scans, since it standardizes the visualization of important anatomy. We
propose an algorithm to define the MSP automatically based on lines separating the cerebral hemispheres in 2D coronal
and transverse images. Challenges to the automatic definition of separation lines are disturbances from the inclusion of
the shoulder, and the asymmetry of the brain. The proposed algorithm first detects the position of the head by fitting an
ellipse that maximizes the image gradient magnitude in the boundary region of the ellipse. A symmetrical axis is then
established which minimizes the difference between the image on either side of the axis. The pixels at the space between
the hemispheres are located in the adjacent area of the symmetrical axis, and a linear regression with robust weights
defines a line that best separates the two hemispheres. The geometry of MSP is calculated based on the separation lines
in the coronal and transverse views. Experiments on 100 images indicate that the result of the proposed algorithm is
consistent with the results obtained by domain experts and is significantly faster.
Simulated 3D ultrasound LV cardiac images for active shape model training
Author(s):
Constantine Butakoff;
Simone Balocco;
Sebastian Ordas
Show Abstract
In this paper a study of 3D ultrasound cardiac segmentation using Active Shape Models (ASM) is presented.
The proposed approach is based on a combination of a point distribution model constructed from a multitude of
high resolution MRI scans and the appearance model obtained from simulated 3D ultrasound images. Usually
the appearance model is learnt from a set of landmarked images. The significant level of noise, the low resolution
of 3D ultrasound images (3D US) and the frequent failure to capture the complete wall of the left ventricle (LV)
make automatic or manual landmarking difficult. One possible solution is to use artificially simulated 3D US
images since the generated images will match exactly the shape in question. In this way, by varying simulation
parameters and generating corresponding images, it is possible to obtain a training set where the image matches
the shape exactly. In this work the simulation of ultrasound images is performed by a convolutional approach.
The evaluation of segmentation accuracy is performed on both simulated and in vivo images. The results obtained
on 567 simulated images had an average error of 1.9 mm (1.73 ± 0.05 mm for epicardium and 2 ± 0.07 mm for
endocardium, with 95% confidence) with voxel size being 1.1 × 1.1 × 0.7 mm. The error on 20 in vivo data was
3.5 mm (3.44 ± 0.4 mm for epicardium and 3.73 ± 0.4 mm for endocardium). In most images the model was
able to approximate the borders of myocardium even when the latter was indistinguishable from the surrounding
tissues.
Fast and accurate border detection in dermoscopy images using statistical region merging
Author(s):
M. Emre Celebi;
Hassan A. Kingravi;
Hitoshi Iyatomi;
JeongKyu Lee;
Y. Alp Aslandogan;
William Van Stoecker;
Randy Moss;
Joseph M. Malters;
Ashfaq A. Marghoob
Show Abstract
As a result of advances in skin imaging technology and the development of suitable image processing
techniques during the last decade, there has been a significant increase of interest in the computer-aided
diagnosis of melanoma. Automated border detection is one of the most important steps in this procedure,
since the accuracy of the subsequent steps crucially depends on it. In this paper, a fast and unsupervised
approach to border detection in dermoscopy images of pigmented skin lesions based on the Statistical
Region Merging algorithm is presented. The method is tested on a set of 90 dermoscopy images. The
border detection error is quantified by a metric in which a set of dermatologist-determined borders is
used as the ground-truth. The proposed method is compared to six state-of-the-art automated methods
(optimized histogram thresholding, orientation-sensitive fuzzy c-means, gradient vector flow snakes,
dermatologist-like tumor extraction algorithm, meanshift clustering, and the modified JSEG method)
and borders determined by a second dermatologist. The results demonstrate that the presented method
achieves both fast and accurate border detection in dermoscopy images.
Fuzzy shape-based interpolation
Author(s):
Punam Kumar Saha;
Ying Zhuge;
Jayaram K. Udupa
Show Abstract
Image interpolation is an essential task in medical imaging because of limited and varying spatial and temporal
resolution in imaging devices; also, it is a necessary step while rotating an image for different purposes. In the literature,
interpolation techniques have been divided into two major groups: image-based and object-based. Shape-based
interpolation is a commonly used object-based method and it works on binary images. In this paper, we propose fuzzy
shape-based interpolation by using fuzzy distance transform theory that is applicable to fuzzy object representations.
The method essentially works in three steps as follows. Step 1: Separately compute the fuzzy distance transform (FDT)
of two successive slices. Step 2: Compute the FDT of the target slice by interpolating the FDT values of original slices.
Step 3: Compute the fuzzy object representation on the target slice by applying an inverse FDT (iFDT). Fuzzy shape-based
interpolation solves a fundamental problem of shape-based interpolation. Specifically, the new method requires
no binarization and it accurately handles the fuzziness of objects using fuzzy distance transform. A new theory and
algorithm for the iFDT are proposed here to compute the original fuzzy membership map from its fuzzy distance transform
map. The idea of iFDT is essential for directly applying the idea of shape-based interpolation to a fuzzy object
representation. The method is being tested on clinical data and compared with binary shape-based interpolation.
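For context, the classical binary shape-based interpolation that the paper generalizes can be sketched in a few lines: average the signed distance maps of two slices and re-threshold. This is our illustration only; the paper's fuzzy variant replaces the binary distance maps with the FDT and recovers memberships via the inverse FDT.

```python
import numpy as np

def dist_to_background(mask):
    """Brute-force Euclidean distance from each True pixel to the nearest
    False pixel (fine for tiny demos; real code would use a fast
    distance-transform routine such as scipy.ndimage's)."""
    ys, xs = np.nonzero(~mask)
    out = np.zeros(mask.shape)
    for i, j in zip(*np.nonzero(mask)):
        out[i, j] = np.sqrt(((ys - i) ** 2 + (xs - j) ** 2).min())
    return out

def interpolate_slice(a, b, t=0.5):
    """Classical binary shape-based interpolation: average the signed
    distance maps of two successive slices and re-threshold at zero."""
    sd_a = dist_to_background(a) - dist_to_background(~a)  # + inside, - outside
    sd_b = dist_to_background(b) - dist_to_background(~b)
    return (1 - t) * sd_a + t * sd_b > 0
```

Interpolating between a small and a large object this way yields a shape of intermediate extent, which is the behavior the fuzzy generalization preserves without requiring binarization.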
Evaluation of Brownian warps for shape alignment
Author(s):
Mads Nielsen
Show Abstract
Many methods are used for warping images to non-rigidly register
shapes and objects between medical images in inter- and
intra-patient studies. In landmark-based registration linear methods
like thin-plate- or b-splines are often used. These linear methods
suffer from a number of theoretical deficiencies: they may break or
tear apart the shapes, they are not source-destination symmetric, and
may not be invertible. Theoretically more satisfactory models using
diffeomorphic approaches like "Large Deformations" and "Brownian
warps" have earlier proved (in theory and practice) to remove these
deficiencies. In this paper we show that the maximum-likelihood
Brownian Warps also generalize better in the case of matching
fractured vertebrae to normal vertebrae. X-rays of 10 fractured and 1
normal vertebrae have been annotated by a trained radiologist by 6
so-called height points used for fracture scoring, and by the full
boundary. The fractured vertebrae have been registered to the normal
vertebra using only the 6 height points as landmarks. After
registration the Hausdorff distance between the boundaries is
measured. The registrations based on Brownian warps show a
significantly lower distance to the original boundary.
Statistical group differences in anatomical shape analysis using Hotelling T2 metric
Author(s):
Martin Styner;
Ipek Oguz;
Shun Xu;
Dimitrios Pantazis;
Guido Gerig
Show Abstract
Shape analysis has become of increasing interest to the neuroimaging community due to its potential to precisely
locate morphological changes between healthy and pathological structures. This manuscript presents a
comprehensive set of tools for the computation of 3D structural statistical shape analysis. It has been applied
in several studies on brain morphometry, but can potentially be employed in other 3D shape problems. Its main
limitation is the necessity of spherical topology.
The input of the proposed shape analysis is a set of binary segmentations of a single brain structure, such
as the hippocampus or caudate. These segmentations are converted into a corresponding spherical harmonic
description (SPHARM), which is then sampled into triangulated surfaces (SPHARM-PDM). After alignment,
differences between groups of surfaces are computed using the Hotelling T2 two sample metric. Statistical p-values,
both raw and corrected for multiple comparisons, result in significance maps. Additional visualization
of the group tests are provided via mean difference magnitude and vector maps, as well as maps of the group
covariance information.
The correction for multiple comparisons is performed via two separate methods that each have a distinct
view of the problem. The first one aims to control the family-wise error rate (FWER) or false-positives via the
extrema histogram of non-parametric permutations. The second method controls the false discovery rate and
results in a less conservative estimate of the false-negatives.
Prior versions of this shape analysis framework have been applied already to clinical studies on hippocampus
and lateral ventricle shape in adult schizophrenics. The novelty of this submission is the use of the Hotelling T2
two-sample group difference metric for the computation of a template free statistical shape analysis. Template
free group testing allowed this framework to become independent of any template choice and considerably improved
the sensitivity of our method. In addition to our existing correction methodology for the multiple
comparison problem using non-parametric permutation tests, we have extended the testing framework to include
False Discovery Rate (FDR). FDR provides a significance correction with higher sensitivity while allowing an
expected minimal amount of false-positives compared to our prior testing scheme.
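The per-vertex group test is the standard two-sample Hotelling T² statistic on 3D surface coordinates, which can be sketched directly (our illustration of the metric, not the authors' pipeline):

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling T^2 statistic at one surface vertex.
    x and y are (subjects x 3) arrays of corresponding vertex
    coordinates for the two groups."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n1, n2 = len(x), len(y)
    diff = x.mean(axis=0) - y.mean(axis=0)
    # pooled sample covariance of the two groups
    s = ((n1 - 1) * np.cov(x, rowvar=False) +
         (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    return (n1 * n2) / (n1 + n2) * diff @ np.linalg.solve(s, diff)
```

Evaluating this at every vertex gives the raw statistic map; the permutation-based FWER and FDR procedures described above then convert it into corrected significance maps.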
A simplified motion model for estimating respiratory motion from orbiting views
Author(s):
Rongping Zeng;
Jeffrey A. Fessler;
James M. Balter
Show Abstract
We have shown previously that the internal motion caused by a patient's breathing can be estimated from a
sequence of slowly rotating 2D cone-beam X-ray projection views and a static prior of the patient's anatomy [1, 2].
The estimator iteratively updates a parametric 3D motion model so that the modeled projection views of the
deformed reference volume best match the measured projection views. Complicated motion models with many
degrees of freedom may better describe the real motion, but the optimizations associated with them may overfit
noise and may be easily trapped by local minima due to the large number of parameters. We believe the latter problem
can be solved by offering the optimization algorithm a good starting point within the valley containing
the global minimum point. Therefore, we propose to start the motion estimation with a simplified motion
model, in which we assume the displacement of each voxel at any time is proportional to the full movement of
that voxel from extreme exhale to extreme inhale. We first obtain the full motion by registering two breath-hold
CT volumes at end-expiration and end-inspiration. We then estimate a sequence of scalar displacement
proportionality parameters. Thus the goal simplifies to finding a motion amplitude signal. This estimation
problem can be solved quickly using the exhale reference volume and projection views with coarse (downsampled)
resolution, while still providing acceptable estimation accuracy. The estimated simple motion then can be used
to initialize a more complicated motion estimator.
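The simplified model itself is a one-parameter-per-time-point deformation: each voxel's displacement is its full exhale-to-inhale displacement scaled by a single amplitude. A minimal sketch of the model (our illustration, not the authors' estimator):

```python
import numpy as np

def deform(ref_positions, full_displacement, amplitude):
    """Simplified respiratory motion model: each voxel moves along its
    exhale-to-inhale displacement vector, scaled by one scalar
    amplitude a(t) in [0, 1] (0 = full exhale, 1 = full inhale), so
    motion estimation reduces to recovering one amplitude signal."""
    return np.asarray(ref_positions, float) + amplitude * np.asarray(full_displacement, float)
```

Fitting only the scalar amplitude per projection view is what makes the coarse-resolution estimation fast and well-conditioned enough to initialize the richer motion model.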
Continuous criterion for parallel MRI reconstruction using B-spline approximation (PROBER)
Author(s):
Jan Petr;
Jan Kybic
Show Abstract
Parallel MRI is a way to use multiple receiver coils with distinct spatial sensitivities to increase the speed of
the MRI acquisition. The acquisition is sped up by undersampling in the phase-encoding direction, and the
resulting data loss and aliasing is compensated for by the use of the additional information obtained from several
receiver coils.
The task is to reconstruct an unaliased image from a series of aliased images. We have proposed an algorithm
called PROBER that takes advantage of the smoothness of the reconstruction transformation in space. B-spline
functions are used to approximate the reconstruction transformation. Their coefficients are estimated at once
by minimizing the total expected reconstruction error. This makes the reconstruction less sensitive to noise in the
reference images and areas without signal in the image. We show that this approach outperforms the SENSE
and GRAPPA reconstruction methods for certain coil configurations.
In this article, we propose another improvement, consisting of a continuous representation of the B-splines to
evaluate the error instead of the discretely sampled version. This solves the undersampling issues in the discrete
B-spline representation and offers higher reconstruction quality which has been confirmed by experiments. The
method is compared with the discrete version of PROBER and with commercially used algorithms GRAPPA
and SENSE in terms of artifact suppression and reconstruction SNR.
Consistent realignment of 3D diffusion tensor MRI eigenvectors
Author(s):
Mirza Faisal Beg;
Ryan Dickie;
Gregory Golds;
Laurent Younes
Show Abstract
Diffusion tensor MR image data gives at each voxel in the image a symmetric, positive definite matrix that is
denoted as the diffusion tensor at that voxel location. The eigenvectors of the tensor represent the principal
directions of anisotropy in water diffusion. The eigenvector with the largest eigenvalue indicates the local orientation
of tissue fibers in 3D as water is expected to diffuse preferentially up and down along the fiber tracts.
Although there is no anatomically valid positive or negative direction to these fiber tracts, for many applications,
it is of interest to assign an artificial direction to the fiber tract by choosing one of the two signs of the principal
eigenvector in such a way that in local neighborhoods the assigned directions are consistent and vary smoothly
in space.
We demonstrate here an algorithm for realigning the principal eigenvectors by flipping their sign such that it
assigns a locally consistent and spatially smooth fiber direction to the eigenvector field based on a Monte-Carlo
algorithm adapted from updating clusters of spin systems. We present results that show the success of this
algorithm on 11 available unsegmented canine cardiac volumes of both healthy and failing hearts.
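The consistency criterion can be illustrated on a 1D chain of eigenvectors: flip each vector's sign so that it agrees with its neighbor. The paper uses a Monte-Carlo spin-system update over 3D neighborhoods; the greedy pass below only sketches the criterion, not that algorithm:

```python
import numpy as np

def align_signs(vectors):
    """Greedy sign assignment for a 1-D chain of principal eigenvectors:
    flip each vector so its dot product with the previous one is
    positive, giving a locally consistent 'direction' along the chain."""
    out = np.array(vectors, float)
    for i in range(1, len(out)):
        if np.dot(out[i], out[i - 1]) < 0:
            out[i] = -out[i]
    return out
```

Since either sign of an eigenvector is anatomically valid, only this local agreement (not any global orientation) is meaningful.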
Partial volume correction of magnetic resonance spectroscopic imaging
Author(s):
Yao Lu;
Dee Wu;
Vincent A. Magnotta
Show Abstract
The ability to study the biochemical composition of the brain is becoming important to better understand
neurodegenerative and neurodevelopmental disorders. Magnetic Resonance Spectroscopy (MRS) can non-invasively
provide quantification of brain metabolites in localized regions. The reliability of MRS is limited in part due to partial
volume artifacts. This results from the relatively large voxels that are required to acquire sufficient signal-to-noise ratios
for the studies. Partial volume artifacts result when a MRS voxel contains a mixture of tissue types. Concentrations of
metabolites vary from tissue to tissue. When a voxel contains a heterogeneous tissue composition, the spectroscopic
signal acquired from this voxel will consist of the signal from different tissues making reliable measurements difficult.
We have developed a novel tool for the estimation of partial volume tissue composition within MRS voxels thus
allowing for the correction of partial volume artifacts. In addition, the tool can localize MR spectra to anatomical regions
of interest. The tool uses tissue classification information acquired as part of a structural MR scan for the same subject.
The tissue classification information is co-registered with the spectroscopic data. The user can quantify the partial
volume composition of each voxel and use this information as covariates for metabolite concentrations.
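Once each MRS voxel's tissue fractions are known, per-tissue metabolite signals can be recovered by treating the measured voxel signals as fraction-weighted mixtures. A minimal least-squares sketch of that correction (our illustration of the idea, not the tool's implementation):

```python
import numpy as np

def tissue_concentrations(fractions, signals):
    """Least-squares estimate of per-tissue metabolite signal from MRS
    voxels with mixed tissue content: signals ~ fractions @ conc, where
    fractions[v, t] is the fraction of tissue t in voxel v, taken from
    the co-registered structural segmentation."""
    conc, *_ = np.linalg.lstsq(np.asarray(fractions, float),
                               np.asarray(signals, float), rcond=None)
    return conc
```

With two pure voxels and one 50/50 voxel, the fit recovers the underlying per-tissue values exactly, which is the sense in which the partial volume artifact is "corrected".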
An accurate tongue tissue strain synthesis using pseudo-wavelet reconstruction-based tagline detection
Author(s):
Xiaohui Yuan;
Cengizhan Ozturk;
Gloria Chi-Fishman
Show Abstract
This paper describes our work on tagline detection and tissue strain synthesis. The tagline detection method
extends our previous work [16] using pseudo-wavelet reconstruction. The novelty in tagline detection is that we
integrated an active contour model and successfully improved the detection and indexing performance. Using
the pseudo-wavelet reconstruction-based method, prominent wavelet coefficients were retained while others were
eliminated. Taglines were then extracted from the reconstructed images using thresholding. Due to noise and
artifacts, a tagline can be broken into segments. We employed an active contour model that tracks the most
likely segments and bridges them. Experiments demonstrated that our method extracts taglines automatically
with greater robustness. Tissue strain was also reconstructed using extracted taglines.
Characterization of pulmonary nodules on computer tomography (CT) scans: the effect of additive white noise on features selection and classification performance
Author(s):
Teresa Osicka;
Matthew T. Freedman M.D.;
Farid Ahmed
Show Abstract
The goal of this project is to use computer analysis to classify small lung nodules, identified on CT, into likely benign
and likely malignant categories. We compared discrete wavelet transforms (DWT) based features and a modification of
classical features used and reported by others. To determine the best combination of features for classification, several
intensities of white noise were added to the original images to determine the effect of such noise on classification
accuracy. Two different approaches were used to determine the effect of noise: in the first method the best features for
classification of nodules on the original image were retained as noise was added. In the second approach, we
recalculated the results to reselect the best classification features for each particular level of added noise. The CT images
are from the National Lung Screening Trial (NLST) of the National Cancer Institute (NCI). For this study, nodules were
extracted in window frames of three sizes. Malignant nodules were cytologically or histologically diagnosed, while benign
nodules had two-year follow-up. A linear discriminant analysis with Fisher criterion (FLDA) approach was used for feature
selection and classification, and a decision matrix for matched samples was used to compare classification accuracy. The initial
feature mode revealed sensitivity to both the amount of noise and the size of the window frame. The recalculated feature
mode proved more robust to noise with no change in terms of classification accuracy. This indicates that the best
features for computer classification of lung nodules will differ with noise, and, therefore, with exposure.
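The Fisher criterion used for feature ranking scores each candidate feature by between-class separation over within-class scatter. A minimal single-feature sketch (our illustration; the paper applies the criterion within a full FLDA):

```python
import numpy as np

def fisher_score(feature_benign, feature_malignant):
    """Fisher criterion for one feature: squared difference of class
    means divided by the sum of class variances. Higher scores mean
    better class separation, so such features are selected first."""
    a = np.asarray(feature_benign, float)
    b = np.asarray(feature_malignant, float)
    return (a.mean() - b.mean()) ** 2 / (a.var() + b.var())
```

Added noise inflates the within-class variances in the denominator, which is why the best-scoring features shift as the noise level changes.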
A machine learning approach for interactive lesion segmentation
Author(s):
Yuanzhong Li;
Shoji Hara;
Wataru Ito;
Kazuo Shimura
Show Abstract
In this paper, we propose a novel machine learning approach for interactive lesion segmentation on CT and MRI images.
Our approach consists of training process and segmenting process. In training process, we train AdaBoosted histogram
classifiers to classify true boundary positions and false ones on the 1-D intensity profiles of lesion regions. In segmenting
process, given a marker indicating a rough location of a lesion, the proposed solution segments its region automatically
by using the trained AdaBoosted histogram classifiers. If there are imperfections in the segmented result, the solution
performs the segmentation again based on a correct location designated by the user and produces a new, satisfactory result. There
are two novelties in our approach. The first is that we use AdaBoost in the training process to learn diverse intensity
distributions of lesion regions, and utilize the trained classifiers successfully in segmenting process. The second is that
we present a reliable and user-friendly way in segmenting process to rectify the segmented result interactively. Dynamic
programming is used to find a new optimal path. Experimental results show our approach can segment lesion regions
successfully, despite the diverse intensity distributions of the lesion regions, marker location variability and lesion region
shape variability. Our framework is also generic and can be applied for blob-like target segmentation with diverse
intensity distributions in other applications.
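The training step above boosts weak classifiers that separate true boundary positions from false ones. As an illustration, here is a minimal AdaBoost with threshold stumps on a single 1-D feature; this is a generic sketch, not the paper's histogram weak learners, and the toy data are invented:

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Minimal AdaBoost with threshold stumps on a 1-D feature.

    X: (n,) feature values (e.g. intensity along a profile);
    y: labels in {-1, +1} (false vs. true boundary position).
    Returns a list of (alpha, threshold, sign) weighted stumps.
    """
    n = len(X)
    w = np.full(n, 1.0 / n)            # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for t in np.unique(X):          # exhaustive stump search
            for sign in (+1, -1):
                pred = np.where(X > t, sign, -sign)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # stump weight
        pred = np.where(X > t, sign, -sign)
        w *= np.exp(-alpha * y * pred)          # re-weight samples
        w /= w.sum()
        stumps.append((alpha, t, sign))
    return stumps

def predict(stumps, X):
    """Sign of the weighted vote of all stumps."""
    score = sum(a * np.where(X > t, s, -s) for a, t, s in stumps)
    return np.sign(score)

# Toy data: high intensities mark boundary candidates
X = np.array([0.1, 0.2, 0.3, 0.7, 0.8, 0.9])
y = np.array([-1, -1, -1, 1, 1, 1])
model = train_adaboost_stumps(X, y, n_rounds=5)
```

In the real setting, each profile position would contribute a feature vector rather than a scalar, and the weak learners would be histogram classifiers as in the paper.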
Blood vessel classification into arteries and veins in retinal images
Author(s):
Claudia Kondermann;
Daniel Kondermann;
Michelle Yan
Show Abstract
The prevalence of diabetes is expected to increase dramatically in coming years; already today it accounts for
a major proportion of the health care budget in many countries. Diabetic Retinopathy (DR), a microvascular
complication very often seen in diabetes patients, is the most common cause of visual loss in the working-age
population of developed countries today. Since the possibility of slowing or even stopping the progress of this disease
depends on the early detection of DR, an automatic analysis of fundus images would be of great help to the
ophthalmologist due to the small size of the symptoms and the large number of patients. An important symptom
for DR are abnormally wide veins leading to an unusually low ratio of the average diameter of arteries to veins
(AVR). There are also other diseases like high blood pressure or diseases of the pancreas with one symptom being
an abnormal AVR value. To determine it, a classification of vessels as arteries or veins is indispensable. As to our
knowledge despite the importance there have only been two approaches to vessel classification yet. Therefore we
propose an improved method. We compare two feature extraction methods and two classification methods based
on support vector machines and neural networks. Given a hand-segmentation of vessels our approach achieves
95.32% correctly classified vessel pixels. This value decreases by 10% on average, if the result of a segmentation
algorithm is used as basis for the classification.
Structural quantification of cartilage changes using statistical parametric mapping
Author(s):
José Gerardo Tamez-Peña;
Monica Barbu-McInnis;
Saara Totterman
Show Abstract
The early detection of Osteoarthritis (OA) treatment efficacy requires monitoring of small changes in cartilage
morphology. Current approaches rely on carefully monitoring global cartilage parameters. However, they are not very
sensitive to the detection of focal morphological changes in cartilage structure. This work presents the use of the
statistical parametric mapping (SPM) for the detection and quantification of changes in cartilage morphology. The SPM
is computed by first registering the baseline and the follow-up three dimensional (3D) reconstructions of the cartilage
tissue. Once the registration is complete, the thickness change at every cartilage point is computed, which is followed
by a model based estimation of the variance of thickness error. The cartilage thickness change and the variance
estimations are used to compute the z-score map. The map is used to visualize and quantify significant changes in
cartilage thickness. The z-map quantification provides the area of significant changes, the associated volume of changes
as well as the average thickness of cartilage loss. Furthermore, thickness change distribution functions are normalized
to provide the probability distribution functions (PDF). The PDF can be used to understand and quantify the differences
among different treatment groups. The performance of the approach on simulated data and real subject data will be
presented.
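The z-score construction described above can be sketched in a few lines. The arrays, the measurement-error sigma, and the 1.96 significance threshold below are hypothetical stand-ins for the paper's registered thickness maps and model-based variance estimate:

```python
import numpy as np

def zscore_map(baseline, followup, sigma):
    """Point-wise z-score of cartilage thickness change.

    baseline, followup: registered thickness maps (mm);
    sigma: estimated standard deviation of thickness error (mm).
    """
    return (followup - baseline) / sigma

def significant_change(z, threshold=1.96):
    """Binary mask of points whose change exceeds the threshold."""
    return np.abs(z) > threshold

# Hypothetical example: focal 0.5 mm thinning in a 2.0 mm cartilage map
baseline = np.full((8, 8), 2.0)
followup = baseline.copy()
followup[2:4, 2:4] -= 0.5           # focal cartilage loss
z = zscore_map(baseline, followup, sigma=0.1)
mask = significant_change(z)
area_fraction = mask.mean()          # area of significant change (fraction)
mean_loss = (baseline - followup)[mask].mean()
```

The flagged area, its volume, and the mean loss over the mask correspond to the quantities the abstract reports from the z-map.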
Performance comparison of classifiers for differentiation among obstructive lung diseases based on features of texture analysis at HRCT
Author(s):
Youngjoo Lee;
Joon Beom Seo;
Bokyoung Kang;
Dongil Kim;
June Goo Lee;
Song Soo Kim;
Namkug Kim;
Suk Ho Kang
Show Abstract
The performance of classification algorithms for differentiating among obstructive lung diseases based on features from
texture analysis using HRCT (High Resolution Computerized Tomography) images was compared. HRCT can provide
accurate information for the detection of various obstructive lung diseases, including centrilobular emphysema,
panlobular emphysema and bronchiolitis obliterans. Features on HRCT images can be subtle, however, particularly in
the early stages of disease, and image-based diagnosis is subject to inter-observer variation. To automate the diagnosis
and improve the accuracy, we compared three types of automated classification systems, naïve Bayesian classifier,
ANN (Artificial Neural Net) and SVM (Support Vector Machine), based on their ability to differentiate among normal
lung and three types of obstructive lung diseases. To assess the performance and cross-validation of these three
classifiers, a 5-fold method with 5 randomly chosen groups was used. For a more robust result, each validation was
repeated 100 times. SVM showed the best performance, with 86.5% overall sensitivity, significantly different from the
other classifiers (one way ANOVA, p<0.01). We address the characteristics of each classifier affecting performance and
the issue of which classifier is the most suitable for clinical applications, and propose an appropriate method to choose
the best classifier and determine its optimal parameters for optimal disease discrimination. These results can be applied
to classifiers for differentiation of other diseases.
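The validation scheme above (5 random folds, repeated for robustness) can be sketched as follows. The `fit`/`predict` interfaces and the toy nearest-class-mean classifier are assumptions for illustration, not the classifiers compared in the paper:

```python
import numpy as np

def repeated_kfold_accuracy(X, y, fit, predict, k=5, repeats=100, seed=0):
    """Mean accuracy over `repeats` random k-fold splits.

    fit(X, y) returns a model; predict(model, X) returns labels
    (hypothetical interfaces).
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    accs = []
    for _ in range(repeats):
        idx = rng.permutation(n)          # randomly regroup samples
        folds = np.array_split(idx, k)
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            model = fit(X[train], y[train])
            accs.append(np.mean(predict(model, X[test]) == y[test]))
    return float(np.mean(accs))

# Toy example: nearest-class-mean classifier on a separable 1-D feature
def fit(Xtr, ytr):
    return {c: Xtr[ytr == c].mean() for c in np.unique(ytr)}

def predict(model, Xte):
    classes = np.array(sorted(model))
    means = np.array([model[c] for c in classes])
    return classes[np.argmin(np.abs(Xte[:, None] - means[None, :]), axis=1)]

X = np.concatenate([np.full(10, 0.0), np.full(10, 1.0)])
y = np.concatenate([np.zeros(10, int), np.ones(10, int)])
acc = repeated_kfold_accuracy(X, y, fit, predict, k=5, repeats=10)
```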
Organ analysis and classification using principal component and linear discriminant analysis
Author(s):
William H. Horsthemke;
Daniela S. Raicu
Show Abstract
Texture analysis and classification of soft tissues in Computed Tomography (CT) images
recently advanced with a new approach that disambiguates the checkerboard problem
where two distinctly different patterns produce identical co-occurrence matrices, but this
method quadruples the size of the feature space. The feature space size problem is
exacerbated by the use of varying sized texture operators for improving boundary
segmentation. Dimensionality reduction motivates this investigation into systematic
analysis of the power of feature categories (Haralick descriptors, distance, and direction)
to differentiate between soft tissues.
The within-organ variance explained by the individual components of feature categories
offers a ranking of their potential power for between-organ discrimination. This paper
introduces a technique for combining the Principal Component Analysis (PCA) results to
compare and visualize the explanatory power of features with varying window sizes. We
found that 1) the two Haralick features Cluster Tendency and Contrast contribute the
most; 2) as distance increases, its contribution to overall variance decreases; and 3)
direction is unimportant.
We also evaluated the proposed technique with respect to its classification power. Linear
Discriminant Analysis (LDA) and Decision Tree (DT) were used to produce two
classification models based on the reduced data set. We found that using PCA either fails
to improve or markedly degrades the classification performance of LDA as well as of the
DT model. Though feature extraction for classification shows no promise, the proposed
technique offers a systematic mechanism to compare feature reduction strategies for
varying window sizes as well as other measurement techniques.
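The per-component explained-variance ranking underlying this analysis can be sketched with a plain SVD. The toy feature matrix below is invented and merely mimics two correlated texture features plus an independent one:

```python
import numpy as np

def explained_variance_ratio(X):
    """Fraction of total variance carried by each principal component.

    X: (n_samples, n_features) matrix of texture features (e.g. Haralick
    descriptors at several distances/directions).
    """
    Xc = X - X.mean(axis=0)                      # center the features
    _, s, _ = np.linalg.svd(Xc, full_matrices=False)
    var = s ** 2                                  # variance along each PC
    return var / var.sum()

# Toy data: second feature is a noisy copy of the first, third is independent
rng = np.random.default_rng(0)
a = rng.normal(size=200)
X = np.column_stack([a, a + 0.01 * rng.normal(size=200),
                     rng.normal(size=200)])
ratio = explained_variance_ratio(X)
```

Summing each feature category's loadings over the leading components gives the kind of contribution ranking the paper visualizes across window sizes.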
Analyzing µCT images of bone specimen with wavelets and scaling indices: Which texture measure does better to depict the trabecular bone structure?
Author(s):
Christoph W. Raeth;
Jan Bauer;
Dirk Mueller;
Ernst J. Rummeny;
Thomas M. Link;
Sharmila Majumdar;
Felix Eckstein;
Roberto Monetti
Show Abstract
The visualisation and subsequent quantification of the inner bone structure plays an important role for better
understanding the disease- or drug-induced changes of the bone in the context of osteoporosis.
Scaling indices (SIM) are well suited to quantify these structures on a local level, especially to discriminate between
plate-like and rod-like structural elements. Local filters based on wavelets (WVL) are a standard technique in texture
analysis. So far, however, they have mainly been used for two-dimensional image data sets.
Here we extend the formalism of the spherical Mexican hat wavelets to the analysis of three-dimensional tomographic
images and evaluate its performance in comparison with scaling indices, histomorphometric measures and BMD.
µCT images with an isotropic resolution of 30 × 30 × 30 µm of a sample of 19 trabecular bone specimens of human thoracic
vertebrae were acquired. In addition, the bone mineral density was measured by QCT. The maximum compressive
strength (MCS) was determined in a biomechanical test.
Some wavelet-based as well as all scaling-index-based texture measures show a significantly higher correlation with
MCS (WVL: ρ²=0.54, SIM: ρ²=0.53-0.56) than BMD (ρ²=0.46), with slightly better correlations for SIM than
for WVL. The SIM and WVL results are comparable to, but not better than, those obtained with histomorphometric measures
(BV/TV: ρ²=0.45, Tr.N.: ρ²=0.67, Tr.Sp.: ρ²=0.67).
In conclusion, WVL and SIM techniques can successfully be applied to µCT image data. Since the two measures
characterize the image structures on a local scale, they offer the possibility to directly identify and discriminate rods and
sheets of the trabecular structure. This property may give new insights about the bone constituents responsible for the
mechanical strength.
Network patterns recognition for automatic dermatologic images classification
Author(s):
Costantino Grana;
Vanini Daniele M.D.;
Giovanni Pellacani M.D.;
Stefania Seidenari M.D.;
Rita Cucchiara
Show Abstract
In this paper we focus on the problem of automatic classification of melanocytic lesions, aiming at identifying the
presence of reticular patterns. The recognition of reticular lesions is an important step in the description of the
pigmented network, in order to obtain meaningful diagnostic information. Parameters like color, size or symmetry
could benefit from the knowledge of having a reticular or non-reticular lesion. The detection of network patterns is
performed with a three-step procedure. The first step is the localization of line points, by means of the line-point
detection algorithm first described by Steger. The second step is the linking of such points into a line considering
the direction of the line at its endpoints and the number of line points connected to these. Finally a third step
discards the meshes which could not be closed at the end of the linking procedure and those characterized by
anomalous values of area or circularity. The number of the valid meshes left and their area with respect to the whole
area of the lesion are the inputs of a discriminant function which classifies the lesions into reticular and non-reticular.
This approach was tested on two balanced (both sets are formed by 50 reticular and 50 non-reticular
images) training and testing sets. We obtained above 86% correct classification of the reticular and non-reticular
lesions on real skin images, with a specificity value never lower than 92%.
Orientation-weighted local Minkowski functionals in 3D for quantitative assessment of trabecular bone structure in the hip
Author(s):
H. F. Boehm;
H. Bitterling;
C. Weber;
V. Kuhn;
F. Eckstein;
M. Reiser
Show Abstract
Fragility fractures or pathologic fractures of the hip, i.e. fractures with no apparent trauma, represent the worst
complication in osteoporosis with a mortality close to 25% during the first post-traumatic year. Over 90% of hip
fractures result from falls from standing height. A substantial number of femoral fractures are initiated in the femoral
neck or the trochanteric regions which contain an internal architecture of trabeculae that are functionally highly
specialized to withstand the complex pattern of external and internal forces associated with human gait.
Prediction of the mechanical strength of bone tissue can be achieved by dedicated texture analysis of data obtained by
high resolution imaging modalities, e.g. computed tomography (CT) or magnetic resonance tomography (MRI). Since
in the case of the proximal femur, the connectivity, regional distribution and - most of all - the preferred orientation of
individual trabeculae change considerably within narrow spatial limits, it seems most reasonable to evaluate the femoral
bone structure on an orientation-weighted, local scale.
In past studies, we could demonstrate the advantages of topological analysis of bone structure using the Minkowski
Functionals in 3D on a global and on a local scale.
The current study was designed to test the hypothesis that the prediction of the mechanical competence of the proximal
femur by a new algorithm considering orientational changes of topological properties in the trabecular architecture is
feasible and better suited than conventional methods based on the measurement of the mineral density of bone tissue
(BMD).
A comparison of texture models for automatic liver segmentation
Author(s):
Mailan Pham;
Ruchaneewan Susomboon;
Tim Disney;
Daniela Raicu;
Jacob Furst
Show Abstract
Automatic liver segmentation from abdominal computed tomography (CT) images based on gray levels or shape alone is
difficult because of the overlap in gray-level ranges and the variation in position and shape of the soft tissues. To address
these issues, we propose an automatic liver segmentation method that utilizes low-level features based on texture
information; this texture information is expected to be homogeneous and consistent across multiple slices for the same
organ. Our proposed approach consists of the following steps: first, we perform pixel-level texture extraction; second, we
generate liver probability images using a binary classification approach; third, we apply a split-and-merge algorithm to
detect the seed set with the highest probability area; and fourth, we apply to the seed set a region growing algorithm
iteratively to refine the liver's boundary and get the final segmentation results. Furthermore, we compare the
segmentation results from three different texture extraction methods (Co-occurrence Matrices, Gabor filters, and Markov
Random Fields (MRF)) to find the texture method that generates the best liver segmentation. From our experimental
results, we found that the co-occurrence model led to the best segmentation, while the Gabor model led to the worst liver
segmentation. Moreover, co-occurrence texture features alone produced approximately the same segmentation results as
those produced when all the texture features from the combined co-occurrence, Gabor, and MRF models were used.
Therefore, in addition to providing an automatic model for liver segmentation, we also conclude that Haralick co-occurrence
texture features are the most significant texture characteristics in distinguishing the liver tissue in CT scans.
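The fourth step, growing the liver region from a high-probability seed, can be sketched as a simple 4-connected flood fill over the probability image. The threshold and the toy image below are assumptions for illustration, not the paper's parameters:

```python
import numpy as np
from collections import deque

def region_grow(prob, seeds, threshold=0.5):
    """Grow a region from seed pixels over a liver-probability image.

    prob: 2-D array of per-pixel liver probabilities; seeds: list of (r, c).
    A pixel joins the region if it is 4-connected to it and its probability
    exceeds `threshold`.
    """
    mask = np.zeros(prob.shape, dtype=bool)
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < prob.shape[0] and 0 <= nc < prob.shape[1]
                    and not mask[nr, nc] and prob[nr, nc] > threshold):
                mask[nr, nc] = True       # accept pixel, keep growing
                queue.append((nr, nc))
    return mask

# Hypothetical probability image: a bright 3x3 "liver" block in a 5x5 image
prob = np.zeros((5, 5))
prob[1:4, 1:4] = 0.9
mask = region_grow(prob, seeds=[(2, 2)])
```

Iterating this step with updated seeds, as the abstract describes, would progressively refine the liver boundary.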
The performance improvement of automatic classification among obstructive lung diseases on the basis of the features of shape analysis, in addition to texture analysis at HRCT
Author(s):
Youngjoo Lee;
Namkug Kim;
Joon Beom Seo;
JuneGoo Lee;
Suk Ho Kang
Show Abstract
In this paper, we proposed novel shape features to improve classification performance of differentiating obstructive lung
diseases, based on HRCT (High Resolution Computerized Tomography) images. The images were selected from HRCT
images, obtained from 82 subjects. For each image, two experienced radiologists selected rectangular ROIs with various
sizes (16x16, 32x32, and 64x64 pixels), representing each disease or normal lung parenchyma. Besides thirteen textural
features, we employed seven additional shape features: cluster shape features and top-hat transform features. To
evaluate the contribution of shape features for differentiation of obstructive lung diseases, several experiments were
conducted with two different types of classifiers and various ROI sizes. For automated classification, the Bayesian
classifier and support vector machine (SVM) were implemented. To assess the performance and cross-validation of the
system, a 5-fold method was used. In comparison to employing only textural features, adding shape features yields a
significant enhancement of overall sensitivity (5.9, 5.4, and 4.4% in the Bayesian and 9.0, 7.3, and 5.3% in the SVM, for
ROI sizes of 16x16, 32x32, and 64x64 pixels, respectively; t-test, p<0.01). Moreover, this enhancement was largely
due to the improvement in class-specific sensitivity for mild centrilobular emphysema and bronchiolitis obliterans, which
are the most difficult for radiologists to differentiate. According to these experimental results, adding shape features to
conventional texture features is very useful for improving the classification performance of obstructive lung diseases in both
Bayesian and SVM classifiers.
Boundary refined texture segmentation on liver biopsy images for quantitative assessment of fibrosis severity
Author(s):
Enmin Song;
Renchao Jin;
Yu Luo;
Xiangyang Xu;
Chih-Cheng Hung;
Jianqiang Du
Show Abstract
We applied a new texture segmentation algorithm to improve the segmentation of texture boundary areas in liver
needle biopsy images taken from microscopes, for the automatic assessment of liver fibrosis severity. In our preliminary
experiments, it was difficult to obtain satisfactory segmentation results on the boundary areas of textures with some of
the existing texture segmentation algorithms. The proposed algorithm consists of three steps. The first step is to apply the
K-View-datagram segmentation method to the image. The second step is to find a boundary set which is defined as a set
including all the pixels with more than half of its neighboring pixels being classified into clusters other than that of itself
by the K-View-datagram method. The third step is to apply a modified K-view template method with a small scanning
window to the boundary set to refine the segmentation. The algorithm was applied to the real liver needle biopsy images
provided by the hospitals in Wuhan, China. Initial experimental results show that this new segmentation algorithm gives
high segmentation accuracy and classifies the boundary areas better than the existing algorithms. It is a useful tool for
automatic assessment of liver fibrosis severity.
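The second step's boundary set has a direct implementation: flag every pixel whose 8-neighborhood disagrees with its own label in more than half of its neighbors. A minimal sketch follows; the label image below is invented, whereas the real input would come from the K-View-datagram step:

```python
import numpy as np

def boundary_set(labels):
    """Pixels with more than half of their neighbors in a different cluster.

    labels: 2-D array of cluster labels from an initial segmentation.
    Returns a boolean mask of the boundary set to be re-segmented.
    """
    h, w = labels.shape
    mask = np.zeros((h, w), dtype=bool)
    for r in range(h):
        for c in range(w):
            neighbors = []
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    if dr == dc == 0:
                        continue
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < h and 0 <= nc < w:
                        neighbors.append(labels[nr, nc])
            differing = sum(n != labels[r, c] for n in neighbors)
            mask[r, c] = differing > len(neighbors) / 2
    return mask

# An isolated misclassified pixel inside a uniform texture region
labels = np.zeros((4, 6), dtype=int)
labels[1, 2] = 1
bmask = boundary_set(labels)
```

Only pixels in this mask are then revisited with the small-window K-view template step.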
Application of the scaling index method to µCT images of human trabecular bone for the characterization of biomechanical strength
Author(s):
Roberto A. Monetti;
Jan Bauer;
Dirk Müller;
Ernst Rummeny;
Maiko Matsuura;
Felix Eckstein;
Thomas Link;
Christoph Räth
Show Abstract
Osteoporosis is a metabolic bone disorder characterized by the loss of bone mineral density (BMD) and the
deterioration of the bone micro-architecture. Rarefied bone structures are more susceptible to fractures which
are the worst complications of osteoporosis. Here, we apply a structure characterization method, namely the
Scaling Index Method, to micro-computed tomographic (µCT) images of the distal radius and extract 3D nonlinear
structure measures to assess the biomechanical properties of trabecular bone. Biomechanical properties
were quantified by the maximum compressive strength (MCS) obtained in a biomechanical test and bone mineral
density (BMD) was calculated using dual X-ray absorptiometry (DXA). µCT images allow for the application of
two different modalities of the SIM which differ in the dimensional embedding of the image. Both representations
lead to similar correlation coefficients with MCS which are significantly better than the ones obtained using
standard 3D morphometric parameters and comparable to the result given by BMD. The analysis of µCT
images based on the SIM allows for a sharp distinction of the different structural elements which compose the
trabecular bone network.
Comparative performance analysis of cervix ROI extraction and specular reflection removal algorithms for uterine cervix image analysis
Author(s):
Zhiyun Xue;
Sameer Antani;
L. Rodney Long;
Jose Jeronimo M.D.;
George R. Thoma
Show Abstract
Cervicography is a technique for visual screening of uterine cervix images for cervical cancer. One of our research goals
is the automated detection in these images of acetowhite (AW) lesions, which are sometimes correlated with cervical
cancer. These lesions are characterized by the whitening of regions along the squamocolumnar junction on the cervix
when treated with 5% acetic acid. Image preprocessing is required prior to invoking AW detection algorithms on
cervicographic images for two reasons: (1) to remove Specular Reflections (SR) caused by camera flash, and (2) to
isolate the cervix region-of-interest (ROI) from image regions that are irrelevant to the analysis. These image regions
may contain medical instruments, film markup, or other non-cervix anatomy or regions, such as vaginal walls. We have
qualitatively and quantitatively evaluated the performance of alternative preprocessing algorithms on a test set of 120
images. For cervix ROI detection, all approaches use a common feature set, but with varying combinations of feature
weights, normalization, and clustering methods. For SR detection, while one approach uses a Gaussian Mixture Model
on an intensity/saturation feature set, a second approach uses Otsu thresholding on a top-hat transformed input image.
Empirical results are analyzed to derive conclusions on the performance of each approach.
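The second SR-detection approach rests on Otsu's criterion of maximal between-class variance applied to a top-hat transformed image. A minimal sketch of the thresholding step follows; the input image here is an invented stand-in for a top-hat result, in which specular reflections appear as bright spots:

```python
import numpy as np

def otsu_threshold(img, nbins=256):
    """Otsu's threshold maximizing between-class variance.

    img: 2-D array of grey values in [0, 1]. Returns the upper edge of the
    optimal histogram bin; pixels strictly above it form the bright class.
    """
    hist, edges = np.histogram(img, bins=nbins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                    # class-0 weight up to each bin
    m = np.cumsum(p * centers)           # cumulative mean
    mT = m[-1]                           # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mT * w0 - m) ** 2 / (w0 * (1 - w0))
    between = np.nan_to_num(between)     # empty classes contribute nothing
    return edges[np.argmax(between) + 1]

# Hypothetical top-hat image: dark background with a few bright SR pixels
img = np.full((16, 16), 0.1)
img[4:6, 4:6] = 0.9
t = otsu_threshold(img)
sr_mask = img > t                        # candidate specular reflections
```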
Evaluation of accuracy and workflow between different alignment techniques for correction of CTAC and PET misalignment in cardiac PET-CT imaging
Author(s):
Elizabeth B. Philps;
Sarah J. Aivano
Show Abstract
Small errors in the alignment between CT Attenuation Correction (CTAC) images and Positron Emission Tomography
(PET) acquisitions can result in significant changes in PET attenuation corrected images. Misalignment due to
respiratory or cardiac motion can produce mismatch between the PET and CTAC acquisitions. This contributes to
artifactual hypoperfusion defects that can be misinterpreted as myocardial ischemia or infarct. Correction for the
misalignment between the PET and CTAC images can eliminate these false positive artifacts. Two methods for
correcting for this respiratory and cardiac misalignment were compared. The first was an existing procedure, the
manual-shift method, using point-to-point, in-plane, two-dimensional (2D) measurements of the shifts in axial, sagittal,
and coronal planes. A new PET image reconstruction using the corrected attenuation map shifted by the 2D
measurements was then performed. In the second method, the Interactive ACQC method, visual alignment was
performed between the left ventricle boundaries on fused images and automated calculation of necessary rigid three-dimensional
(3D) alignment parameters was performed. A new PET image reconstruction was then performed with an
attenuation map shifted by the prescribed alignment parameters. The two methods were compared for accuracy and
workflow efficiency using five cardiac PET/CT cases, scanned on GE Discovery VCT and Discovery ST systems.
Alignment measurements using the visual alignment process (the interactive ACQC method) reduced processing time by
over five minutes, on average. The results show that the interactive ACQC procedure yields similar results to those of
the point-to-point procedure while providing improved workflow for cardiac PET attenuation correction quality control.
An optimized 3D context model for JPEG2000 Part 10
Author(s):
T. Bruylants;
A. Alecu;
T. Kimpe;
R. Deklerck;
A. Munteanu;
P. Schelkens
Show Abstract
The JPEG2000 standard is currently widely adopted in medical and volumetric data compression. In this respect, a 3D
extension (JPEG2000 Part 10 - JP3D) is currently being standardized. However, no suitable 3D context model is yet
available within the standard, such that the context-based arithmetic entropy coder of JP3D still uses the 2D context
model of JPEG2000 Part 1. In this paper, we propose a context design algorithm that, based on a training set, generates
an optimized 3D context model, while avoiding an exhaustive search and at the same time keeping the space and time
complexities well within the limits of today's hardware. The algorithm comes as a solution for situations in which the
number of allowable initial contexts is very large. In this sense, the three-dimensional 3x3x3 context neighborhood
investigated in this paper is a good example of an instantiation that would have otherwise been computationally
unfeasible. Furthermore, we have designed a new 3D context model for JP3D. We show that the JP3D codec equipped
with this model consistently outperforms its 2D context model counterpart, for an extended test dataset. In this respect,
we report a gain in lossless compression performance of up to 10%. Moreover, for a large range of bitrates, we always
obtain gains in PSNR, sometimes of over 3 dB.
Compression of medical volumetric datasets: physical and psychovisual performance comparison of the emerging JP3D standard and JPEG2000
Author(s):
T. Kimpe;
T. Bruylants;
Y. Sneyders;
R. Deklerck;
P. Schelkens
Show Abstract
The size of medical data has increased significantly over the last few years. This poses severe problems for the rapid
transmission of medical data across the hospital network, resulting in longer image access times. Long-term
storage of data is also becoming more and more of a problem. In an attempt to cope with the increasing data size,
lossless or lossy compression algorithms are often used.
This paper compares the existing JPEG2000 compression algorithm and the new emerging JP3D standard for
compression of volumetric datasets. The main benefit of JP3D is that this algorithm truly is a 3D compression algorithm
that exploits correlation not only within but also in between slices of a dataset. We evaluate both lossless and lossy
modes of these algorithms.
As a first step we perform an objective evaluation. Using RMSE and PSNR metrics we determine which compression
algorithm performs best and this for multiple compression ratios and for several clinically relevant medical datasets. It
is well known that RMSE and PSNR often do not correlate well with subjectively perceived image quality. Therefore
we also perform a psychovisual analysis by means of a numerical observer. With this observer model we analyze how
compression artifacts actually are perceived by a human observer. Results show superior performance of the new JP3D
algorithm compared to the existing JPEG2000 algorithm.
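The objective metrics used in the first evaluation step are standard; for reference, a minimal sketch (the 2x2 arrays are invented, and `peak` assumes 8-bit data):

```python
import numpy as np

def rmse(ref, test):
    """Root-mean-square error between reference and decompressed data."""
    diff = ref.astype(float) - test.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB, with `peak` the maximum grey value."""
    e = rmse(ref, test)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

# Toy reference slice and its lossy-decompressed counterpart
ref = np.array([[100, 110], [120, 130]], dtype=np.uint8)
dec = np.array([[101, 109], [121, 129]], dtype=np.uint8)
```

As the abstract notes, these metrics are computed per compression ratio and dataset before the psychovisual analysis.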
An interactive toolbox for atlas-based segmentation and coding of volumetric images
Author(s):
G. Menegaz;
S. Luti;
V. Duay;
J.-Ph. Thiran
Show Abstract
Medical imaging poses the great challenge of having compression algorithms that are lossless for diagnostic and
legal reasons and yet provide high compression rates for reduced storage and transmission time. The images
usually consist of a region of interest representing the part of the body under investigation surrounded by a
"background", which is often noisy and not of diagnostic interest. In this paper, we propose a ROI-based 3D
coding system integrating both the segmentation and the compression tools. The ROI is extracted by an atlas
based 3D segmentation method combining active contours with information theoretic principles, and the resulting
segmentation map is exploited for ROI-based coding. The system is equipped with a GUI allowing medical
doctors to supervise the segmentation process and, if necessary, reshape the detected contours at any point. The
process is initiated by the user through the selection of either one pre-defined reference image or one image of
the volume to be used as the 2D "atlas". The object contour is successively propagated from one frame to the
next where it is used as the initial border estimation. In this way, the entire volume is segmented based on a
unique 2D atlas. The resulting 3D segmentation map is exploited for adaptive coding of the different image
regions. Two coding systems were considered: the JPEG3D standard and 3D-SPIHT. The evaluation of the
performance with respect to both segmentation and coding proved the high potential of the proposed system in
providing an integrated, low-cost and computationally effective solution for CAD and PAC systems.
A fast and efficient algorithm for volumetric medical data compression and retrieval
Author(s):
Linning Ye;
Jiangling Guo;
Sunanda Mitra;
Brian Nutter
Show Abstract
Two common approaches have been developed to compress volumetric medical data from sources such as magnetic
resonance imaging (MRI) and computed tomography (CT): (1) 2D-based compression methods, which compress each
image slice independently using 2D image codecs; and (2) 3D-based compression methods, which treat the data as true
volumetric data and compress using 3D image codecs. It has been shown that most 3D-based compression methods, such
as 3D-SPIHT, can achieve significantly higher compression quality than most 2D-based compression methods, such as
JPEG, JPEG-2000, and 2D-SPIHT. However, the compression/decompression speed is slow, and the high computational
complexity and high memory usage render 3D-based compressions difficult to implement in hardware. In this paper, we
propose a new 3D-based compression algorithm, 3D-BCWT, which is an extension to the computationally efficient
BCWT (Backward Coding of Wavelet Trees) algorithm [10]. 3D-BCWT not only can achieve the same high
compression quality as 3D-SPIHT does, but it can also provide extremely fast compression/decompression speed, low
complexity, and low memory usage, which are ideal for low-cost hardware and software implementations and for
compressing high resolution volumetric data. Moreover, 3D-BCWT also possesses the capabilities of progressive
transmission and decoding, such as progression of resolution and progression of quality, which are essential features for
efficient image retrieval from large online archives.
Perceptual coding of stereo endoscopy video for minimally invasive surgery
Author(s):
Guido Bartoli;
Gloria Menegaz;
Guang Zhong Yang
Show Abstract
In this paper, we propose a compression scheme that is tailored for stereo-laparoscope sequences. The inter-frame correlation
is modeled by the deformation field obtained by elastic registration between two subsequent frames and exploited
for prediction of the left sequence. The right sequence is lossy encoded by prediction from the corresponding left images.
Wavelet-based coding is applied to both the deformation vector fields and residual images. The resulting system supports
spatio-temporal scalability, while providing lossless performance. The implementation of the wavelet transform by integer
lifting ensures low computational complexity, thus reducing the required run-time memory allocation and enabling on-line
implementation.
Extensive psychovisual tests were performed for system validation and characterization with respect to the
MPEG4 standard for video coding. Results are very encouraging: the PSVC system features the functionalities making it
suitable for PACS while providing a good trade-off between usability and performance in lossy mode.
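The lossless property hinges on implementing the wavelet transform with integer lifting, which is exactly invertible. A minimal sketch of the reversible LeGall 5/3 lifting steps on a 1D integer signal (the paper's transform and boundary handling may differ; symmetric extension is assumed here):

```python
def lift_53_forward(x):
    """Reversible 5/3 lifting on an even-length integer signal.
    Returns (approximation s, detail d); all arithmetic is integer,
    so the inverse reconstructs x exactly."""
    even, odd = x[0::2], x[1::2]
    m = len(even)
    # predict step: detail = odd sample minus mean of even neighbours
    d = []
    for i in range(m):
        right = even[i + 1] if i + 1 < m else even[i]  # symmetric extension
        d.append(odd[i] - (even[i] + right) // 2)
    # update step: approximation = even sample plus rounded detail average
    s = []
    for i in range(m):
        dl = d[i - 1] if i >= 1 else d[0]
        s.append(even[i] + (dl + d[i] + 2) // 4)
    return s, d

def lift_53_inverse(s, d):
    """Undo the lifting steps in reverse order."""
    m = len(s)
    even = []
    for i in range(m):
        dl = d[i - 1] if i >= 1 else d[0]
        even.append(s[i] - (dl + d[i] + 2) // 4)
    x = []
    for i in range(m):
        right = even[i + 1] if i + 1 < m else even[i]
        x.extend([even[i], d[i] + (even[i] + right) // 2])
    return x
```

Because each lifting step is undone exactly by subtracting the same integer prediction, no rounding error accumulates, which is what makes lossy and lossless operation possible in one embedded bitstream.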
Deblurring of tomosynthesis images using 3D anisotropic diffusion filtering
Author(s):
Xuejun Sun;
Walker Land;
Ravi Samala
Show Abstract
Breast tomosynthesis is an emerging three-dimensional (3D) imaging technology that shows significant early promise in
breast cancer screening and diagnosis. However, because of its limited-angle acquisition, tomosynthesis images contain
significant out-of-plane artifacts that degrade image quality and may hinder interpretation. In this paper, we develop a
robust deblurring method that removes or suppresses these blurry artifacts by applying 3D nonlinear anisotropic
diffusion filtering. The differential equation of 3D anisotropic diffusion is discretized with explicit and implicit
numerical methods, respectively, combined with first-kind (fixed grey value) and second-kind (adiabatic) boundary
conditions on a ten-nearest-neighbor finite difference grid. The discretized diffusion equation is applied to the breast
volume reconstructed from the complete set of tomosynthesis images. The proposed diffusion filtering method is
evaluated qualitatively and quantitatively on clinical tomosynthesis images. Results indicate that the proposed method
is effective in suppressing the blurry artifacts, and that the implicit numerical algorithm with the fixed-value boundary
condition performs best at enhancing the contrast of the tomosynthesis image, demonstrating the effectiveness of
the proposed filtering method in deblurring the out-of-plane artifacts.
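One explicit update of the kind described can be sketched as follows: a Perona-Malik-style 3D diffusion step on a six-neighbour stencil with a zero-flux (adiabatic) boundary. This is an illustrative stand-in; the paper's ten-point stencil, implicit scheme, and exact conductivity function are not reproduced:

```python
import numpy as np

def diffusion_step(u, kappa=10.0, dt=0.1):
    """One explicit 3D anisotropic diffusion step (6-neighbour stencil).
    Edge padding replicates the border voxel, so the flux across the
    boundary is zero -- the 'adiabatic' (second-kind) boundary condition."""
    p = np.pad(u, 1, mode='edge')
    centre = p[1:-1, 1:-1, 1:-1]
    flux = np.zeros_like(u, dtype=float)
    for ax in range(3):
        for shift in (1, -1):
            nb = np.roll(p, shift, axis=ax)[1:-1, 1:-1, 1:-1]
            diff = nb - centre
            g = np.exp(-(diff / kappa) ** 2)  # Perona-Malik conductivity
            flux += g * diff                  # small gradients diffuse, strong edges persist
    return u + dt * flux
```

Iterating this step smooths low-contrast blur while the conductivity `g` falls off at strong edges, which is how anisotropic diffusion suppresses out-of-plane artifacts without flattening in-plane structure. The time step `dt=0.1` satisfies the explicit stability limit for a 6-neighbour 3D stencil; an implicit scheme, as the abstract notes, removes that restriction.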
Digital tomosynthesis mammography: intra- and interplane artifact reduction for high-contrast objects on reconstructed slices using a priori 3D geometrical information
Author(s):
Jun Ge;
Heang-Ping Chan;
Berkman Sahiner;
Yiheng Zhang;
Jun Wei;
Lubomir M. Hadjiiski;
Chuan Zhou
Show Abstract
We are developing a computerized technique to reduce intra- and interplane ghosting artifacts caused by high-contrast
objects such as dense microcalcifications (MCs) or metal markers on the reconstructed slices of digital
tomosynthesis mammography (DTM). In this study, we designed a constrained iterative artifact reduction method
based on a priori 3D information of individual MCs. We first segmented individual MCs on projection views (PVs)
using an automated MC detection system. The centroid and the contrast profile of the individual MCs in the 3D breast
volume were estimated from the backprojection of the segmented individual MCs on high-resolution (0.1 mm isotropic
voxel size) reconstructed DTM slices. An isolated volume of interest (VOI) containing one or a few MCs was then
modeled as a high-contrast object embedded in a locally homogeneous background. A shift-variant 3D impulse response
matrix (IRM) of the projection-reconstruction (PR) system for the extracted VOI was calculated using the DTM
geometry and the reconstruction algorithm. The PR system for this VOI is characterized by a system of linear equations.
A constrained iterative method was used to solve these equations for the effective linear attenuation coefficients (eLACs)
within the isolated VOI. Spatial constraint and positivity constraint were used in this method. Finally, the intra- and
interplane artifacts on the whole breast volume resulting from the MC were calculated using the corresponding impulse
responses and subsequently subtracted from the original reconstructed slices.
The performance of our artifact-reduction method was evaluated using a computer-simulated MC phantom, as
well as phantom images and patient DTMs obtained with IRB approval. A GE prototype DTM system that acquires 21
PVs in 3° increments over a ±30° range was used for image acquisition in this study. For the computer-simulated MC
phantom, the eLACs could be estimated accurately, and thus the interplane artifacts were effectively removed. For MCs in
phantom and patient DTMs, our method reduced the artifacts but also created small over-corrected areas in some cases.
Potential reasons for this may include: the simplified mathematical modeling of the forward projection process, and the
amplified noise in the solution of the system of linear equations.
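The constrained iterative solution of the linear PR system can be illustrated with a projected Landweber iteration that enforces positivity. This is a hypothetical stand-in on a toy system; the authors' actual solver, spatial constraint, and impulse response matrix construction are not specified in the abstract:

```python
import numpy as np

def projected_landweber(A, b, n_iter=3000):
    """Solve A x ~= b for nonnegative x by gradient steps on the
    least-squares cost, projecting onto x >= 0 after each step
    (the positivity constraint mentioned in the abstract)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # step below 2/||A||^2 guarantees convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += step * A.T @ (b - A @ x)  # gradient step on ||Ax - b||^2 / 2
        np.maximum(x, 0.0, out=x)      # positivity constraint
    return x

# toy consistent system: recover known nonnegative coefficients
rng = np.random.default_rng(0)
A = rng.random((20, 5))        # stand-in for the impulse response matrix
x_true = rng.random(5)         # stand-in for the eLACs within the VOI
x_hat = projected_landweber(A, A @ x_true)
```

The projection keeps every iterate physically meaningful (attenuation coefficients cannot be negative), which also regularizes the solution against the noise amplification the abstract mentions.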
Visual enhancement of interval changes using a temporal subtraction technique
Author(s):
Dieter Seghers;
Dirk Loeckx;
Frederik Maes;
Paul Suetens
Show Abstract
Temporal subtraction is a visual enhancement technique to improve the detection of pathological changes from
medical images acquired at different times. Prior to subtracting a previous image from a current image, a nonrigid
warping of the two images might be necessary. Because nonrigid warping may change the size of pathological
lesions, the subtraction image can be misleading. In this paper we present an alternative subtraction technique
that avoids this problem: instead of subtracting the intensities of corresponding voxels directly, a convolution filter is applied
to both images prior to subtraction. The technique is demonstrated for computed tomography images of the
lungs. It is shown that this method results in an improved visual enhancement of changing nodules compared
with the conventional subtraction technique.
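The idea (filter both images with the same convolution kernel, then subtract) can be sketched as follows, using a simple separable box filter as a stand-in for the paper's unspecified kernel:

```python
import numpy as np

def box_blur(img, r=2):
    """Separable box filter of radius r with edge padding
    (a stand-in for the paper's convolution kernel)."""
    p = np.pad(img, r, mode='edge')
    k = 2 * r + 1
    for ax in (0, 1):
        p = sum(np.roll(p, s, axis=ax) for s in range(-r, r + 1)) / k
    return p[r:-r, r:-r]

def filtered_subtraction(current, previous, r=2):
    """Subtract low-pass filtered images instead of raw voxel intensities,
    so small residual misalignments do not produce sharp false differences."""
    return box_blur(current, r) - box_blur(previous, r)
```

Because both images pass through the same low-pass kernel, a lesion that has grown still produces a clear positive blob, while subvoxel registration errors that would dominate a raw intensity subtraction are largely averaged away.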