This PDF file contains the front matter associated with SPIE Proceedings Volume 6918, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
With the advent of large, high-quality stereo display monitors and high-volume 3-D image acquisition sources, it is time
to revisit the use of 3-D display for diagnostic radiology.
Stereo displays may be goggled or goggleless. Goggleless displays are called autostereographic displays. We
concentrate on autostereographic technologies. Commercial LCD flat-screen 3-D autostereographic monitors typically
rely on one of two techniques: blocked perspective and integral display.
On the acquisition modality side: MRI, CT and 3-D ultrasound provide 3-D data sets. However, helical/spiral CT with
multi-row detectors and multiple x-ray sources provides a monsoon of data. Presenting and analyzing this large amount
of potentially dynamic data will require advanced presentation techniques.
We begin with a very brief review of the two stereo-display technologies. These displays are evolving beyond presentation
of the traditional pair of views directed to fixed positions of the eyes to multi-perspective displays; at differing head
positions, the eyes are presented with the proper perspective pairs corresponding to viewing a 3-D object from that
position. In addition, we will look at some of the recent developments in computer-generated holograms (CGHs). CGH
technology differs from the other two technologies in that it provides a wave-optically correct reproduction of the object.
We then move to examples of stereo-displayed medical images and examine some of the potential strengths and
weaknesses of the displays. We have installed a commercial stereo-display in our laboratory and are in the process of
generating stereo-pairs of CT data. We are examining, in particular, preprocessing of the perspective data.
In x-ray based imaging, attenuation depends on the type of tissue scanned and the average energy level of the x-ray
beam, which can be adjusted via the x-ray tube potential. Conventional computed tomography (CT) imaging uses a
single kV value, usually 120kV. Dual energy CT uses two different tube potentials (e.g. 80kV & 140kV) to obtain two
image datasets with different attenuation characteristics. This difference in attenuation levels allows for classification of
the composition of the tissues. In addition, the different energies significantly influence the contrast resolution and noise
characteristics of the two image datasets. 80kV images provide greater contrast resolution than 140kV, but are limited
because of increased noise. While dual-energy CT may provide useful clinical information, the question arises as to how
to best realize and visualize this benefit. In conventional single energy CT, patient image data is presented to the
physicians using well-understood, organ-specific window and level settings. Instead of viewing two data series (one for
each tube potential), the images are most often fused into a single image dataset using a linear mixing of the data with a
70% 140kV and a 30% 80kV mixing ratio, as available on one commercial system. This ratio provides a reasonable
representation of the anatomy/pathology; however, due to the linear nature of the blending, the advantages of each dataset
(contrast or sharpness) are partially offset by its drawbacks (blurring or noise). This project evaluated a variety of
organ-specific linear and non-linear mixing algorithms to optimize the blending of the low and high kV information for display
in a way that combines the benefits (contrast and sharpness) of both energies in a single image. A blinded review
analysis by subspecialty abdominal radiologists found that unique, tunable, non-linear mixing algorithms that we
developed outperformed linear, fixed mixing for a variety of different organs and pathologies of interest.
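To contrast the fixed linear mix described above with a tunable non-linear alternative, the sketch below blends two hypothetical pixel series; the sigmoid weighting and its `center` and `width` parameters are illustrative assumptions, not the algorithms evaluated in the paper.

```python
import math

def linear_blend(low_kv, high_kv, w_low=0.3):
    """Fixed linear mix: 30% of the 80kV data plus 70% of the 140kV data,
    the conventional ratio mentioned above."""
    return [w_low * lo + (1.0 - w_low) * hi for lo, hi in zip(low_kv, high_kv)]

def sigmoid_blend(low_kv, high_kv, center=150.0, width=50.0):
    """Illustrative non-linear mix: weight the high-contrast 80kV data more
    strongly in a tunable intensity band, falling back to the lower-noise
    140kV data elsewhere.  `center` and `width` are hypothetical tuning
    parameters, not values from the paper."""
    out = []
    for lo, hi in zip(low_kv, high_kv):
        w = 1.0 / (1.0 + math.exp(-(lo - center) / width))  # weight for 80kV
        out.append(w * lo + (1.0 - w) * hi)
    return out
```

Because the weight varies per pixel, the non-linear blend can favor contrast in one intensity range and noise suppression in another, which a single fixed ratio cannot.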
In recent years, the number and utility of 3-D rendering frameworks has grown substantially. A quantitative and
qualitative evaluation of the capabilities of a subset of these systems is important to determine the applicability
of these methods to typical medical visualization tasks. The libraries evaluated in this paper include the Java3D
Application Programming Interface (API), Java OpenGL (Jogl) API, a multi-histogram software-based rendering
method, and the WildMagic API. Volume renderer implementations using each of these frameworks were
developed using the platform-independent Java programming language. Quantitative performance measurements
(frames per second, memory usage) were used to evaluate the strengths and weaknesses of each implementation.
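A frames-per-second measurement of the kind used in the evaluation can be sketched as a generic timing loop; this harness is our own simplification, not taken from the paper's Java implementations.

```python
import time

def measure_fps(render_frame, duration=1.0):
    """Call `render_frame` (a stand-in for one volume-rendering pass)
    repeatedly for `duration` seconds and report frames per second."""
    frames = 0
    start = time.perf_counter()
    while time.perf_counter() - start < duration:
        render_frame()
        frames += 1
    return frames / (time.perf_counter() - start)
```

Averaging over a fixed wall-clock window rather than timing a single frame smooths out per-frame jitter from garbage collection or driver overhead.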
Groupwise registration and statistical analysis of medical images are of fundamental importance in computational
anatomy, where healthy and pathologic anatomies are compared relative to their differences with a common
template. Accuracy of such approaches is primarily determined by the ability to find perfectly conforming
shape transformations, which is rarely achieved in practice due to algorithmic limitations arising from biological
variability. The amount of residual information not reflected by the transformation is, in fact, dictated by
template selection and is lost permanently from subsequent analysis. In general, an attempt to aggressively
minimize the residual results in biologically incorrect correspondences, necessitating a certain level of regularity in
the transformation at the cost of accuracy.
In this paper, we introduce a framework for groupwise registration and statistical analysis of biomedical images
that optimally fuses the information contained in a diffeomorphism and the residual to achieve completeness of
representation. Since the degree of information retained in the residual depends on transformation parameters
such as the level of regularization, and template selection, our approach consists of forming an equivalence class
for each individual, thereby representing them via nonlinear manifolds embedded in high dimensional space. By
employing a minimum variance criterion and constraining the optimization to respective anatomical manifolds,
we proceed to determine their optimal morphological representation. A practical ancillary benefit of this approach
is that it yields optimal choice of transformation parameters, and eliminates respective confounding variation in
the data. As a result, the optimal signatures depend solely on anatomical variations across subjects, and may
ultimately lead to more accurate diagnosis through pattern classification.
Volume rendering is a technique for volume visualization. Given an N × N × N volume dataset, traditional volume
rendering methods generally need O(N³) rendering time. FVR (Fourier Volume Rendering), which takes advantage
of the Fourier slice theorem, takes O(N² log N) rendering time once the Fourier transform of the volume data is
available. Thus FVR lends itself to the design of a real-time rendering algorithm with a preprocessing step. But FVR has
the disadvantage that resampling in the frequency domain causes artifacts in the spatial domain. Another problem is that
the method for designing a transfer function is not obvious. In this paper, we report that spatial-domain zero-padding
and tri-linear filtering can reduce the artifacts to an acceptable rendered image quality in the spatial domain. To
design the transfer function, we present a method in which the user first defines a transfer function using a Bezier curve.
Based on the linearity of the Fourier transform and the Bezier curve equation, the volume-rendered
result can be obtained by adding the weighted frequency-domain signals. That means that once a transfer function is given,
we do not have to recompute the Fourier transform of the volume data after the transfer function is applied. This technique
makes real-time adjustment of the transfer function possible.
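The property the method exploits, that the Fourier transform of a weighted sum of volumes equals the same weighted sum of their transforms, can be checked on a toy 1-D signal; the naive DFT and the weights below are illustrative stand-ins for the volume FFT and the Bezier-derived coefficients.

```python
import cmath

def dft(signal):
    """Naive discrete Fourier transform (a stand-in for the volume FFT)."""
    n = len(signal)
    return [sum(signal[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def weighted_sum(weights, signals):
    """Pointwise weighted sum of equally sized signals."""
    return [sum(w * s[i] for w, s in zip(weights, signals))
            for i in range(len(signals[0]))]
```

Transforming the mixed signal and mixing the transformed signals agree to numerical precision, which is why a new transfer function only requires re-weighting precomputed frequency-domain data.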
Preclinical research often requires the delivery of biological substances to specific locations in small animals.
Guiding a needle to targets in small animals with an error < 200 μm requires accurate registration. We are
developing techniques to register a needle-positioning robot to high-resolution three-dimensional ultrasound
and computed tomography small animal imaging systems. Both techniques involve moving the needle to predetermined
robot coordinates and determining corresponding needle locations in image coordinates. Registration
accuracy will therefore be affected by the robot positioning error and is assessed by measuring the target registration
error (TRE). A point-based registration between robot and micro-ultrasound coordinates was accomplished
by attaching a fiducial phantom onto the needle. A TRE of 145 μm was achieved when moving the needle to a set
of robot coordinates and registering the coordinates to needle tip locations determined from ultrasound fiducial
measurements. Registration between robot and micro-CT coordinates was accomplished by injecting barium sulfate
into tracks created when the robot withdraws the needle from a phantom. Points along cross-sectional slices
of the segmented needle tracks were determined using an intensity-weighted centroiding algorithm. A minimum
distance TRE of 194 ± 18 μm was achieved by registering centroid points to robot trajectories using the iterative
closest point (ICP) algorithm. Simulations, incorporating both robot and ultrasound fiducial localization errors,
verify that robot error is a significant component of the experimental registration. Simulations of micro-CT to
robot ICP registration similarly agree with the experimental results. Both registration techniques produce a
TRE < 200 μm, meeting design specification.
The paper is concerned with image registration algorithms for the alignment of computed tomography
(CT) and 3D-ultrasound (US) images of the liver. The necessity of registration arises from the surgeon's
request to benefit from the planning data during surgery. The goal is to align the planning data, derived
from pre-operative CT-images, with the current US-images of the liver acquired during the surgery.
The registration task is complicated by the fact that the images are of a different modality, that the
US-images are severely corrupted by noise, and that the surgeon is looking for a fast and robust scheme.
To guide and support the registration, additional pairs of corresponding landmarks are prepared. We
will present two different approaches for registration. The first one is based on the pure alignment of
the landmarks using thin plate splines. It has been successfully applied in various applications and is
now being transferred to liver surgery. In the second approach, we mix a volumetric distance measure with
the landmark interpolation constraints. In particular, we investigate the promising normalized gradient
field distance measure. We use data from actual liver surgery to illustrate the applicability and the
characteristics of both approaches. It turns out that both approaches are suitable for the registration
of multi-modal images of the liver.
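The normalized gradient field idea, aligning gradient directions rather than raw intensities so that the measure works across modalities, can be sketched in 1-D; the forward-difference gradient and the edge parameter `eps` are simplifying assumptions of this sketch.

```python
import math

def ngf_distance_1d(a, b, eps=1e-3):
    """Normalized-gradient-field-style distance between two 1D signals:
    sums 1 - cos^2 of the angle between regularized gradients, so the
    distance is small where gradients are parallel or anti-parallel
    (i.e., where edges align, regardless of intensity scale)."""
    ga = [a[i + 1] - a[i] for i in range(len(a) - 1)]
    gb = [b[i + 1] - b[i] for i in range(len(b) - 1)]
    d = 0.0
    for u, v in zip(ga, gb):
        na = math.sqrt(u * u + eps * eps)
        nb = math.sqrt(v * v + eps * eps)
        d += 1.0 - (u * v / (na * nb)) ** 2
    return d
```

A signal compared against itself scores near zero, while a flat signal with no matching edges scores high, which is the behavior that makes the measure attractive for CT-to-ultrasound alignment.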
Tumor resections from the liver are complex surgical interventions. With recent planning software, risk analyses
based on individual liver anatomy can be carried out preoperatively. However, additional tumors within the
liver are frequently detected during oncological interventions using intraoperative ultrasound. These tumors are
not visible in preoperative data and their existence may require changes to the resection strategy. We propose
a novel method that allows an intraoperative risk analysis adaptation by merging newly detected tumors with a
preoperative risk analysis. To determine the exact positions and sizes of these tumors we make use of a navigated
ultrasound-system. A fast communication protocol enables our application to exchange crucial data with this
navigation system during an intervention.
A further motivation for our work is to improve the visual presentation of a moving ultrasound plane within
a complex 3D planning model including vascular systems, tumors, and organ surfaces. In case the ultrasound
plane is located inside the liver, occlusion of the ultrasound plane by the planning model is an inevitable problem
for the applied visualization technique. Our system allows the surgeon to focus on the ultrasound image while
perceiving context-relevant planning information. To improve orientation ability and distance perception, we
include additional depth cues by applying new illustrative visualization algorithms.
Preliminary evaluations confirm that in case of intraoperatively detected tumors a risk analysis adaptation
is beneficial for precise liver surgery. Our new GPU-based visualization approach provides the surgeon with
a simultaneous visualization of planning models and navigated 2D ultrasound data while minimizing occlusion.
Author(s): Ruxandra Lasowski; Selim Benhimane; Jakob Vogel; Tobias F. Jakobs; Christoph J. Zech; Christoph Trumm; Christian Clason; Nassir Navab
Interventional procedures on deformable organs pose difficulties for the radiologists when inserting the probe
towards a lesion. The deformation due to the breathing makes a reliable and automated alignment of the
interventional 2D CT-Fluoro to the pre-interventional 3D CT-Volume very challenging. Such alignment is highly
desirable since, during the intervention, the CT-Volume brings more information as it is enhanced with contrast
agent and has a higher resolution than the CT-Fluoro slice. A reasonable solution for the alignment is obtained
by employing a robust optimization technique. However, since we would like to help the needle guidance towards
the lesion, due to the involved deformation, a single slice of the 3D CT-Volume is not satisfactory.
The main contribution of this paper consists in visualizing slices of the 3D CT-Volume that result
from the out-of-plane motion parameters along weighted isosurfaces in the convergence basin of the similarity
function used during the alignment. This visualization copes with the uncertainty in estimating the deformation
and brings much more information than a single registered slice. Three experienced interventional radiologists
were consulted and their evaluation clearly highlighted that such a visualization, unfolding the neighborhood
with its associated structures such as vessels and lesion spread, will help the needle guidance.
The European research network "Augmented reality in Surgery" (ARIS*ER) developed a system that supports
percutaneous radio frequency ablation of liver tumors. The system provides interventionists, during placement and
insertion of the RFA needle, with information from pre-operative CT images and real-time tracking data. A visualization
tool has been designed that aims to support (1) exploration of the abdomen, (2) planning of needle trajectory and (3)
insertion of the needle in the most efficient way. This work describes a first evaluation of the system, where user
performances and feedback of two visualization concepts of the tool - needle view and user view - are compared. After
being introduced to the system, ten subjects performed three needle placements with both concepts. Task fulfillment rate,
time for completion of task, special incidences, and accuracy of needle placement were recorded and analyzed. The evaluation
shows mixed results, with beneficial and less favorable effects on user performance and workload for both concepts. Effects
depend on characteristics of intra-operative tasks as well as on task complexities depending on tumor location. The
results give valuable input for the next design steps.
Interactive image-guided liver surgery (Linasys device, Pathfinder Therapeutics, Inc., Nashville, TN) requires a user-oriented,
easy-to-use, fast segmentation preoperative surgical planning system. This system needs to build liver models
displaying the liver surface, tumors, and the vascular system of the liver. A robust and efficient tool for this purpose was
developed and evaluated. For the liver surface or other bulk shape organ segmentation, the delineation was conducted on
multiple slices of a CT image volume with a region growing algorithm. This algorithm incorporates both spatial and
temporal information of a propagating front to advance the segmenting contour. The user can reduce the number of
delineation slices during the processing by using interpolation. When comparing our liver segmentation results to those
from MeVis (Bremen, Germany), the average overlap percentage was 94.6%. For portal and hepatic vein segmentation,
three-dimensional region growing based on image intensity was used. All second generation branches can be identified
without time-consuming image filtering and manual editing. The two veins are separated by using mutually exclusive
region growing. The tool can be used to conduct segmentation and modeling of the liver, veins, and other organs and can
prepare image data for export to Linasys within one hour.
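The intensity-based region growing used for the veins can be sketched as a flood fill over a 2D grid; the 4-connectivity, the fixed intensity interval, and the toy image are illustrative simplifications of the paper's 3D method.

```python
from collections import deque

def region_grow(image, seed, lo, hi):
    """Intensity-based region growing on a 2D grid: flood-fill from `seed`,
    accepting 4-connected pixels whose intensity lies in [lo, hi]."""
    rows, cols = len(image), len(image[0])
    region, queue = set(), deque([seed])
    while queue:
        r, c = queue.popleft()
        if (r, c) in region or not (0 <= r < rows and 0 <= c < cols):
            continue
        if not (lo <= image[r][c] <= hi):
            continue
        region.add((r, c))
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region
```

Mutually exclusive growing of two structures could be approximated by treating one grown region as forbidden while growing the other, though that extension is not shown here.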
As part of an ongoing theme in our laboratory on reducing morbidity during minimally-invasive intracardiac
procedures, we developed a computer-assisted intervention system that provides safe access inside the beating
heart and sufficient visualization to deliver therapy to intracardiac targets while maintaining the efficacy of the
procedure. Integrating pre-operative information, 2D trans-esophageal ultrasound for real-time intra-operative
imaging, and surgical tool tracking using the NDI Aurora magnetic tracking system in an augmented virtual
environment, our system allows the surgeons to navigate instruments inside the heart in spite of the lack of
direct target visualization. This work focuses on further enhancing intracardiac visualization and navigation by
supplying the surgeons with detailed 3D dynamic cardiac models constructed from high-resolution pre-operative
MR data and overlaid onto the intra-operative imaging environment. Here we report our experience during an in
vivo porcine study. A feature-based registration technique previously explored and validated in our laboratory
was employed for the pre-operative to intra-operative mapping. This registration method is suitable for in
vivo interventional applications as it involves the selection of easily identifiable landmarks, while ensuring a good
alignment of the pre-operative and intra-operative surgical targets. The resulting augmented reality environment
fuses the pre-operative cardiac model with the intra-operative real-time US images with approximately 5 mm
accuracy for structures located in the vicinity of the valvular region. Therefore, we strongly believe that our
augmented virtual environment significantly enhances intracardiac navigation of surgical instruments, while on-target
detailed manipulations are performed under real-time US guidance.
A 2D ultrasound enhanced virtual reality surgical guidance system has been under development for some time in
our lab. The new surgical guidance platform has been shown to be effective in both the laboratory and clinical
settings, however, the accuracy of the tracked 2D ultrasound has not been investigated in detail in terms of the
applications for which we intend to use it (i.e., mitral valve replacement and atrial septal defect closure). This
work focuses on the development of an accuracy assessment protocol specific to the assessment of the calibration
methods used to determine the rigid transformation between the ultrasound image and the tracked sensor.
Specifically, we test a Z-bar phantom calibration method and a phantomless calibration method and compare
the accuracy of tracking ultrasound images from neuro, transesophageal, intracardiac and laparoscopic ultrasound
transducers. This work provides a fundamental quantitative description of the image-guided accuracy that can
be obtained with this new surgical guidance system.
Intraoperative ultrasound (iUS) has emerged as a practical neuronavigational tool in image-guided open cranial
procedures because of its low cost, easy implementation and real time image acquisition. Two-dimensional iUS (2DiUS)
is currently the most common ultrasonic imaging tool used in the operating room (OR). However, gaps between imaging
planes and limited volumetric sampling with 2DiUS often result in incomplete imaging of the internal anatomical
structures of interest (e.g., tumor). In this paper, we investigate and evaluate the use of coregistered volumetric true
three-dimensional iUS (3DiUS) generated from a broadband matrix array transducer (X3-1) attached to a Philips iU22
intelligent ultrasound system. This 3DiUS scheme is able to provide full 3D sampling over a frustum-shaped volume
with high resolution dicom images directly recovered by the ultrasound system without the need for free-hand sweeps or
3D reconstruction. Volumetric 3DiUS images were co-registered with preoperative magnetic resonance (pMR) images
by tracking the spatial location and orientation of an infrared light-emitting tracker rigidly attached to the US scan-head
following a fiducial registration and an iUS scan-head calibration. The registration was further refined using an image-based
scheme to maximize the inter-image normalized mutual information. In addition, we have utilized a coordinate
system nomenclature and developed a set of static visualization techniques to present 3D US image data in the OR,
which will be important for qualitative and quantitative analyses of the performance of 3DiUS in image-guided
neurosurgery in the future. We show that 3DiUS significantly improves the imaging efficiency and enhances integration
of iUS into the surgical workflow, making it promising for routine use in the OR.
To ensure precise needle placement in soft tissue of a patient, e.g. for biopsies, the intervention is normally carried
out image-guided. While there are several imaging modalities, such as computed tomography, magnetic resonance
tomography, ultrasound, or C-arm X-ray systems with CT option, navigation systems for such minimally invasive
interventions are still quite rare. However, prototypes and also first commercial products of optical and electromagnetic
tracking systems have demonstrated excellent clinical results. Such systems provide a reduction of control scans, a
reduction of intervention time, and improved needle positioning accuracy, especially for deep and double-oblique access.
Our novel navigation system, CAPPA IRAD EMT, with electromagnetic tracking for minimally invasive needle
applications, is connected to a C-arm imaging system with CT option. The navigation system was investigated in clinical
interventions by different physicians and with different clinical applications. First clinical results demonstrated a high
accuracy during needle placement and a reduction of control scans.
Author(s): Sheng Xu; Jochen Kruecker; Hui Jiang; Scott Settlemier; Neil Glossop; Aradhana Venkatesan; Anthony Kam; Bradford Wood
This paper presents an ultrasound guidance system for needle placement procedures. The system integrates a real-time
3D ultrasound transducer with a 3D localizer and a tracked needle to enable real-time visualization of the needle in
ultrasound. The system uses data streaming to transfer real-time ultrasound volumetric images to a separate workstation
for visualization. Multi-planar reconstructions of the ultrasound volume are computed at the workstation using the
tracking information, allowing for real-time visualization of the needle in ultrasound without aligning the needle with the
transducer. The system may simplify the needle placement procedure and potentially reduce the levels of skill and
training needed to perform accurate needle placements. The physician can therefore focus on the needle placement
procedure without paying extra attention to perfect mid-plane alignment of the needle with the ultrasound image plane.
In addition, the physician has real-time visual feedback of the needle and the target, even before the needle enters the
patient's skin, allowing the procedure to be easily, safely and accurately planned. The superimposed needle can also
greatly improve the sometimes poor visualization of the needle in an ultrasound image (e.g. in between ribs). Since the
free-hand needle is not inserted through any fixed needle channel, the physician can enjoy full freedom to select the
needle's orientation or position. No cumbersome accessories are attached to the ultrasound transducer, allowing the
physician to use his or her previous experience with regular ultrasound transducers. 3D display of the target in relation
to the treatment volume can help verify adequacy of tumor ablation as well.
Estimating the alignment accuracy is an important issue in rigid-body point-based registration algorithms. The
registration accuracy depends on the level of the noise perturbing the registering data sets. The noise in the
data sets arises from the fiducial (point) localization error (FLE) that may have an identical or inhomogeneous,
isotropic or anisotropic distribution at each point in each data set. Target registration error (TRE) has been
defined in the literature as an error measure in terms of FLE, to compute the registration accuracy at a
point (target) which is not used in the registration process. In this paper, we mathematically derive a general
solution to approximate the distribution of TRE after registration of two data sets in the presence of FLE having
any type of distribution. The Maximum Likelihood (ML) algorithm is proposed to estimate the registration
parameters and their variances between two data sets. The variances are then used in a closed-form solution,
previously presented by these authors, to derive the distribution of TRE at a target location. Based on numerical
simulations, it is demonstrated that the derived distribution of TRE, in contrast to the existing methods in the
literature, accurately follows the distribution generated by Monte Carlo simulation even when FLE has an
inhomogeneous isotropic or anisotropic distribution.
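The Monte Carlo comparison described above can be reproduced in miniature: perturb a fiducial configuration with isotropic Gaussian FLE, register back, and record the displacement at a target. The 2D closed-form Procrustes solver and the noise model below are our own illustrative simplifications, not the paper's ML estimator.

```python
import math, random

def rigid_register_2d(src, dst):
    """Least-squares 2D rigid transform (rotation + translation) mapping
    `src` points onto `dst` points (closed-form Procrustes solution)."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cs[0], y - cs[1], u - cd[0], v - cd[1]
        sxx += x * u + y * v
        sxy += x * v - y * u
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] - s * cs[1])
    ty = cd[1] - (s * cs[0] + c * cs[1])
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def monte_carlo_tre(fiducials, target, fle_sigma=0.1, trials=2000, seed=0):
    """Sample the TRE distribution by perturbing fiducials with isotropic
    Gaussian FLE and registering back to the unperturbed configuration."""
    rng = random.Random(seed)
    tres = []
    for _ in range(trials):
        noisy = [(x + rng.gauss(0, fle_sigma), y + rng.gauss(0, fle_sigma))
                 for x, y in fiducials]
        T = rigid_register_2d(noisy, fiducials)
        tx, ty = T(target)
        tres.append(math.hypot(tx - target[0], ty - target[1]))
    return tres
```

The sampled histogram is the kind of reference distribution a closed-form TRE approximation would be validated against.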
We present here a framework for a system that tracks one or more 3D anatomical
targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure
may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
Fluoroscopy is widely used for intra-procedure image guidance, however its planar images provide limited information
about the location of the surgical tools or targets in three-dimensional space. An iterative method based on the
projection-Procrustes technique can determine the three-dimensional positions and orientations of known sparse objects
from a single, perspective projection. We assess the feasibility of applying this technique to track surgical tools by
measuring its accuracy and precision through in vitro experiments. Two phantoms were fabricated to perform this
assessment: a grid plate phantom with numerous point-targets at regular distances from each other; and a sparse object
used as a surgical tool phantom. Two-dimensional projections of the phantoms were acquired using an image
intensifier-based C-arm x-ray unit. The locations of the markers projected onto the images were identified and measured
using an automated algorithm. The three-dimensional location of the phantom tool tip was identified from these images
using the projection-Procrustes technique. The accuracy and precision of the tip localization were used to assess our
technique. The average three-dimensional root-mean-square target registration error of the phantom tool tip was 1.8
mm. The average three-dimensional root-mean-square precision of localizing the tool tip was 0.5 mm.
Author(s): Lorenz Fieten; Jörg Eschweiler; Matías de la Fuente; Sascha Gravius; Klaus Radermacher
Precise knowledge of the mid-sagittal plane is important for the assessment and correction of several deformities.
Furthermore, the mid-sagittal plane can be used for the definition of standardized coordinate systems such as pelvis or
skull coordinate systems. A popular approach for mid-sagittal plane computation is based on the selection of anatomical
landmarks located either directly on the plane or symmetrically to it. However, the manual selection of landmarks is a
tedious, time-consuming and error-prone task, which requires great care. In order to overcome this drawback, previously
it was suggested to use the iterative closest point (ICP) algorithm: After an initial mirroring of the data points on a
default mirror plane, the mirrored data points should be registered iteratively to the model points using rigid transforms.
Finally, a reflection transform approximating the cumulative transform could be extracted. In this work, we present an
ICP variant for the iterative optimization of the reflection parameters. It is based on a closed-form solution to the least-squares
problem of matching data points to model points using a reflection. In experiments on CT pelvis and skull
datasets our method showed a better ability to match homologous areas.
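A closed-form least-squares reflection fit can be illustrated in 2D; this Procrustes-style solution with det = -1 is a simplified analogue of the 3D mid-sagittal-plane step, not the authors' implementation.

```python
import math

def best_fit_reflection_2d(src, dst):
    """Closed-form least-squares reflection mapping `src` onto `dst`.
    Analogous to the rigid Procrustes solution, but the linear part is a
    reflection matrix [[c, s], [s, -c]] (determinant -1)."""
    n = len(src)
    cs = [sum(p[i] for p in src) / n for i in (0, 1)]
    cd = [sum(p[i] for p in dst) / n for i in (0, 1)]
    a = b = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - cs[0], y - cs[1], u - cd[0], v - cd[1]
        a += u * x - v * y
        b += u * y + v * x
    theta = math.atan2(b, a)
    c, s = math.cos(theta), math.sin(theta)
    tx = cd[0] - (c * cs[0] + s * cs[1])
    ty = cd[1] - (s * cs[0] - c * cs[1])
    return lambda p: (c * p[0] + s * p[1] + tx, s * p[0] - c * p[1] + ty)
```

Iterating this fit inside an ICP loop, with closest-point matching between mirrored data points and model points, is the structure of the variant described above.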
Many image-guidance surgical systems rely on rigid-body, point-based registration of fiducial markers attached to the
patient. Marker locations in image space and physical space are used to provide the transformation that maps a point
from one space to the other. Target registration error (TRE) is known to depend on the fiducial localization error (FLE),
and the fiducial registration error (FRE) of a set of markers, though a poor predictor of TRE, is a useful predictor of FLE.
All fiducials are typically weighted equally for registration purposes, but it is also a common practice to ignore a marker at
position r by zeroing its weight when its individual error, FRE(r), is high, in an effort to reduce TRE. The idea is that
such markers are likely to have been compromised, i.e., perturbed badly between imaging and surgery. While ignoring a
compromised marker may indeed reduce TRE, the expected effect of ignoring an uncompromised marker is to increase
TRE. There is unfortunately no established method for deciding whether a given marker is likely to have been
compromised. In order to make this decision, it is necessary to know the probability distribution p(FRE(r)), which has
not been heretofore determined. With such a distribution, it may be possible to identify a compromised marker and to
adjust its weight in order to improve the expected TRE. In this paper we derive an approximate formula for p(FRE(r))
accurate to first order in FLE. We show by means of numerical simulations that the approximation is valid.
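A simple decision rule in the spirit described, flagging a marker whose individual FRE stands out from the rest, might look as follows; the factor `k` and the leave-one-out RMS baseline are illustrative stand-ins for a proper test against the derived distribution p(FRE(r)).

```python
import math

def individual_fre(fixed, registered):
    """Per-marker fiducial registration error |registered_i - fixed_i|."""
    return [math.dist(p, q) for p, q in zip(fixed, registered)]

def flag_compromised(fres, k=2.0):
    """Flag markers whose FRE(r) exceeds `k` times the RMS of the others,
    a hypothetical heuristic for spotting a marker perturbed between
    imaging and surgery."""
    flagged = []
    for i, f in enumerate(fres):
        rest = [x for j, x in enumerate(fres) if j != i]
        rms = math.sqrt(sum(x * x for x in rest) / len(rest))
        if rms > 0 and f > k * rms:
            flagged.append(i)
    return flagged
```

With the distribution of FRE(r) known, the ad hoc threshold `k` could be replaced by a principled significance level.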
Segmentation of organs in medical images is a difficult task requiring very often the use of model-based approaches.
To build the model, we need an annotated training set of shape examples with correspondences
indicated among shapes. Manual positioning of landmarks is a tedious, time-consuming, and error-prone task,
and almost impossible in 3D. To overcome some of these drawbacks, we devised an automatic method
based on the notion of c-scale, a new local scale concept. For each boundary element b, the arc length of the
largest homogeneous curvature region connected to b is estimated as well as the orientation of the tangent at b.
With this shape description method, we can automatically locate mathematical landmarks selected at different
levels of detail. The method avoids the use of landmarks for the generation of the mean shape. The selection of
landmarks on the mean shape is done automatically using the c-scale method. Then, these landmarks are propagated
to each shape in the training set, thereby defining the correspondences among the shapes. Altogether
12 strategies are described along these lines. The methods are evaluated on 40 MRI foot data sets, the object of
interest being the talus bone. The results show that, for the same number of landmarks, the proposed methods
are more compact than manual and equally spaced annotations. The approach is applicable to spaces of any
dimensionality, although we have focused in this paper on 2D shapes.
We present a novel method to calibrate a 3D ultrasound probe which has a 2D transducer array. By optically tracking a calibrated 3D probe we are able to produce extended-field-of-view 3D ultrasound images. Tracking also enables us to register our ultrasound images to other tracked and calibrated surgical instruments or to other tracked and calibrated imaging devices. Our method applies rigid intensity-based image registration to three or more ultrasound images. These images can either be of a simple phantom or, potentially, of the patient. In the latter case we would have an automated calibration system that requires no phantom and no image segmentation and is optimized to the patient's ultrasound characteristics, i.e., the speed of sound. We have carried out experiments using a simple calibration phantom and with ultrasound images of a volunteer's liver. Results were compared to an independent gold standard and showed our method to be accurate to 1.43 mm using the phantom images and 1.56 mm using the liver data, slightly better than the traditional point-based calibration method (1.7 mm in our experiments).
We propose a novel system for image guidance in totally endoscopic coronary artery bypass (TECAB). A key requirement
is the availability of 2D-3D registration techniques that can deal with non-rigid motion and deformation. Image guidance
for TECAB is mainly required before the mechanical stabilization of the heart, thus the most dominant source of non-rigid
deformation is the motion of the beating heart.
To augment the images in the endoscope of the da Vinci robot, we have to find the transformation from the coordinate
system of the preoperative imaging modality to the system of the endoscopic cameras.
As a first step, we build a 4D motion model of the beating heart. Intraoperatively, we can use the ECG or video processing
to determine the phase of the cardiac cycle. We can then take the heart surface from the motion model and register it to
the stereo-endoscopic images of the da Vinci robot using 2D-3D registration methods. We are investigating robust feature
tracking and intensity-based methods for this purpose.
Images of the vessels available in the preoperative coordinate system can then be transformed to the camera system and
projected into the calibrated endoscope view using two video mixers with chroma keying. It is hoped that the augmented
view can improve the efficiency of TECAB surgery and reduce the conversion rate to more conventional procedures.
Surgical repair of the mitral valve is preferred in most cases over valve replacement, but replacement is often performed
instead due to the technical difficulty of repair. A surgical planning system based on patient-specific medical images that
allows surgeons to simulate and compare potential repair strategies could greatly improve surgical outcomes. In such a
surgical simulator, the mathematical model of mechanics used to close the valve must be able to compute the closed state
quickly and to handle the complex boundary conditions imposed by the chords that tether the valve leaflets. We have
developed a system for generating a triangulated mesh of the valve surface from volumetric image data of the opened
valve. We then compute the closed position of the mesh using a mass-spring model of dynamics. The triangulated mesh
is produced by fitting an isosurface to the volumetric image data, and boundary conditions, including the valve annulus
and chord endpoints, are identified in the image data using a graphical user interface. In the mass-spring model, triangle
sides are treated as linear springs, and sides shared by two triangles are treated as bending springs. Chords are treated as
nonlinear springs, and self-collisions are detected and resolved. Equations of motion are solved using implicit numerical
integration. Accuracy was assessed by comparison of model results with an image of the same valve taken in the closed
state. The model exhibited rapid valve closure and was able to reproduce important features of the closed valve.
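The spring mechanics underlying such a model can be illustrated in miniature (a single linear spring with damped semi-implicit Euler integration; the paper's model additionally uses bending springs, nonlinear chordal springs, collision handling, and implicit integration, none of which are shown here):

```python
def simulate_spring(k, rest, mass, g=9.81, dt=1e-3, damping=2.0, steps=20000):
    """One point mass hanging from a fixed anchor by a linear spring, integrated
    with damped semi-implicit Euler.  The settled length approaches rest + m*g/k."""
    x, v = rest, 0.0          # x: current spring length, v: velocity of the mass
    for _ in range(steps):
        # linear spring force + gravity + viscous damping
        force = -k * (x - rest) + mass * g - damping * v
        v += dt * force / mass
        x += dt * v
    return x
```

With k = 100 N/m, rest length 1 m and a 0.5 kg mass, the length settles near 1 + 0.5·9.81/100 ≈ 1.049 m. The full model replaces this single spring with every triangle edge, adds bending and chordal springs, and solves the resulting stiff system implicitly so that larger time steps remain stable.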
Author(s): Corinna S. Maier; Michael Bock; Wolfhard Semmler; Christine H. Lorenz
A new framework for image-based physiological cardiac monitoring is proposed, based on repeated imaging of critical slice locations in an interventional MRI environment. The aim of this work is to provide a method of detecting pathological changes in left ventricular (LV) myocardial wall motion where standard ECG methods are not possible due to distortions by the magnetic field. First, MRI LV short-axis images are acquired for different phases of the cardiac cycle over RR intervals. Then LV contours are detected using an established segmentation algorithm. The contours' Fourier descriptors are calculated to classify the myocardial wall into two classes: contracted or not contracted. The classifier is trained during an initial observation period, before a pathological change might occur during an intervention. A contour rejected by the classifier, using the unconditional predictive probability of the contour's observation vector as a confidence measure, is interpreted as a probable pathological change in LV myocardial wall motion. To evaluate the performance of the classifier, a simple model is introduced for simulating the contours of a pathological, ischemic LV myocardial wall. The overall performance of the classifier on 516 samples based on healthy-volunteer images and 3096 simulated ischemic samples yielded a mean classification error of 5.7% for supervised training and 8.7% for unsupervised training.
Author(s): C. Kuehnel; A. Hennemuth; S. Oeltze; T. Boskamp; H.-O. Peitgen
Diagnostic support for coronary artery disease (CAD) is complex due to the numerous
symptoms and studies that lead to the final diagnosis. CTA and MRI are on their way to replacing invasive
catheter angiography. Thus, there is a need for sophisticated software tools that present the different analysis
results, and correlate the anatomical and dynamic image information. We introduce a new software assistant
for the combined result visualization of CTA and MR images, in which a dedicated concept for the structured
presentation of original data, segmentation results, and individual findings is realized. To this end, we define a
comprehensive class hierarchy and assign suitable interaction functions. User guidance is coupled as closely as
possible with available data, supporting a straightforward workflow design. The analysis results are extracted
from two previously developed software assistants, providing coronary artery analysis and measurements, function
analysis as well as late enhancement data investigation. As an extension we introduce a finding concept directly
relating suspicious positions to the underlying data. An affine registration of CT and MR data in combination
with the AHA 17-segment model enables the coupling of local findings to positions in all data sets. Furthermore,
sophisticated visualization in 2D and 3D and interactive bull's eye plots facilitate a correlation of coronary
stenoses and physiology. The software has been evaluated on 20 patient data sets.
Breast cancer is a leading cause of death in women. Tumours are usually detected by palpation or X-ray mammography
followed by further imaging, such as magnetic resonance imaging (MRI) or ultrasound. The aim of this research is to
develop a biophysically-based computational tool that will allow accurate collocation of features (such as suspicious
lesions) across multiple imaging views and modalities in order to improve clinicians' diagnosis of breast cancer. We
have developed a computational framework for generating individual-specific, 3D finite element models of the breast.
MR images were obtained of the breast under gravity loading and neutrally buoyant conditions. Neutrally buoyant breast
images, obtained whilst immersing the breast in water, were used to estimate the unloaded geometry of the breast (for
present purposes, we have assumed that the densities of water and breast tissue are equal). These images were segmented
to isolate the breast tissues, and a tricubic Hermite finite element mesh was fitted to the digitised data points in order to
produce a customized breast model. The model was deformed, in accordance with finite deformation elasticity theory, to
predict the gravity loaded state of the breast in the prone position. The unloaded breast images were embedded into the
reference model and warped based on the predicted deformation. In order to analyse the accuracy of the model
predictions, the cross-correlation image comparison metric was used to compare the warped, resampled images with the
clinical images of the prone gravity loaded state. We believe that a biomechanical image registration tool of this kind
will aid radiologists to provide more reliable diagnosis and localisation of breast cancer.
Software breast phantoms offer greater flexibility in generating synthetic breast images compared to physical phantoms.
The realism of such generated synthetic images depends on the method for simulating the three-dimensional breast
anatomical structures. We present here a novel algorithm for computer simulation of breast anatomy. The algorithm
simulates the skin, regions of predominantly adipose tissue and fibro-glandular tissue, and the matrix of adipose tissue
compartments and Cooper's ligaments. The simulation approach is based upon a region growing procedure; adipose
compartments are grown from a selected set of seed points with different orientation and growth rate. The simulated
adipose compartments vary in shape and size in a manner similar to real anatomical variation, resulting in much improved
phantom realism compared to our previous simulation based on geometric primitives. The proposed simulation also provides
improved control over the breast size and glandularity. Our software breast phantom has been used in a number of
applications, including breast tomosynthesis and texture analysis optimization.
In order to facilitate the removal of tumors during partial nephrectomies, an image-guided surgery system may be useful.
This system would require a registration of the physical kidney to a pre-operative image volume; however, it is unclear
whether a rigid registration would be sufficient. One possible source of non-rigid deformation is the clamping of the
renal artery during surgery and the subsequent loss of pressure as the kidney is punctured and blood loss occurs. To
explore this issue, a model of kidney deformation due to loss of perfusion and pressure was developed based on Biot's
consolidation model. The model was tested on two resected porcine kidneys in which the renal artery and vein were
clamped. CT image volumes of the kidney were obtained before and after the deformation caused by unclamping, and
fiducial markers embedded on the kidney surface allowed the deformation to be tracked. The accuracy of the kidney
model was assessed by calculating the model error at the fiducial locations and using image similarity measures.
Preliminary results indicate that the model may be useful in a non-rigid registration scheme; however, further
refinements to the model may be necessary to better simulate the deformation due to loss of perfusion and pressure.
Minimally invasive surgery has gained significantly in importance over the last decade due to its numerous advantages for the patient. The surgeon, however, must adopt special operating techniques and deal with difficulties such as complex hand-eye coordination, a limited field of view, and restricted mobility. To alleviate these constraints, we propose to enhance the surgeon's capabilities by providing context-aware assistance using augmented reality (AR) techniques. Generating context-aware assistance requires recognizing the current state of the intervention using intraoperatively gained sensor data and a model of the surgical intervention. In this paper we present the recognition of risk situations: the system warns the surgeon if an instrument gets too close to a risk structure. The context-aware assistance system starts with an image-based analysis to retrieve information from the endoscopic images. This information is classified and a semantic description is generated. The description is used to recognize the current state and launch an appropriate AR visualization. In detail, we present automatic vision-based instrument tracking to obtain the positions of the instruments. Situation recognition is performed using a knowledge representation based on a description logic system. Two augmented reality visualization programs were realized to warn the surgeon if a risk situation occurs.
Author(s): Amaury Saragaglia; Catalin Fetita; Françoise Prêteux; Philippe Grenier
The problem of quantifying bronchial parameters from multi-detector computed tomography (MDCT) data has
been studied extensively in medical research. While the developed methods have been tested and validated on cylindrical
computer/physical phantoms or by experienced radiologists on in-vivo/in-vitro CT image data, there is today
no established ground truth against which the different results can be compared. This paper proposes an original
approach for simulating CT image acquisitions of realistic 3D bronchus-vessel configurations starting from
mesh models of perfectly known parameters, with easily modifiable geometry and topology according to different
pathology characteristics. The bronchial simulator platform, 3DAirSim, is composed of several modules: 1) 3D
model generation of bronchus inner and outer wall surfaces of different calibers, shapes and orientations, 2)
texture volume creation corresponding to the lung parenchyma including or not blood vessels, 3) simulation of
CT image acquisition mimicking the scanning process. The proposed model generation method relies on the
construction of a consistent 2-manifold surface of a branching tubular structure with given medial axis and local
radii. First, a coarse triangular mesh is created by connecting polygonal cross-sections along the medial axis. The
model is then refined and locally deformed in the surface normal direction under specific force constraints which
stabilize its evolution at the level of the input radii. By generating a pathology-specific database, 3DAirSim will
contribute to the creation of a test-bed for bronchial parameter quantification. 3DAirSim is currently used to
conduct various validations of existing approaches with respect to the clinical objective of airway wall remodeling.
A colon resection, necessary in case of colon cancer, can be performed minimally invasively by laparoscopy.
Before the affected part of the colon can be removed, however, the colon must be mobilized. A good technique
for mobilizing the colon is to use Gerota's fascia as a guiding structure, i. e. to dissect along this fascia, without
harming it. The challenge of this technique is that Gerota's fascia is usually difficult to distinguish from other tissues.
In this paper, we present an approach to enhance the visual contrast between fatty tissue covered by Gerota's
fascia and uncovered fatty tissue, and the contrast of both structures to the remaining soft tissue in real time
(50 fields per second). As fasciae are whitish, transparent tissues, they cannot be identified by their color
alone. Instead, we found that their most distinctive feature is their color saturation. To enhance
their visible contrast, we applied a non-linear transformation to the saturation.
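One plausible form of such a transformation is a power law applied to the HSV saturation channel (an illustrative sketch; the exact non-linearity used in the paper is not specified here):

```python
import colorsys

def enhance_saturation(rgb, gamma=0.5):
    """Boost low saturations with a non-linear (power-law) transform:
    s -> s**gamma with gamma < 1 spreads out the low-saturation range,
    increasing the visible contrast of weakly saturated (whitish) tissue.
    rgb: channels in [0, 1]; hue and value are left unchanged."""
    r, g, b = rgb
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return colorsys.hsv_to_rgb(h, min(1.0, s ** gamma), v)
```

Applied per pixel, the transform leaves gray pixels untouched while exaggerating subtle saturation differences; a real-time implementation at 50 fields per second would use a lookup table rather than per-pixel conversion.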
An off-line evaluation was carried out consulting two specialists in laparoscopic colon resection. We presented
them four scenes from two different interventions in which our enhancement was applied together with the
original scenes. These scenes not only contained situations where Gerota's fascia had to be found, but also
situations where aerosol from ultrasonically activated scissors inhibited the clear vision, or situations where
critical structures such as the ureter or nerves had to be identified under fascial tissue. The surgeons stated that
our algorithm clearly offered an information gain in all of the presented scenes, and that it did not impair the
clear vision in case of aerosol or the visibility of critical structures, so that colon mobilization could be carried
out more easily, quickly, and safely.
In the subsequent clinical on-line evaluation, the specialists confirmed the positive effect of the proposed algorithm
on the visibility of Gerota's fascia.
We propose a new projector-based augmented reality (PBAR) system which can project the image of the forceps and the
surgical target simultaneously in support of laparoscopic surgery. A method for compensating errors arising from
motion of the body is also proposed to improve the quality of the projection images. Experiments using a dry box
show that the system is effective in supporting forceps insertion.
For exact orientation inside the tracheobronchial tree, clinicians would greatly profit from a soft-tissue navigation
system for bronchoscopy. Such an image-guided system, which shows the current position of
a bronchoscope (an instrument to inspect the inside of the lung) or a catheter within the tracheobronchial tree,
significantly improves orientation inside the complex airway structure and indicates the depth of insertion into it. A major
challenge for a bronchoscopy navigation system is respiratory motion. Recently, more and more navigated
bronchoscopy systems have used the tracheobronchial centerline to compensate for respiratory
motion. The implementation and evaluation of such compensation algorithms are assisted by a simulation
environment that provides tracking data similar to the data that must be processed during a bronchoscopic
intervention. We therefore developed an evaluation environment which simulates a random insertion of a tracking
sensor into a tracheobronchial tree, adding electromagnetic noise and distortion similar to that of an operating table,
and harmonic respiratory motion to the tracked position. With this environment, a high number of insertion
tracks can be created and used to optimize methods for minimizing the electromagnetic tracking error and compensating
respiratory movement. The authors encourage other researchers to use this evaluation environment to
test different correction and estimation algorithms for navigated bronchoscopy.
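A minimal version of such a simulation might look as follows (hypothetical parameters; the actual environment also models field distortion from the operating table and randomizes the insertion path):

```python
import math
import random

def simulate_track(n_samples, dt=0.05, resp_period=4.0, resp_amp=8.0,
                   noise_sigma=0.4, seed=0):
    """Simulate an electromagnetically tracked sensor advancing along a straight
    airway segment: steady insertion along x, harmonic respiratory motion on z,
    and additive Gaussian tracking noise on all axes.  Returns (x, y, z) tuples
    in millimeters."""
    rng = random.Random(seed)
    track = []
    for k in range(n_samples):
        t = k * dt
        x = 2.0 * t                                              # insertion
        z = resp_amp * math.sin(2 * math.pi * t / resp_period)   # breathing
        track.append((x + rng.gauss(0.0, noise_sigma),
                      rng.gauss(0.0, noise_sigma),
                      z + rng.gauss(0.0, noise_sigma)))
    return track
```

Generating many such tracks with different seeds gives a controlled test set for error-minimization and respiratory-compensation algorithms, with ground truth known by construction.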
Endoscopic images suffer from a fundamental spatial distortion due to the wide angle design of the endoscope lens. This
barrel-type distortion is an obstacle for subsequent Computer Aided Diagnosis (CAD) algorithms and should be
corrected. Various methods and research models for the barrel-type distortion correction have been proposed and
studied. For industrial applications, a stable, robust method with high accuracy is required to calibrate the different types
of endoscopes in an easy-to-use way. The correction area must be large enough to cover all the regions that the
physicians need to see. In this paper, we present our endoscope distortion correction procedure which includes data
acquisition, distortion center estimation, distortion coefficients calculation, and look-up table (LUT) generation. We
investigate different polynomial models used for modeling the distortion and propose a new one which provides
correction results with better visual quality. The method has been verified with four types of colonoscopes. The
correction procedure is currently being applied to human subject data and the coefficients are being utilized in a
subsequent 3D reconstruction project of the colon.
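The polynomial radial model underlying such corrections can be sketched as follows (a generic even-power radial polynomial about an estimated distortion center; not the specific model proposed in the paper):

```python
import math

def undistort_point(xd, yd, center, coeffs):
    """Map a distorted image point to its corrected position with a radial
    polynomial model: r_u = r_d * (1 + k1*r_d**2 + k2*r_d**4 + ...),
    expanded about the distortion center.  coeffs = (k1, k2, ...)."""
    cx, cy = center
    dx, dy = xd - cx, yd - cy
    rd = math.hypot(dx, dy)
    scale = 1.0
    for i, k in enumerate(coeffs, start=1):
        scale += k * rd ** (2 * i)
    return cx + dx * scale, cy + dy * scale

def build_lut(width, height, center, coeffs):
    """Precompute corrected coordinates for every pixel (the LUT step)."""
    return [[undistort_point(x, y, center, coeffs) for x in range(width)]
            for y in range(height)]
```

Once the center and coefficients are estimated from calibration-pattern images, the LUT makes the per-frame correction a simple resampling operation.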
An endoscope is a commonly used instrument for performing minimally invasive visual examination of the tissues inside the body. A physician uses the endoscopic video images to identify tissue abnormalities. The images, however, are highly dependent on the optical properties of the endoscope and its orientation and location with respect to the tissue structure. The analysis of endoscopic video images is, therefore, purely subjective. Studies suggest that the fusion of endoscopic video images (providing color and texture information) with virtual endoscopic views (providing structural information) can be useful for assessing various pathologies for several applications: (1) surgical simulation, training, and pedagogy; (2) the creation of a database for pathologies; and (3) the building of patient-specific models. Such fusion requires both geometric and radiometric alignment of endoscopic video images in the texture space. Inconsistent estimates of texture/color of the tissue surface result in seams when multiple endoscopic video images are combined together. This paper (1) identifies the endoscope-dependent variables to be calibrated for objective and consistent estimation of surface texture/color and (2) presents an integrated set of methods to measure them. Results show that the calibration method can
be successfully used to estimate objective color/texture values for simple planar scenes, whereas uncalibrated endoscopes performed very poorly for the same tests.
Author(s): Jochen Krücker; Sheng Xu; Neil Glossop; William F. Pritchard; John Karanian; Alberto Chiesa; Bradford J. Wood
Organ motion was quantified and motion compensation strategies for soft-tissue navigation were evaluated in a porcine
model. Organ motion due to patient repositioning and respiratory motion during ventilated breathing were quantified.
Imaging was performed on a 16-slice CT scanner. Organ motion due to repositioning was studied by attaching 7
external skin fiducials and inserting 7 point fiducials in the livers of ventilated pigs. The pigs were imaged repeatedly in
supine and decubitus positions. Registrations between the images were obtained using either all external fiducials or 6
of the 7 internal fiducials. Target registration errors (TRE) were computed by using the leave-one-out technique.
Respiratory organ motion was studied by inserting 7 electromagnetically (EM) tracked needles in the livers of 2 pigs.
One needle served as primary target, the remaining six served as reference needles. In addition, 6 EM tracked skin
fiducials, 5 passive skin fiducials, and one dynamic reference tracker were attached. Registrations were obtained using
three different methods: Continuous registration with the tracking data from internal and external tracked fiducials, and
one-time registration using the passive skin fiducials and a tracked pointer with dynamic reference tracking. The TRE
for registering images obtained in supine position after an intermittent decubitus position ranged from 3.3 mm to 24.6
mm. Higher accuracy was achieved with internal fiducials (mean TRE = 6.4 mm) than with external fiducials (mean
TRE = 16.7 mm). During respiratory motion, the FRE and TRE were shown to be correlated and were used to
demonstrate automatic FRE-based gating. Tracking of target motion relative to a reference time point was achieved by
registering nearby reference trackers with rigid and affine transformations. Linear motion models based on external and
internal reference trackers were shown to reduce the target motion by up to 63% and 90%, respectively.
Nowadays, hepatic artery catheterizations are performed under live 2D X-ray fluoroscopy guidance, where the visualization of blood vessels requires the injection of contrast agent. The projection of a 3D static roadmap of the complex branches of the liver artery system onto 2D fluoroscopy images can aid catheter navigation and minimize the use of contrast agent. However, significant hepatic motion due to the patient's respiration necessitates real-time
motion correction in order to align the projected vessels. The objective of our work is to introduce dynamic roadmaps into
clinical workflow for hepatic artery catheterizations and allow for continuous visualization of the vessels in 2D fluoroscopy
images without additional contrast injection. To this end, we propose a method for real-time estimation of the apparent displacement of the hepatic arteries in 2D fluoroscopy images. Our approach approximates the respiratory motion of the hepatic arteries from the catheter motion in 2D fluoroscopy images. The proposed method consists of two main steps. First, a filter is applied to the 2D fluoroscopy images in order to enhance the catheter and reduce the noise level. Then, a part of the catheter is tracked in the filtered images using template matching. A dynamic template update strategy makes our method robust to deformations. The accuracy and robustness of the algorithm are demonstrated by experimental studies on 22 simulated and 4 clinical sequences containing 330 and 571 image frames, respectively.
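The template-matching step can be illustrated with a toy normalized cross-correlation search (illustrative only; the paper's catheter-enhancement filtering and dynamic template update are not shown):

```python
import math

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-sized patches."""
    pa = [v for row in patch for v in row]
    ta = [v for row in template for v in row]
    mp, mt = sum(pa) / len(pa), sum(ta) / len(ta)
    num = sum((p - mp) * (t - mt) for p, t in zip(pa, ta))
    den = math.sqrt(sum((p - mp) ** 2 for p in pa) *
                    sum((t - mt) ** 2 for t in ta))
    return num / den if den else 0.0

def match_template(image, template):
    """Slide the template over the image; return the (row, col) of best NCC."""
    th, tw = len(template), len(template[0])
    best, best_pos = -2.0, (0, 0)
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            patch = [row[c:c + tw] for row in image[r:r + th]]
            score = ncc(patch, template)
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos
```

In a dynamic-update scheme, the patch at the best match in frame k would become (or be blended into) the template for frame k+1, which is what provides robustness to catheter deformation.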
This paper presents a technique for compensating for respiratory motion and deformation in an augmented
reality system for cardiac catheterisation procedures. The technique uses a subject-specific affine model of
cardiac motion which is quickly constructed from a pre-procedure magnetic resonance imaging (MRI) scan.
Respiratory phase information is acquired during the procedure by tracking the motion of the diaphragm in
real-time X-ray images. This information is used as input to the model which uses it to predict the position
of structures of interest during respiration. 3-D validation is performed on 4 volunteers and 4 patients using a
leave-one-out test on manually identified anatomical landmarks in the MRI scan, and 2-D validation is performed
by using the model to predict the respiratory motion of structures of the heart which contain catheters that are
visible in X-ray images. The technique is shown to reduce 3-D registration errors due to respiratory motion from
up to 15 mm down to less than 5 mm, which is within clinical requirements for many procedures. 2-D validation
showed that accuracy improved from 14 mm to 2 mm. In addition, we use the model to analyse the effects of
different types of breathing on the motion and deformation of the heart, specifically increasing the breathing rate
and depth of breathing. Our findings suggest that the accuracy of the model is reduced if the subject breathes
in a different way during model construction and application. However, models formed during deep breathing
may be accurate enough to be applied to other types of breathing.
We are currently investigating the acquisition of 4D cone-beam CT data using retrospective gating of the X-ray projection images. This approach requires a respiratory signal that is synchronized with image acquisition. To obtain such a signal we propose to use a spherical fiducial attached to the patient's skin surface such that it
is visible in the images. A region of interest containing the fiducial is manually identified in an initial image and is then automatically detected in all other images. Subsequently, we perform an approximate spatial (3D) reconstruction of the marker location from its 2D locations. Finally, we compute a respiratory signal by projecting the 3D points onto the major axis estimated via principal component analysis. As this respiratory signal was obtained from the fiducial location in each of the images, it is implicitly synchronized with image acquisition. We evaluate the robustness of our fiducial detection using an anthropomorphic respiratory phantom. To evaluate the quality of the estimated respiratory signal we use a motion platform that follows the respiratory motion obtained by tracking the skin surface of a volunteer. We show that our method generates a respiratory signal that is in phase with the ground truth signal, but suffers from inaccuracies in amplitude close to the anterior-posterior
imaging setup where the primary direction of motion is perpendicular to the image plane. Thus, our method should only be used for phase based retrospective gating.
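The projection onto the major axis can be sketched as follows (hypothetical code; power iteration on the 3x3 covariance stands in for a full PCA):

```python
def principal_axis_signal(points, iters=200):
    """Project 3-D points onto their first principal axis to obtain a 1-D
    (e.g. respiratory) signal.  The axis is the dominant eigenvector of the
    3x3 covariance matrix, found here by power iteration."""
    n = len(points)
    mean = [sum(p[i] for p in points) / n for i in range(3)]
    centered = [[p[i] - mean[i] for i in range(3)] for p in points]
    # 3x3 covariance matrix of the centered points
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(3)]
           for i in range(3)]
    v = [1.0, 1.0, 1.0]
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # signed scalar projection of each centered point onto the axis
    return [sum(c[i] * v[i] for i in range(3)) for c in centered]
```

The sign of the axis is arbitrary, which is harmless for phase-based gating, and the amplitude degrades exactly as described when the dominant motion is perpendicular to the image plane.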
In successful brain tumor surgery, the neurosurgeon's objectives are threefold: (1) reach the target, (2) remove
it and (3) preserve eloquent tissue surrounding it. Surgical Planning (SP) consists of identifying optimal access
route(s) to the target based on anatomical references and constrained by functional areas. Preoperative
images are essential input in Multi-modal Image Guided NeuroSurgery systems (MIGNS) and update of these
images, with precision and accuracy, is crucial to approach the anatomical reality in the Operating Room (OR).
Intraoperative brain deformation has been previously identified by many research groups and related update
of preoperative images has also been studied. We present a study of three surgical cases with tumors accompanied
by edema, in which corticosteroids were administered and monitored during a preoperative stage
[t0, t1 = t0 + 10 days]. In each case we observed a significant change in the Region Of Interest (ROI) and in
anatomical references around it. This preoperative brain shift could induce error for localization during intervention
(time tS) if the SP is based on the t0 preoperative images. We computed volume variation, distance maps based on closest point (CP) for different components of the ROI, and displacement of center of mass (CM) of
the ROI. The matching between sets of homologous landmarks from t0 to t1 was performed by an expert. The
estimation of the landmarks displacement showed significant deformations around the ROI (landmarks shifted
with a mean of 3.90 ± 0.92 mm and a maximum of 5.45 mm for one resection case). The CM of the ROI moved
about 6.92 mm for one biopsy. Accordingly, there was a sizable difference between SP based at t0 and SP based
at t1, up to 7.95 mm for localization of the reference access in one resection case. When compared to the typical
MIGNS system accuracy (2 mm), it is recommended that preoperative images be updated within the time interval [t1, tS] in order to minimize the correspondence error between the anatomical reality and the preoperative data. This should help maximize the accuracy of registration between the preoperative images and the patient in the OR.
It is well established that respiratory motion has significant effects on lung tumor position, and incorporation of this
uncertainty increases the normal lung tissue irradiated. Respiratory correlated CT, which provides three
dimensional image sets for different phases of the breathing cycle, is increasingly being used for radiation therapy
planning. Cone beam CT is being used to obtain cross sectional imaging at the time of therapy for accurate patient
set-up. However, it is not possible to obtain cross sectional respiratory correlated imaging throughout the course of
radiation, leaving residual uncertainties. Recently, implantable passive transponders (Calypso Medical
Technologies) have been developed which are currently FDA-cleared for prostate use only and can be tracked via an
external electromagnetic array in real-time, without the use of ionizing radiation. A visualization system needs to be
developed to quickly and efficiently utilize both the dynamic real-time point measurements with the previously
acquired volumetric data. We have created such a visualization system by incorporating the respiratory correlated
imaging and the individual transponder locations into the Image Guided Surgery Toolkit (IGSTK.org). The tool
already allows quick, qualitative verification of the differences between the measured transponder position and the
imaged position at planning and will support quantitative measurements displaying uncertainty in positioning.
To track respiratory motion during CyberKnife stereotactic radiosurgery in the lung, several (three to five) cylindrical
gold fiducials are implanted near the planned target volume (PTV). Since these fiducials remain in the human body after
treatment, we hypothesize that tracking fiducial movement over time may correlate with the tumor response to treatment
and pulmonary fibrosis, thereby serving as an indicator of treatment success. In this paper, we investigate fiducial
migration in 24 patients through examination of computed tomography (CT) volume images at four time points: pre-treatment
and three, six, and twelve months post-treatment. We developed a MATLAB-based GUI environment to display
the images, identify the fiducials, and compute our performance measure. After we semi-automatically segmented and
detected fiducial locations in CT images of the same patient over time, we identified them according to their
configuration and introduced a relative performance measure (ACD: average center distance) to detect their migration.
We found that the migration tended to result in a movement towards the fiducial center of the radiated tissue area
(indicating tumor regression) and may potentially be linked to the patient prognosis.
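The ACD measure described above lends itself to a compact sketch: the mean distance of each fiducial to the fiducial centroid, compared across time points. The fiducial coordinates below are illustrative, not patient data:

```python
import math

def average_center_distance(fiducials):
    """ACD: mean distance of each fiducial to the fiducial centroid.
    A decreasing ACD over time suggests fiducials moving toward the
    center of the irradiated tissue (consistent with tumor regression)."""
    n = len(fiducials)
    c = tuple(sum(f[i] for f in fiducials) / n for i in range(3))
    return sum(math.dist(f, c) for f in fiducials) / n

# Illustrative fiducial positions (mm) pre-treatment and 12 months post.
pre  = [(0.0, 0.0, 0.0), (20.0, 0.0, 0.0), (10.0, 18.0, 0.0)]
post = [(2.0, 2.0, 0.0), (18.0, 1.0, 0.0), (10.0, 15.0, 0.0)]

# Positive difference: the configuration has contracted toward its center.
migration = average_center_distance(pre) - average_center_distance(post)
```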
Author(s): Thomas Boettger; Tufve Nyholm; Magnus Karlsson; Chandrasekhar Nunna; Juan Carlos Celi
We present a system which allows magnetic resonance (MR) images to be used alone as the primary modality in the
radiation therapy (RT) workflow, no longer limiting the user to computed tomography data for RT planning, simulation
and patient localization. The individual steps for achieving this goal are explained in detail.
For planning, two MR data sets, MR1 and MR2, are acquired sequentially. For MR1 a standardized Ultrashort TE (UTE)
sequence is used, enhancing bony anatomy. The sequence for MR2 is chosen to get optimal contrast for the target and the
organs at risk for each individual patient. Both images are naturally in registration, neglecting elastic soft tissue
deformations. The planning software first automatically extracts skin and bony anatomy from MR1. The user can semi-automatically
delineate target structures and organs at risk based on MR1 or MR2, associate all segmentations with MR1
and create a plan in the coordinate system of MR1. Projections similar to digitally reconstructed radiographs (DRR)
enhancing bony anatomy are calculated from the MR1 directly and can be used for iso-center definition and setup
verification. Furthermore we present a method for creating a Pseudo-CT data set which assigns electron densities to the
voxels of MR1 based on the skin and bone segmentations. The Pseudo-CT is then used for dose calculation.
Results from first tests under clinical conditions show the feasibility of the completely MR-based RT workflow for the
necessary clinical cases. It remains to be investigated to what extent geometrical distortions influence the accuracy of MR-based planning.
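The Pseudo-CT step assigns a nominal electron density to each voxel of MR1 from its segmentation label. A minimal sketch; the labels and density values (relative to water) are illustrative assumptions, not those of the described system:

```python
# Nominal electron densities relative to water, keyed by segmentation label.
DENSITY = {"air": 0.001, "soft_tissue": 1.0, "bone": 1.7}

def pseudo_ct(label_volume):
    """Map a labeled volume (nested lists) to a same-shaped density volume,
    usable as input to a dose-calculation engine in place of a CT."""
    return [[DENSITY[label] for label in row] for row in label_volume]

# A tiny 2x3 "slice" of labels derived from the skin/bone segmentation of MR1.
labels = [["air", "soft_tissue", "bone"],
          ["soft_tissue", "bone", "soft_tissue"]]
densities = pseudo_ct(labels)
```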
Due to respiratory motion, lung tumors can move by up to several centimeters. If respiratory motion is
not carefully considered during the radiation treatment planning, the highly conformal dose
distribution with steep gradients could miss the target. To address this issue, the common strategy is
to add a population-derived safety margin to the gross tumor volume (GTV). However, during a free
breathing CT simulation, the images could be acquired at any phase of a breathing cycle. With such a
generalized uniform margin, the planning target volume (PTV) may either include more normal lung
tissue than required or miss the GTV at certain phases of a breathing cycle. Recently, respiration
correlated CT (4DCT) has been developed and implemented. With 4DCT, it is now possible to trace
the tumor 3D trajectories during a breathing cycle and to define the tumor volume as the union of
these 3D trajectories. The tumor volume defined in this way is called the internal target volume
(ITV). In this study, we introduced a novel parameter, the phase impact factor (PIF), to determine the
optimal CT phase for intensity modulated radiation therapy (IMRT) treatment planning for lung
cancer. A minimum PIF yields a minimum probability for the GTV to move out of the ITV during
the course of an IMRT treatment, providing a minimum probability of a geometric miss. Once the
CT images with the optimal phase were determined, an IMRT plan with three to five co-planar
beams was computed and optimized using the inverse treatment planning technique.
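As a toy illustration of the phase-selection idea (the exact PIF definition in the paper may differ from this assumption), each phase can be scored by the fraction of the GTV falling outside the ITV, and the phase minimising that fraction selected:

```python
# ITV: union of the tumor positions over the breathing cycle (2D toy grid).
itv = {(x, y) for x in range(0, 12) for y in range(0, 6)}

# GTV at two illustrative breathing phases; the 50% phase partly leaves the ITV.
gtv_by_phase = {
    "0%":  {(x, y) for x in range(0, 5) for y in range(0, 5)},
    "50%": {(x, y) for x in range(6, 13) for y in range(0, 5)},
}

def pif(gtv):
    """Proxy phase impact factor: fraction of GTV points outside the ITV."""
    return len(gtv - itv) / len(gtv)

# Optimal phase for planning: minimum probability of a geometric miss.
best_phase = min(gtv_by_phase, key=lambda ph: pif(gtv_by_phase[ph]))
```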
Using 2D-3D registration it is possible to extract the body transformation between the coordinate systems of
X-ray and volumetric CT images. Our initial motivation is the improvement of accuracy of external beam
radiation therapy, an effective method for treating cancer, where CT data play a central role in radiation
treatment planning. Rigid body transformation is used to compute the correct patient setup. The drawback
of such approaches is that the rigidity assumption on the imaged object is not valid for most patient
cases, mainly due to respiratory motion. In the present work, we address this limitation by proposing a flexible
framework for deformable 2D-3D registration consisting of a learning phase incorporating 4D CT data sets and
hardware accelerated free form DRR generation, 2D motion computation, and 2D-3D back projection.
In this paper we present an efficient algorithm for the segmentation of the inner and outer boundary of thoracic and abdominal aortic aneurysms (TAA & AAA) in computed tomography angiography (CTA) acquisitions. The aneurysm segmentation includes two steps: first, the inner boundary is segmented based on a grey level model with two thresholds; then, an adapted active contour model approach is applied to the more complicated outer boundary segmentation, with its
initialization based on the available inner boundary segmentation. An opacity image, which aims at enhancing important features while reducing spurious structures, is calculated from the CTA images and employed to guide the deformation of the model. In addition, the active contour model is extended by a constraint force that prevents intersections of the inner and outer boundary and keeps the outer boundary at a distance, given by the thrombus thickness, to the inner
boundary. Based upon the segmentation results, we can measure the aneurysm size at each centerline point on the centerline orthogonal multiplanar reformatting (MPR) plane. Furthermore, a 3D TAA or AAA model is reconstructed from the set of segmented contours, and the presence of endoleaks is detected and highlighted. The implemented method has been evaluated on nine clinical CTA data sets with variations in anatomy and location of the pathology and has
shown promising results.
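The inner-boundary step is a two-threshold grey-level classification of the contrast-enhanced lumen. A minimal sketch; the HU thresholds are illustrative assumptions, not the model parameters of the paper:

```python
# Illustrative HU window for contrast-enhanced aortic lumen.
LOW_HU, HIGH_HU = 150, 600

def lumen_mask(slice_hu):
    """Classify each voxel of a CTA slice (nested lists of HU values)
    as lumen if its grey level lies between the two thresholds."""
    return [[LOW_HU <= v <= HIGH_HU for v in row] for row in slice_hu]

# Tiny 2x3 slice: calcification (800 HU) and soft tissue fall outside the window.
mask = lumen_mask([[30, 300, 800], [200, 450, 90]])
```

The resulting mask then initialises the active contour for the outer boundary.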
Rationale and Objective: Due to the limited temporal and spatial resolution, coronary CT angiographic image quality is
not optimal for robust and accurate stenosis quantification or for plaque differentiation and quantification. By combining
the high-resolution IVUS images with CT images, a detailed representation of the coronary arteries can be provided in
the CT images. Methods: The two vessel data sets are matched using three steps. First, vessel segments are matched
using anatomical landmarks. Second, the landmarks are aligned in cross-sectional vessel images. Third, the semi-automatically
detected IVUS lumen contours are matched to the CTA data, using manual interaction and automatic
registration methods. Results: The IVUS-CTA fusion tool facilitates the unique combined view of the high-resolution
IVUS segmentation of the outer vessel wall and lumen-intima transitions on the CT images. The cylindrical projection of
the CMPR image decreases the analysis time by 50 percent. The automatic registration of the cross-vessel views
decreases the analysis time by 85 percent. Conclusions: The fusion of IVUS images and their segmentation results
with coronary CT angiographic images provide a detailed view of the lumen and vessel wall of coronary arteries. The
automatic fusion tool makes such a registration feasible for the development and validation of analysis tools.
Author(s): S. Wörz; H. von Tengg-Kobligk; V. Henninger; D. Böckler; H.-U. Kauczor; K. Rohr
We introduce a new model-based approach for the segmentation and quantification of the aortic arch morphology in 3D CTA images for endovascular aortic repair (EVAR). The approach is based on a 3D analytic intensity model for thick vessels, which is directly fitted to the image. Based on the fitting results we compute the (local) 3D vessel curvature and torsion as well as the relevant lengths not only along the 3D centerline but particularly along the inner and outer contour. These measurements are important for pre-operative planning in EVAR applications. We have successfully applied our approach using ten 3D CTA images and have compared the results with ground truth obtained by a radiologist. It turned out that our approach yields accurate estimation results. We have also performed a comparison with a commercial vascular analysis software.
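Local curvature along a sampled 3D centerline can be estimated discretely from consecutive point triples via the circumscribed-circle relation kappa = 4*Area/(|a||b||c|). A sketch on an illustrative helical centerline (not patient data, and not the paper's model-fitting estimator):

```python
import math

def curvature(p0, p1, p2):
    """Discrete curvature from three consecutive centerline points:
    reciprocal of the circumradius of the triangle they form."""
    a = math.dist(p0, p1); b = math.dist(p1, p2); c = math.dist(p0, p2)
    s = (a + b + c) / 2.0                       # Heron's formula for the area
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return 4.0 * area / (a * b * c)

# Illustrative helix-like centerline; analytic curvature is ~0.94.
centerline = [(math.cos(t / 5.0), math.sin(t / 5.0), 0.05 * t) for t in range(20)]
kappas = [curvature(*centerline[i:i + 3]) for i in range(len(centerline) - 2)]
```

Torsion can be estimated analogously from quadruples of points; lengths along inner and outer contours follow from summed chord lengths.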
New advances in catheter technology and remote actuation for minimally invasive procedures are continuously
increasing the demand for better x-ray imaging technology. The new x-ray high-sensitivity Micro-Angiographic
Fluoroscope (HS-MAF) detector offers high resolution and real-time image-guided capabilities which are unique when
compared with commercially available detectors. This detector consists of a 300 μm CsI input phosphor coupled to a
dual stage GEN2 micro-channel plate light image intensifier (LII), followed by a minifying fiber-optic taper coupled to a
CCD chip. The HS-MAF detector image array is 1024×1024 pixels with 12-bit depth, capable of imaging at 30 frames
per second. The detector has a round field of view of 4 cm diameter and 35 μm pixels. The LII has a large
variable gain which allows usage of the detector at very low exposures characteristic of fluoroscopic ranges while
maintaining very good image quality. The custom acquisition program allows real-time image display and data storage.
We designed a set of in-vivo experimental interventions in which placement of specially designed endovascular stents
were evaluated with the new detector and with a standard x-ray image intensifier (XII). Capabilities such as fluoroscopy,
angiography and ROI-CT reconstruction using rotational angiography data were implemented and verified. The images
obtained during interventions under radiographic control with the HS-MAF detector were superior to those with the XII.
In general, the device feature markers, the device structures, and the vessel geometry were better identified with the new
detector. High-resolution detectors such as HS-MAF can vastly improve the accuracy of localization and tracking of
devices such as stents or catheters.
For optimal diagnosis and treatment of lesions at coronary artery bifurcations using x-ray angiography, it is of utmost
importance to determine proper angiographic viewing angles. Due to the increasing use of CTA as a first line diagnostic
tool, 3D CTA data is more frequently available before x-ray angiographic procedures take place. Motivated by this, we
propose to use available CTA data for the determination of patient specific optimal x-ray viewing angles.
A semi-automatic iterative region growing scheme is developed for the segmentation of the coronary arterial tree. From
the segmented arterial tree, a complete hierarchical surface and centerline representation, including bifurcation points, is
automatically obtained. The optimal viewing angle for a selected bifurcation is determined as the view rendering the
least amount of foreshortening and vessel overlap.
For 83 bifurcation areas, viewing angles were automatically determined. The sensitivity of the method to patient
positioning in the x-ray system was also studied. Next, the automatically determined angles were both quantitatively and
qualitatively compared with the angles determined by two experts.
The method was found not to be sensitive to the positioning of the patient in the angiographic x-ray system. In 95% of
the cases our method produced a clinically usable view (mean score of 8.4 out of 10) as compared to 98% for the experts
(mean score of 8.7). Our method produced angiographic views with significantly less foreshortening (mean difference of
10 percentage points) than the angiographic views set by the experts.
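The foreshortening criterion can be sketched simply: a vessel segment's projected length scales with the sine of the angle between the segment and the viewing ray, so the best view minimises the magnitude of the cosine. The sketch below scans candidate angles for a single segment (illustrative directions; the paper also accounts for vessel overlap):

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def foreshortening(vessel_dir, view_dir):
    """|cos(theta)| between vessel and viewing ray: 0 means the vessel lies
    in the imaging plane and is projected at full length."""
    v, d = unit(vessel_dir), unit(view_dir)
    return abs(sum(a * b for a, b in zip(v, d)))

vessel = (1.0, 0.2, 0.1)                 # direction of the bifurcation segment
candidates = [(math.cos(math.radians(a)), 0.0, math.sin(math.radians(a)))
              for a in range(0, 180, 5)] # candidate gantry directions
best = min(candidates, key=lambda d: foreshortening(vessel, d))
```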
Vertebroplasty is a minimally invasive procedure in which bone cement is pumped into a fractured vertebral
body that has been weakened by osteoporosis, long-term steroid use, or cancer. In this therapy, a trocar (large
bore hollow needle) is inserted through the pedicle of the vertebral body, which is a narrow passage and requires
great skill on the part of the physician to avoid going outside of the pathway. In clinical practice, this procedure
is typically done using 2D X-ray fluoroscopy. To investigate the feasibility of providing 3D image guidance, we
developed an image-guided system based on electromagnetic tracking and our open source software platform
the Image-Guided Surgery Toolkit (IGSTK). The system includes path planning, interactive 3D navigation, and
dynamic referencing. This paper will describe the system and our initial evaluation.
The current standard of care for patients with spinal disorders involves a thorough clinical history, physical exam, and
imaging studies. Simple radiographs provide a valuable assessment but prove inadequate for surgery planning because
of the complex 3-dimensional anatomy of the spinal column and the close proximity of the neural elements, large blood
vessels, and viscera. Currently, clinicians still use primitive techniques such as paper cutouts, pencils, and markers in an
attempt to analyze and plan surgical procedures. 3D imaging studies are routinely ordered prior to spine surgeries but
are currently limited to generating simple linear and angular measurements from 2D views orthogonal to the central axis
of the patient. Complex spinal corrections require more accurate and precise calculation of 3D parameters such as
oblique lengths, angles, levers, and pivot points within individual vertebrae. We have developed a clinician-friendly spine
surgery planning tool which incorporates rapid oblique reformatting of each individual vertebra, followed by interactive
templating for 3D placement of implants. The template placement is guided by the simultaneous representation of
multiple 2D section views from reformatted orthogonal views and a 3D rendering of individual or multiple vertebrae
enabling superimposition of virtual implants. These tools run efficiently on desktop PCs typically found in clinician
offices or workrooms. A preliminary study conducted with Mayo Clinic spine surgeons using several actual cases
suggests significantly improved accuracy of pre-operative measurements and implant localization, which is expected to
increase spinal procedure efficiency and safety, and reduce time and cost of the operation.
In this paper, we propose a new method for 2D/3D registration and report its experimental results. The method employs
the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm to search for an optimal transformation that
aligns the 2D and 3D data. The similarity calculation is based on Digitally Reconstructed Radiographs (DRRs), which
are dynamically generated from the 3D data using a hardware-accelerated technique - Adaptive Slice Geometry Texture
Mapping (ASGTM). Three bone phantoms of different sizes and shapes were used to test our method: a long femur, a
large pelvis, and a small scaphoid. A collection of experiments were performed to register CT to fluoroscope and DRRs
of these phantoms using the proposed method and two prior methods, i.e., our previously proposed Unscented Kalman Filter
(UKF) based method and a commonly used simplex-based method. The experimental results showed that: 1) with
slightly more computation overhead, the proposed method was significantly more robust to local minima than the
simplex-based method; 2) while as robust as the UKF-based method in terms of capture range, the new method was not
sensitive to the initial values of its exposed control parameters and imposed no special requirements on the cost
function; 3) the proposed method was fast and consistently achieved the best accuracy among all compared methods.
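CMA-ES itself involves full covariance adaptation; as a deliberately simplified stand-in conveying the flavour of evolution-strategy search over transformation parameters, the sketch below runs a (1+1)-ES with the 1/5th-success step-size rule on a toy two-parameter alignment cost (the cost stands in for the DRR-based similarity; all values are illustrative):

```python
import random

random.seed(0)  # deterministic toy run

def cost(p):
    # Toy misalignment cost: distance from the "true" transform (3.0, -2.0).
    return (p[0] - 3.0) ** 2 + (p[1] + 2.0) ** 2

parent, sigma = [0.0, 0.0], 1.0          # initial parameters and step size
for _ in range(400):
    child = [x + random.gauss(0.0, sigma) for x in parent]
    if cost(child) < cost(parent):
        parent, sigma = child, sigma * 1.1   # success: widen the search
    else:
        sigma *= 0.98                        # failure: narrow the search
```

In the actual method, each cost evaluation renders a DRR from the CT via ASGTM and compares it against the fluoroscopic image.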
Knowledge of the acetabular rim and surface can be invaluable for hip surgery planning and dysplasia evaluation. The acetabular rim can also be used as a landmark for registration purposes. At the present time acetabular features are mostly extracted manually at great cost of time and human labor. Using a recent level set algorithm that can evolve on the surface of a 3D object represented by a triangular mesh we automatically extracted rims and surfaces of acetabulae.
The level set is guided by curvature features on the mesh. It can segment portions of a surface that are bounded by a line of extremal curvature (ridgeline or crestline). The rim of the acetabulum is such an extremal curvature line. Our material consists of eight hemi-pelvis surfaces. The algorithm is initiated by putting a small circle (level set seed) at the center of the acetabular surface. Because this surface distinctively has the form of a cup we were able to use the Shape Index feature to automatically extract an approximate center. The circle then expands and deforms so as to take the shape of the acetabular rim. The results were visually inspected. Only minor errors were detected. The algorithm also proved to be robust. Seed placement was satisfactory for the eight hemi-pelvis surfaces without changing any parameters. For the level set evolution we were able to use a single set of parameters for seven out of eight surfaces.
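The Shape Index used to locate the cup-like acetabular surface is computed from the principal curvatures; values near -1 indicate a cup, near +1 a cap. A sketch with illustrative curvature values (assuming the common Koenderink-style convention; the paper's sign convention may differ):

```python
import math

def shape_index(k1, k2):
    """S = (2/pi) * atan((k1 + k2) / (k1 - k2)) with k1 >= k2.
    S near -1: cup (concave); S near +1: cap (convex)."""
    if k1 == k2:
        return float("nan")      # umbilic point: index undefined
    if k1 < k2:
        k1, k2 = k2, k1
    return (2.0 / math.pi) * math.atan((k1 + k2) / (k1 - k2))

cup_like = shape_index(-0.8, -1.0)   # both curvatures negative: concave cup
cap_like = shape_index(1.0, 0.8)     # both positive: convex cap
```

Thresholding this index over the mesh yields the approximate acetabular center used to place the level set seed.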
Currently available low-cost haptic devices allow inexpensive surgical training with no risk to patients. Major
drawbacks of lower-cost devices include limited maximum feedback force and the inability to render occurring
moments (torques). The aim of this work was the design and implementation of a surgical simulator that allows the
evaluation of multi-sensory stimuli in order to overcome these drawbacks.
The simulator was built following a modular architecture to allow flexible combinations and thorough evaluation
of different multi-sensory feedback modules. A Kirschner-Wire (K-Wire) tibial fracture fixation procedure
was defined and implemented as a first test scenario. A set of computational metrics has been derived from the
clinical requirements of the task to objectively assess the trainee's performance during simulation.
Sensory feedback modules for haptic and visual feedback have been developed, each in a basic and additionally
in an enhanced form. First tests have shown that specific visual concepts can overcome some of the drawbacks
coming along with low cost haptic devices. The simulator, the metrics and the surgery scenario together represent
an important step towards a better understanding of the perception of multi-sensory feedback in complex surgical
training tasks. Field studies on top of the architecture can open the way to risk-less and inexpensive surgical
simulations that can keep up with traditional surgical training.
The diagnosis of CSF leak using MR images alone is difficult due to the inherently poor bony information on MR
images. While CT images show bones exquisitely, they lack the soft tissue contrast that is important for detecting CSF
leak. For these reasons, CT cisternography has been the preferred modality for CSF leak diagnosis despite its
invasiveness. We propose a method to fuse the CT and MR images to combine the complementary information from
each modality, which we believe will help with the diagnosis and surgical planning for patients with CSF leak, and
potentially reduce/replace the use of CT cisternography. In the first step, the user identifies three roughly corresponding
points on both the CT and MR images. A GUI was designed that allows the user to quickly navigate through the images
by reslicing the volumes interactively. After finding the CT and MR slices at approximately the same anatomical
position, the user places three markers to represent the same spatial location. In the second step, a generalized Procrustes
transform is used to compute an initial transformation that aligns the CT and MR, which is then optimized using mutual
information maximization. The CT is registered with the MR using the optimal transformation found, and the bony
masks determined from thresholding CT intensity are blended with MR images. Initial results suggest that CT/MR
fusion images are superior to unprocessed CT and MR images in diagnosing CSF leak, and a formal clinical evaluation
is being planned to assess the efficacy of fusion images.
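The initialisation step, a Procrustes transform from three user-picked point pairs, has a closed form. The paper works in 3D; as a planar illustration, representing 2D points as complex numbers gives the least-squares similarity transform q = a*p + b directly (point values are illustrative):

```python
# Three user-picked corresponding points in CT and MR coordinates (2D toy).
ct_pts = [complex(0, 0), complex(10, 0), complex(0, 10)]
mr_pts = [complex(5, 5), complex(5, 15), complex(-5, 5)]  # rotated 90 deg, shifted

n = len(ct_pts)
pc = sum(ct_pts) / n                      # centroids
qc = sum(mr_pts) / n

# Closed-form least-squares rotation+scale (a) and translation (b).
a = sum((q - qc) * (p - pc).conjugate() for p, q in zip(ct_pts, mr_pts)) \
    / sum(abs(p - pc) ** 2 for p in ct_pts)
b = qc - a * pc

aligned = [a * p + b for p in ct_pts]     # CT points mapped into MR coordinates
```

This initial alignment is then refined by mutual-information maximization over the image intensities.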
A hybrid X-ray and magnetic resonance imaging system (XMR) has been proposed as interventional guidance for
cardiovascular catheterisation procedures. However, very few hospitals can benefit from the XMR system because of its
limited availability. In this paper we describe a new guidance strategy for cardiovascular catheterisation procedures. In
our technique, intra-operative patient position is estimated by using a chest surface reconstructed from a
photogrammetry system. The chest surface is then registered with the same surface derived from pre-procedure magnetic
resonance (MR) images. The catheterisation procedure can therefore be guided by a roadmap derived from the MR
images. Patients were required to hold their breath at end expiration during MRI acquisition. The surface matching
accuracy is improved by using a robust trimmed iterative closest point (ICP) matching algorithm, which is especially
designed for incomplete surface matching. Compared to the XMR system, the proposed guidance strategy is low cost
and easy to set up. Experimental data were acquired from 6 volunteers and 1 patient. The patient data were collected
during an electrophysiology procedure. In 6 out of 7 subjects, the experimental results show our method is accurate in
terms of reciprocal residual error (range from 1.66 mm to 3.75 mm) and consistent (closed-loop TREs range from 1.49 mm to
3.55 mm). For one subject, trimmed ICP failed to find the optimal transform matrix (residual = 4.89 mm, TRE = 9.32 mm) due to
the poor quality of the photogrammetry-reconstructed surface. Further studies are being carried out in clinical trials.
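The robustness of trimmed ICP against incomplete surfaces comes from discarding the worst correspondences before each update. A single iteration, simplified to a translation-only update in 2D for brevity (the actual method estimates a full rigid transform in 3D; point sets are illustrative):

```python
import math

def trimmed_icp_step(source, target, keep_fraction=0.6):
    """One trimmed-ICP iteration: match each source point to its nearest
    target point, keep only the best fraction of matches (robust to
    missing/outlier surface regions), then update the translation."""
    pairs = []
    for s in source:
        nearest = min(target, key=lambda t: math.dist(s, t))
        pairs.append((math.dist(s, nearest), s, nearest))
    pairs.sort(key=lambda x: x[0])
    kept = pairs[:max(1, int(keep_fraction * len(pairs)))]
    dx = sum(t[0] - s[0] for _, s, t in kept) / len(kept)
    dy = sum(t[1] - s[1] for _, s, t in kept) / len(kept)
    return [(x + dx, y + dy) for x, y in source]

chest_mr = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
photo    = [(0.1, 0.5), (1.1, 0.5), (2.1, 0.5), (30.0, 30.0)]  # one outlier
moved = trimmed_icp_step(chest_mr, photo)
```

Iterating this step to convergence, with the trimming excluding the outlier, recovers the (0.1, 0.5) offset between the surfaces.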
This paper presents the evaluation of the use of multimodality skin markers for the registration of cardiac magnetic
resonance (MR) image data to x-ray fluoroscopy data for the guidance of cardiac electrophysiology procedures. The
approach was validated using a phantom study and 3 patients undergoing pulmonary vein (PV) isolation for the treatment
of paroxysmal atrial fibrillation. In the patient study, skin markers were affixed to the patients' chest and used to register
pre-procedure cardiac MR image data to intra-procedure fluoroscopy data. Registration errors were assessed using
contrast angiograms of the left atrium that were available in 2 out of 3 cases. A clinical expert generated "gold standard"
registrations by adjusting the registration manually. Target registration errors (TREs) were computed using points on the
PV ostia. Ablation locations were computed using biplane x-ray imaging. Registration errors were further assessed by
computing the distances of the ablation points to the registered left atrial surface for all 3 patients. The TREs were 6.0 &
3.1mm for patients 1 & 2. The mean ablation point errors were 6.2, 3.8, & 3.0mm for patients 1, 2, & 3. These results are
encouraging in the context of a 5mm clinical accuracy requirement for this type of procedure. We conclude that
multimodality skin markers have the potential to provide anatomical image integration for x-ray guided cardiac
electrophysiology procedures, especially if coupled with an accurate respiratory motion compensation strategy.
Knowledge of patient-specific cardiac anatomy is required for catheter-based ablation in epicardial ablation
procedures such as ventricular tachycardia (VT) ablation interventions. In particular, knowledge of
critical structures such as the coronary arteries is essential to avoid collateral damage. In such ablation
procedures, ablation catheters are brought in via minimally-invasive subxiphoid access. The catheter is
then steered to ablation target sites on the left ventricle (LV). During the ablation and catheter navigation
it is of vital importance to avoid damage of coronary structures. Contrast-enhanced rotational X-ray
angiography of the coronary arteries delivers a 3D impression of the anatomy during the time of intervention.
Vessel modeling techniques have been shown to be able to deliver accurate 3D anatomical models
of the coronary arteries. To simplify epicardial navigation and ablation, we propose to overlay coronary
arterial models, derived from rotational X-ray angiography and vessel modeling, onto real-time X-ray
fluoroscopy. In a preclinical animal study, we show that overlay of intra-operatively acquired 3D arterial
models onto X-ray helps to place ablation lesions at a safe distance from coronary structures. Example
ablation lesions have been placed based on the model overlay at reasonable distances between key arterial
vessels and on top of marginal branches.
This paper presents a novel method for the generation of a four-chamber surface model from segmented cardiac
MRI. The method has been tested on 3D short-axis cardiac magnetic resonance images with strongly anisotropic
voxels in the long-axis direction. It provides a smooth triangulated surface mesh that closely follows the endocardium
and epicardium. The surface triangles are close-to-regular and their number can be preset. The input
to the method is the segmentation of each of the four cardiac chambers. The same algorithm is independently
used to generate the surface mesh of the epicardium and of the endocardia of the four cardiac chambers. For
each chamber, a sphere that includes the chamber is centered at its barycenter. A triangulated surface mesh
with almost perfectly regular triangles is constructed on the sphere. Then, the Laplace equation is solved over
the region bounded by the segmented chamber surface and the sphere. Finally, each vertex from the triangulated
mesh on the sphere is mapped from the sphere to the chamber surface by following the gradient flow of
the solution of the Laplace equation. The proposed method was compared to the marching cubes algorithm.
The proposed method provides a smooth mesh of the heart chambers despite the strong voxel anisotropy of the
3D images. This is not the case for the marching cubes algorithm, unless the mesh is significantly smoothed.
However, the smoothing of the mesh shrinks it, which makes it a less accurate representation of the chamber
surface. The second advantage is that the mesh triangles are more regular for the proposed method than for the
marching cubes algorithm. Finally, the proposed method allows for a finer control of the number of triangles
than the marching cubes algorithm.
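The mapping step rests on a Laplace solve between the bounding sphere and the chamber surface, with each sphere vertex following the gradient of the solution down to the surface. A minimal illustration of that core: Jacobi relaxation for Laplace's equation on a 1D interval with fixed boundary potentials (1 on the sphere, 0 on the chamber surface); this is a didactic reduction, not the paper's 3D solver:

```python
# Jacobi relaxation for the 1D Laplace equation u'' = 0 on [0, 1]
# with Dirichlet boundary values u(0) = 1 (sphere), u(1) = 0 (surface).
n = 11
u = [0.0] * n
u[0], u[-1] = 1.0, 0.0
for _ in range(2000):
    u = [u[0]] + [(u[i - 1] + u[i + 1]) / 2.0 for i in range(1, n - 1)] + [u[-1]]
# The converged solution is linear; following -grad(u) therefore leads each
# vertex monotonically from the sphere down to the chamber surface.
```

In 3D the same harmonic property guarantees that gradient flow lines never cross, so the near-regular sphere triangulation maps to the chamber surface without folds.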
Author(s): D. A. Herzka; M. S. Kotys; S. Krueger; B. J. Traughber; J. Heroux; A. M. Gharib; J. Ohayon; S. Weiss; R. I. Pettigrew; B. J. Wood
Excitation emission spectroscopy (EES) has been used in the past to characterize many different types of tissue. This
technique uses multiple excitation wavelengths and samples a complete optical spectrum for each, yielding an
excitation-emission matrix (EEM). Upon study of the EEM, it is possible to determine the presence of multiple optical
contrast agents since these dyes can have characteristic spectra that can be separated. Here, we demonstrate EES
specifically designed for use in conjunction with MR. This EES is applied with an in-suite control setup that permits
real-time navigation, utilizing active MR tracking catheters, and provides a platform for MR-guided tissue
characterization. The EES system is used in a demonstration experiment to highlight MR imaging and MR guidance in
conjunction with a catheter-based optical measurement.
Effective minimally invasive treatment of cerebral bifurcation aneurysms is challenging due to the complex and
remote vessel morphology. An evaluation of endovascular treatment in a phantom involving image-guided deployment
of new asymmetric stents consisting of polyurethane patches placed to modify blood flow into the aneurysm is reported.
The 3D lumen-geometry of a patient-specific basilar-artery bifurcation aneurysm was derived from a segmented
computed-tomography dataset. This was used in a stereolithographic rapid-prototyping process to generate a mold
which was then used to create any number of exact wax models. These models in turn were used in a lost-wax technique
to create transparent elastomer patient-specific aneurysm phantoms (PSAP) for evaluating the effectiveness of
asymmetric-stent deployment for flow modification. Flow was studied by recording real-time digitized video images of
optical dye in the PSAP and its feeding vessel. For two asymmetric stent placements: through the basilar into the right-posterior
communicating artery (RPCA) and through the basilar into the left-posterior communicating artery (LPCA),
the greatest deviation of flow streamlines away from the aneurysm occurred for the RPCA stent deployment. Flow was
also substantially affected by variations of inflow angle into the basilar artery, resulting in alterations in washout times
as derived from time-density curves. Evaluation of flow in the PSAPs with real-time optical imaging can be used to
determine the effectiveness of new endovascular image-guided interventions (EIGIs) and to validate computational-fluid-dynamic calculations for EIGI treatments.
Image-guided cardiac ablation has the potential to decrease procedure times and improve clinical outcome for
patients with cardiac arrhythmias. There are several proposed methods for integrating patient-specific
anatomy into the cardiac ablation procedure; however, these methods require thorough validation. One of the
primary challenges in validation is determining ground truth as a standard for comparison. Some validation
protocols have been developed for animal models and even in patients; however, these methods can be costly
to implement and may increase the risk to patients. We have developed an approach to building realistic
patient-specific anatomic models at a low-cost in order to validate the guidance procedure without introducing
additional risk to the patients. Using a pre-procedural cardiac computed tomography scan, the blood pool of
the left and right atria of a patient is segmented semi-manually. In addition, several anatomical landmarks
are identified in the image data. The segmented atria and landmarks are converted into a polygonalized
model which is used to build a thin-walled patient-specific blood pool model in a stereo-lithography system.
Thumbscrews are inserted into the model at the landmarks. The entire model is embedded in a platinum
silicone material which has been shown to have tissue-mimicking properties relative to ultrasound. Once the
pliable mold has set, the blood pool model is extracted by dissolving the rigid material. The resulting
physical model correctly mimics a specific patient's anatomy with embedded fiducials which can be used for
validation experiments. The patient-specific anatomic model approach may also be used for pre-surgical
practice and training of new interventionalists.
Purpose: Brachytherapy (radioactive seed insertion) has emerged as one of the most effective treatment options
for patients with prostate cancer, with the added benefit of a convenient outpatient procedure. The main
limitation in contemporary brachytherapy is faulty seed placement, predominantly due to the presence of intra-operative
edema (tissue expansion). Though currently not available, the capability to intra-operatively monitor
the seed distribution could significantly improve cancer control. We present such a system here.
Methods: Intra-operative measurement of edema in prostate brachytherapy requires localization of inserted
radioactive seeds relative to the prostate. Seeds were reconstructed using a typical non-isocentric C-arm, and
exported to a commercial brachytherapy delivery system. Technical obstacles for 3D reconstruction on a non-isocentric
C-arm include pose-dependent C-arm calibration; distortion correction; pose estimation of C-arm
images; seed reconstruction; and C-arm to TRUS registration.
Results: In precision-machined hard phantoms with 40-100 seeds and soft tissue phantoms with 45-87 seeds,
we correctly reconstructed the seed implant shape with an average 3D precision of 0.35 mm and 0.24 mm,
respectively. In a DoD Phase-1 clinical trial on 6 patients with 48-82 planned seeds, we achieved intra-operative
monitoring of seed distribution and dosimetry, correcting for dose inhomogeneities by inserting an average of
4.17 (1-9) additional seeds. Additionally, in each patient, the system automatically detected intra-operative seed
migration induced by edema (mean 3.84 mm, STD 2.13 mm, Max 16.19 mm).
Conclusions: The proposed system is the first of its kind to make intra-operative detection of edema (and
subsequent re-optimization) possible on any typical non-isocentric C-arm, at negligible additional cost to the
existing clinical installation. It achieves a significantly more homogeneous seed distribution, and has the potential
to effect a paradigm shift in clinical practice. Large-scale studies and commercialization are currently underway.
We compare two optical tracking systems with regard to their suitability for soft tissue navigation with fiducial
needles: The Polaris system with passive markers (Northern Digital Inc. (NDI); Waterloo, Ontario, Canada),
and the MicronTracker 2, model H40 (Claron Technology, Inc.; Toronto, Ontario, Canada). We introduce
appropriate tool designs and assess the tool tip tracking accuracy under typical clinical light conditions in a
sufficiently sized measurement volume. To assess the robustness of the tracking systems, we further evaluate
their sensitivity to illumination conditions as well as to the velocity and the orientation of a tracked tool. While
the Polaris system showed robust tracking accuracy under all conditions, the MicronTracker 2 was highly
sensitive to the examined factors.
Electromagnetic (EM) tracking systems have been successfully used for Surgical Navigation in ENT, cranial, and spine
applications for several years. Catheter sized micro EM sensors have also been used in tightly controlled cardiac
mapping and pulmonary applications. EM systems have the benefit over optical navigation systems of not requiring a
line-of-sight between devices. Ferrous metals or conductive materials that are transient within the EM working volume
may impact tracking performance. Effective methods for detecting and reporting EM field distortions are generally well
known. Distortion compensation can be achieved for objects that have a static spatial relationship to a tracking sensor.
New commercially available micro EM tracking systems offer opportunities for expanded image-guided navigation
procedures. It is important to know and understand how well these systems perform with different surgical tables and
ancillary equipment. By their design and intended use, micro EM sensors will be located at the distal tip of tracked
devices and therefore be in closer proximity to the tables.
Our goal was to define a simple and portable process that could be used to estimate the EM tracker accuracy, and to
vet a large number of popular general surgery and imaging tables that are used in the United States and abroad.
3D ultrasound (US) is a new technology that can be used for a variety of diagnostic applications, such as obstetrical,
vascular, and urological imaging, and has shown great potential in applications of image-guided surgery
and therapy. Uterine adenoma and uterine bleeding are the two most prevalent diseases in Chinese women, and a
minimally invasive ablation system using a needle-like RF button electrode is widely used to destroy tumor cells or stop
bleeding. To avoid accidents or death of the patient caused by inaccurate localization of the electrode and the tumor
position during treatment, a 3D US guidance system was developed. In this paper, a new automated technique, the 3D
Improved Hough Transform (3DIHT) algorithm, is presented, which provides fast, accurate, and robust needle
segmentation in 3D US images for imaging guidance. Based on a coarse-fine search strategy and a four-parameter
representation of lines in 3D space, the 3DIHT algorithm can segment needles quickly, accurately, and robustly.
The technique was evaluated using the 3D US images acquired by scanning a water phantom. The segmentation position
deviation of the line was less than 2 mm and the angular deviation was much less than 2°. The average computational
time measured on a Pentium IV 2.80 GHz PC with a 381×381×250 image was less than 2 s.
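One common way to realize the four-parameter representation of 3D lines mentioned above is a direction given by two spherical angles (θ, φ) plus a 2D anchor (u, v) in the plane through the origin perpendicular to that direction; Hough voting then accumulates evidence in (θ, φ, u, v) cells. The sketch below illustrates that parameterization and the point-to-line distance a voxel would vote with. It is a generic construction, not necessarily the exact one used by the 3DIHT authors:

```python
import numpy as np

def line_from_params(theta, phi, u, v):
    """Illustrative 4-parameter 3D line: direction from spherical angles
    (theta, phi) plus a 2D anchor (u, v) in the plane through the origin
    perpendicular to that direction."""
    d = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])                 # unit direction vector
    # two orthonormal vectors spanning the plane perpendicular to d
    a = np.array([np.cos(theta) * np.cos(phi),
                  np.cos(theta) * np.sin(phi),
                  -np.sin(theta)])
    b = np.cross(d, a)
    p0 = u * a + v * b                            # anchor point on the line
    return p0, d

def point_line_distance(p, p0, d):
    """Perpendicular distance from point p to the line (p0, d); in a Hough
    scheme, voxels closer than a threshold vote for this parameter cell."""
    w = np.asarray(p, float) - p0
    return float(np.linalg.norm(w - np.dot(w, d) * d))
```

With θ = 0 the line runs along the z-axis through (u, v, 0), so the distance of any point to it reduces to its in-plane offset.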
Lung cancer is the cause of more than 150,000 deaths annually in the United States. Early and accurate detection of lung
tumors with Positron Emission Tomography has enhanced lung tumor diagnosis. However, respiratory motion during the
imaging period of PET results in the reduction of accuracy of detection due to blurring of the images. Chest motion can
serve as a surrogate for tracking the motion of the tumor. For tracking chest motion, an optical laser system was designed
which tracks the motion of a patterned card placed on the chest by illuminating the pattern with two structured light
sources, generating 8 positional markers. The position of markers is used to determine the vertical, translational, and
rotational motion of the card. Information from the markers is used to decide whether the patient's breath is abnormal
compared to their normal breathing pattern. The system is developed with an inexpensive web-camera and two low-cost
laser pointers. The experiments were carried out using a dynamic phantom developed in-house, to simulate chest
movement with different amplitudes and breathing periods. Motion of the phantom was tracked by the system developed
and also by a pressure transducer for comparison. The studies showed a correlation of 96.6% between the respiratory
tracking waveforms by the two systems, demonstrating the capability of the system. Unlike the pressure transducer
method, the new system tracks motion in 3 dimensions. The developed system also demonstrates the ability to track a
sliding motion of the patient in the direction parallel to the bed and provides the potential to stop the PET scan in case of
The purpose of this study was to examine the effects of different sensor orientations on the positional accuracy of an AC
electromagnetic tracking system, the second generation NDI Aurora, within a CT scanner environment. A three-axis
positioning robot was used to move three electromagnetically tracked needles above the CT table throughout a 30cm by
30cm by 30cm volume sampled in 2.5cm steps. All three needle tips were held within 2mm of each other, with the
needle axes orthogonally located in the +x, +y, and +z directions of the Aurora coordinate system. The corresponding
position data was captured from the Aurora for each needle and was registered to the positioning system data using a
rigid body transformation minimizing the least squares L2-norm. For all three needle orientations the largest errors were
observed farthest from the field generator and closest to the CT table. However, the 3D distortion error patterns were
different for each needle, demonstrating that the sensor orientation has an effect on the positional measurement of the
sensor. This suggests that the effectiveness of using arrays of reference sensors to model and correct for metal distortions
may depend strongly on the orientation of the reference sensors in relation to the orientation of the tracked device. In an
ideal situation, the reference sensors should be oriented in the same direction as the tracked needle.
There is an increasing interest in using MR imaging as a means of guiding endovascular procedures due to MR's
unparalleled soft tissue characterization capabilities and its ability to assess functional parameters such as blood flow and
tissue perfusion. In order to evaluate the potential safety risk of catheter heating, we performed in vitro testing where we
measured heat deposition in sample non-ferrous 5F catheters ranging in length from 80cm - 110cm within a gel
phantom. To identify the conditions for maximum heat deposition adjacent to catheters, we measured (1) the effect of
variable immersed lengths, (2) the effect of variable SAR, and (3) whether heating varied along the catheter shaft. Net
temperature rise per scan and initial rate of temperature rise were determined for all configurations. The temperature
recordings clearly and consistently demonstrated the correlations between MR scanning under the three variable
conditions and heat deposition. Our overall maximum heating condition, which combined the maximum heating
conditions of all three variables, was modest (<2°C/min), but well above the temperature response of the gel well away
from the catheter. Reduced SAR acquisitions effectively limited these temperature rises, and RF exposure levels of
0.2W/kg produced little detectable temperature change over the 2 minute MR acquisitions studied here. A combination
of SAR limits and imaging duty cycle restrictions appear to be sufficient to permit MR imaging in catheterized patients
without concern for thermal injury.
Prostate biopsy procedures are generally limited to 2D transrectal ultrasound (TRUS) imaging for biopsy needle
guidance. This limitation results in needle position ambiguity and an insufficient record of biopsy core locations in cases
of prostate re-biopsy. We have developed a multi-jointed mechanical device that supports a commercially available
TRUS probe with an integrated needle guide for precision prostate biopsy. The device is fixed at the base, allowing the
joints to be manually manipulated while fully supporting its weight throughout its full range of motion. Means are
provided to track the needle trajectory and display this trajectory on a corresponding TRUS image. This allows the
physician to aim the needle-guide at predefined targets within the prostate, providing true 3D navigation. The tracker has
been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe to
generate 3D images. The tracker reduces the variability associated with conventional hand-held probes, while preserving
user familiarity and procedural workflow. In a prostate phantom, biopsy needles were guided to within 2 mm of their
targets, and the 3D location of the biopsy core was accurate to within 3 mm. The 3D navigation system is validated in
the presence of prostate motion in a preliminary patient study.
We have developed an image-guided navigation system using electromagnetically-tracked tools, with potential
applications for abdominal procedures such as biopsies, radiofrequency ablations, and radioactive seed placements. We
present the results of two phantom studies using our navigation system in a clinical environment. In the first study, a
physician and medical resident performed a total of 18 targeting passes in the abdomen of an anthropomorphic phantom
based solely upon image guidance. The distance between the target and the needle tip location was measured on
confirmatory scans, which gave an average of 3.56 mm. In the second study, three foam nodules were placed at different
depths in a gelatin phantom. Ten targeting passes were attempted in each of the three depths. Final distances between the
target and needle tip were measured which gave an average of 3.00 mm. In addition to these targeting studies, we discuss
our refinement to the standard four-quadrant image-guided navigation user interface, based on clinician preferences. We
believe these refinements increase the usability of our system while decreasing targeting error.
The goal of this project is to develop a robotic system to assist the physician in minimally invasive ultrasound
interventions. In current practice, the physician must manually hold the ultrasound probe in one hand and manipulate the
needle with the other hand, which can be challenging, particularly when trying to target small lesions. To assist the
physician, the robot should not only be capable of providing the spatial movement needed, but also be able to control the
contact force between the ultrasound probe and patient. To meet these requirements, we are developing a prototype
system based on a six degree of freedom parallel robot. The system will provide high bandwidth, precision motion, and
force control. In this paper we report on our progress to date, including the development of a PC-based control system
and the results of our initial experiments.
Dental implantation is one of the most popular methods of tooth root replacement used in prosthetic dentistry. A
computerized navigation system based on a pre-surgical plan is offered to minimize the potential risk of damage to
critical anatomic structures of patients. Dental tool tip calibration is an important intraoperative procedure that
determines the relation between the hand-piece tool tip and the hand-piece's markers. When transferring coordinates
from preoperative CT data to physical space, this relation is one component of the typical registration problem, and it is
part of a navigation system to be developed for further integration. High accuracy is required, and the relation is
obtained by point-cloud-to-point-cloud rigid transformations, using singular value decomposition (SVD) to minimize
rigid registration errors. In earlier studies, commercial surgical navigation systems, such as those from BrainLAB and
Materialise, had flexibility problems with tool tip calibration: their systems either require a special tool tip calibration
device or cannot accommodate a different tool. The proposed procedure uses the pointing device or hand-piece to touch
a pivot point; the transformation matrix is calculated each time the device moves to a new position while the tool tip
stays at the same point. The experiment draws on tracking-device information, image acquisition, and image processing
algorithms. The key result is that the point-cloud-to-point-cloud approach requires only 3 poses of the tool to converge
to a minimum error of 0.77%, and the obtained result correctly tracks, with the tool holder, the path of a simulation line
displayed in graphic animation.
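The SVD-based point-cloud-to-point-cloud rigid registration mentioned above is classically solved by centering both clouds and taking the SVD of their cross-covariance (Arun et al.'s method). The following is a generic sketch of that step under assumed array shapes, not the authors' exact pipeline:

```python
import numpy as np

def rigid_svd(P, Q):
    """Least-squares rigid transform (R, t) mapping point cloud P onto Q,
    computed via SVD of the cross-covariance matrix. P and Q are (N, 3)
    arrays of corresponding points (assumed shapes for this sketch)."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)      # centroids
    H = (P - cp).T @ (Q - cq)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cq - R @ cp
    return R, t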
This paper presents a real-time, freehand ultrasound (US) calibration system, with automatic accuracy control
and incorporation of US section thickness. Intended for operating-room usage, the system featured a fully
automated calibration method that requires minimal human interaction, and an automatic accuracy control
mechanism based on a set of ground-truth data. We have also developed a technique to quantitatively evaluate
and incorporate US section thickness to improve the calibration precision. The experimental results demonstrated
that the calibration system was able to consistently and robustly achieve high calibration accuracy with real-time
performance and efficiency. Further, our preliminary results incorporating the elevation beam profile
demonstrated a promising reduction of uncertainty in estimating elevation-related parameters.
When choosing an Electromagnetic Tracking System (EMTS) for image-guided procedures, it is desirable for the
system to be usable for different procedures and environments. Several factors influence this choice. To date,
the only factors that have been studied extensively are the accuracy and the susceptibility of electromagnetic
tracking systems to distortions caused by ferromagnetic materials. In this paper we provide a holistic overview
of the factors that should be taken into account when choosing an EMTS. These factors include: the system's
refresh rate, the number of sensors that need to be tracked, the size of the navigated region, the system's interaction
with the environment, whether the sensors can be embedded into the tools and provide the desired transformation
data, and tracking accuracy and robustness. We evaluate the Aurora EMTS (Northern Digital Inc., Waterloo,
Ontario, Canada) and the 3D Guidance EMTS with the flat-panel and the short-range field generators (Ascension
Technology Corp., Burlington, Vermont, USA) in three clinical environments. We show that these systems are
applicable to specific procedures or in specific environments, but that no single system is currently optimal for
all environments and procedures we evaluated.
The aim of this study is defined, visually based, camera-controlled bone removal by a navigated CO2 laser on the
promontory of the inner ear. A precise and minimally traumatic opening procedure of the cochlea for the implantation of
a cochlear implant electrode (a so-called cochleostomy) is intended. Harming the membrane linings of the inner ear can
result in damage to remaining organ function (e.g., complete deafness or vertigo). Precise tissue removal by a laser-based
bone ablation system is investigated. Inside the borehole the pulsed laser beam is guided automatically over the
bone by using a two mirror galvanometric scanner. The ablation process is controlled by visual servoing. For the
detection of the boundary layers of the inner ear the ablation area is monitored by a color camera. The acquired pictures
are analyzed by image processing. The results of this analysis are used to control the process of laser ablation. This
publication describes the complete system including image processing algorithms and the concept for the resulting
distribution of single laser pulses. The system has been tested on human cochleae in ex-vivo studies. Further
developments could lead to safe intraoperative openings of the cochlea by a robot-based surgical laser instrument.
Performing regular mammographic screening and comparing corresponding mammograms taken from multiple
views or at different times are necessary for early detection and treatment evaluation of breast cancer, which is
key to successful treatment. However, mammograms taken at different times are often obtained under different
compression, orientation, or body position. A temporal pair of mammograms may vary significantly due to the
spatial disparities caused by the variety in acquisition environments, including 3D position of the breast, the
amount of pressure applied, etc. Such disparities can be corrected through the process of temporal registration.
We propose to use a 3D finite element model for temporal registration of digital mammography. In this paper,
we apply patient specific 3D breast model constructed from MRI data of the patient, for cases where lesions are
detectable in multiple mammographic views across time. The 3D location of the lesion in the breast model is
computed through a breast deformation simulation step presented in our earlier work. Lesion correspondence
is established by using a nearest neighbor approach in the uncompressed breast volume. Our experiments show
that the use of a 3D finite element model for simulating and analyzing breast deformation contributes to good
accuracy when matching suspicious regions in temporal mammograms.
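The nearest-neighbor correspondence step described above can be sketched as follows, assuming the lesion centroids have already been mapped into the common uncompressed breast volume by the deformation simulation (function and input names are illustrative):

```python
import numpy as np

def match_lesions(lesions_a, lesions_b):
    """For each lesion centroid in lesions_a, return the index of its
    nearest neighbor in lesions_b (both (N, 3) arrays of 3D positions
    in the shared uncompressed volume)."""
    A, B = np.asarray(lesions_a, float), np.asarray(lesions_b, float)
    # pairwise Euclidean distances, then the closest B-lesion for each A-lesion
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.argmin(axis=1)
```

For larger lesion sets a KD-tree would replace the brute-force distance matrix, but the principle is the same.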
From epidemiological studies, it has been shown that 0.2% of men and 0.1% of women suffer from a degree of
atrioventricular (AV) block. In recent years, the palliative treatment for third degree AV block has included Cardiac
Resynchronization Therapy (CRT). It was found that patients show more clinical improvement in the long term with
CRT compared with single chamber devices. Still, an important group of patients does not improve their hemodynamic
function as much as could be expected. A better understanding of the basis for optimizing the device settings (among
which the VV delay) will help to increase the number of responders. In this work, a finite element model of the left and
right ventricles was generated using an atlas-based approach for their segmentation, which includes fiber orientation. The
electrical activity was simulated with the electrophysiological solver CARP, using the Ten Tusscher et al. ionic model
for the myocardium, and the DiFrancesco-Noble model for Purkinje fibers. The model is representative of a patient without
dilated or ischemic cardiomyopathy. The simulation results were analyzed for total activation times and latest activated
regions at different VV delays and pre-activations (RV pre-activated, LV pre-activated). To optimize the solution,
simulations are compared against the His-Purkinje network activation (normal physiological conduction), and
interventricular septum activation (as collision point for the two wave fronts). The results were analyzed using Pearson's
coefficient of correlation for point-to-point comparisons between simulation cases. The results of this study help
provide insight into the VV delay, how its adjustment might influence response to CRT, and how it can be used to
optimize the treatment.
Intraoperative ultrasound (iUS) has emerged as a practical neuronavigational tool for brain shift compensation in image-guided
tumor resection surgeries. The use of iUS is optimized when coregistered with preoperative magnetic resonance
images (pMR) of the patient's head. However, the fiducial-based registration alone does not necessarily optimize the
alignment of internal anatomical structures deep in the brain (e.g., tumor) between iUS and pMR. In this paper, we
investigated and evaluated an image-based re-registration scheme to maximize the normalized mutual information (nMI)
between iUS and pMR to improve tumor boundary alignment using the fiducial registration as a starting point for
optimization. We show that this scheme significantly (p<<0.001) reduces tumor boundary misalignment pre-durotomy.
The same technique was employed to measure tumor displacement post-durotomy, and the locally measured tumor
displacement was assimilated into a biomechanical model to estimate whole-brain deformation. Our results demonstrate
that the nMI re-registration pre-durotomy is critical for obtaining accurate measurement of tumor displacement, which
significantly improved model response at the craniotomy when compared with stereopsis data acquired independently
from the tumor registration. This automatic and computationally efficient (<2min) re-registration technique is feasible
for routine clinical use in the operating room (OR).
The aim of this work is to provide a simulation framework for generation of synthetic tomosynthesis images to
be used for evaluation of future developments in the field of tomosynthesis. An anthropomorphic software tissue
phantom was previously used in a number of applications for evaluation of acquisition modalities and image
post-processing algorithms for mammograms. This software phantom has been extended for similar use with
tomosynthesis. The new features of the simulation framework include a finite element deformation model to
obtain realistic mammographic deformation and projection simulation for a variety of tomosynthesis geometries.
The resulting projections are provided in DICOM format to be applicable for clinically applied reconstruction
algorithms. Examples of simulations using parameters of a currently applied clinical setup are presented. The
overall simulation model is generic, allowing multiple degrees of freedom to cover anatomical variety in the amount
of glandular tissue, degrees of compression, material models for breast tissues, and tomosynthesis geometries.
This paper contributes to modeling, simulation and visualization of peripheral nerve cords. Until now, only
sparse datasets of nerve cords have been available. In addition, this data has not yet been used in simulators, because
it is only static. To build up a more flexible anatomical structure of peripheral nerve cords, we propose a
hierarchical tree data structure where each node represents a nerve branch. The shape of the nerve segments
itself is approximated by spline curves. Interactive modeling allows for the creation and editing of control
points which are used for branching nerve sections, calculating spline curves and editing spline representations
via cross sections. Furthermore, the control points can be attached to different anatomic structures. Through
this approach, nerve cords deform in accordance to the movement of the connected structures, e.g., muscles or
bones. As a result, we have developed an intuitive modeling system that runs on desktop computers and in
immersive environments. It allows anatomical experts to create movable peripheral nerve cords for articulated
virtual humanoids. Direct feedback of changes induced by movement or deformation is achieved by visualization
in real-time. The techniques and the resulting data are already used for medical simulators.
We propose a fast stereo matching algorithm for 3D reconstruction of internal organs using a stereoscopic laparoscope.
Stoyanov et al. have proposed a technique for recovering the 3D depth of internal organs from images taken by a stereoscopic laparoscope. In their technique, the dense stereo correspondence is solved by registration of the entire image. However, the computational cost is very high because registration of the entire image requires multidimensional optimization. In this paper, we propose a new algorithm based on a local-area registration method that requires only low-dimensional optimization, reducing the computational cost. We evaluated the computational cost of the proposed algorithm using a stereoscopic laparoscope. We also evaluated the accuracy of the proposed algorithm using three types of abdominal-model images measured by a 3D laser scanner. In the matching step, the size of the template used to calculate the correlation coefficient, on which the computational cost strongly depends, was reduced by a factor of 16 compared with the conventional algorithm. Meanwhile, the average depth errors were 4.68 mm, 7.18 mm, and 7.44 mm, respectively, so the accuracy was approximately the same as that of the conventional algorithm.
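The correlation coefficient driving the template-matching cost above is typically the zero-mean normalized cross-correlation between a candidate patch and the template; a minimal sketch of that score (a generic formulation, not the authors' implementation):

```python
import numpy as np

def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equally sized
    image patches; 1.0 for identical patterns, -1.0 for inverted ones.
    Template size is what drives the per-match cost."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

Because the score is invariant to additive and multiplicative intensity changes, it tolerates the illumination differences between the two laparoscope views.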
In the past years different models have been formulated to explain the growth of gliomas in the brain. The most
accepted model is based on a reaction-diffusion equation that describes the growth of the tumor as two separate
components: a proliferative component and an invasive component. While many improvements have been made to this
basic model, the work exploring the factors that naturally inhibit growth is insufficient. It is known that stress fields
affect the growth of normal tissue. Due to the rigid skull surrounding the brain, mechanical stress might be an important
factor in inhibiting the growth of gliomas. A realistic model of glioma growth would have to take that inhibitory effect
into account. In this work a mathematical model based on the reaction-diffusion equation was used to describe tumor
growth, and the effect of mechanical stresses caused by the mass effect of tumor cells was studied. An initial tumor cell
concentration with a Gaussian distribution was assumed and tumor growth was simulated for two cases: one where
growth was solely governed by the reaction-diffusion equation and another where mechanical stress inhibits growth by
affecting the diffusivity. All the simulations were performed using the finite difference method. The results of
simulations show that the proposed mechanism of inhibition could have a significant effect on tumor growth predictions.
This could have implications for varied applications in the imaging field that use growth models, such as registration and
model updated surgery.
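The reaction-diffusion growth with a finite difference scheme can be sketched in 1D as below; all parameter values are illustrative, and stress inhibition would enter by locally reducing the diffusivity D, as the abstract describes:

```python
import numpy as np

def grow(c, D, rho, dx, dt, steps):
    """Explicit finite-difference integration of the 1D reaction-diffusion
    glioma model dc/dt = D * d2c/dx2 + rho * c * (1 - c), with zero-flux
    boundaries. c is the normalized tumor-cell concentration."""
    c = c.copy()
    for _ in range(steps):
        lap = np.zeros_like(c)
        lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # discrete Laplacian
        c += dt * (D * lap + rho * c * (1.0 - c))            # diffusion + logistic growth
        c[0], c[-1] = c[1], c[-2]                            # zero-flux boundaries
    return c

# Gaussian initial tumor-cell concentration, as assumed in the simulations above
x = np.linspace(-5, 5, 101)
c0 = 0.5 * np.exp(-x**2)
c1 = grow(c0, D=0.1, rho=0.05, dx=0.1, dt=0.01, steps=200)
```

The explicit scheme is stable here because D*dt/dx² = 0.1 is well below the usual 0.5 limit; the total cell burden grows (logistic term) while diffusion flattens and widens the Gaussian profile.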
Despite steadily increasing indications, regional anesthesia is still trained directly on the patient. To develop
a virtual reality (VR)-based simulation, a patient model is needed containing several tissues, which have to
be extracted from individual magnetic resonance imaging (MRI) volume datasets. Due to the given modality
and the different characteristics of the single tissues, an adequate segmentation can only be achieved by using
a combination of segmentation algorithms. In this paper, we present a framework for creating an individual
model from MRI scans of the patient. Our work splits into two parts. At first, an easy-to-use and extensible tool
for handling the segmentation task on arbitrary datasets is provided. The key idea is to let the user create a
segmentation for the given subject by running different processing steps in a purposive order and store them in a
segmentation script for reuse on new datasets. For data handling and visualization, we utilize the Medical Imaging
Interaction Toolkit (MITK), which is based on the Visualization Toolkit (VTK) and the Insight Segmentation
and Registration Toolkit (ITK). The second part is to find suitable segmentation algorithms and respectively
parameters for differentiating the tissues required by the RA simulation. For this purpose, a fuzzy c-means
clustering algorithm combined with mathematical morphology operators and a geometric active contour-based
approach is chosen. The segmentation process itself aims at operating with minimal user interaction, and the
gained model fits the requirements of the simulation. First results are shown for both male and female MRI of
Motion artifacts have always been an undesired effect in the field of medical imaging. Thus new technologies are
being investigated to ameliorate the damaging effects of image blurring caused by motion. The development of these
new technologies requires the use of phantoms as a means of precise, repeatable and controllable source of motion for
testing initial algorithms and prototypes. The objective of this project was to design a dynamic lung tumor phantom
coupled with chest motion. The phantom consists of a pair of linear actuators. The complete design, excluding the
actuators was built in house out of acrylic materials with low attenuation factors, making it ideal for PET studies. The
linear actuator is a stepper motor coupled to a lead screw which translates rotational motion into linear displacement at a
rate of 0.0254 mm/step. The system is driven by a PIC microcontroller that allows the user to select different tumor
motion parameters, and is capable of performing 3D motion. The phantom is capable of providing lung tumor and chest
position with an accuracy of 1.3 μm in the axis of motion, with a displacement of up to 52 mm and maximum velocity
of 21.59 mm/second. The design has proven to be suitable for simulating lung tumor motion in PET studies, as well as
testing motion tracking algorithms. However, it can also be used in studies dealing with gated radiotherapy.
Deep brain structures are frequently used as targets in neurosurgical procedures. However, the boundaries of these
structures are often not visible in clinically used MR and CT images. Techniques based on anatomical atlases and
indirect targeting are used to infer the location of these targets intraoperatively. The initial errors of such approaches can reach a few millimeters, which is not negligible: the subthalamic nucleus is approximately 4x6 mm in the axial plane and the globus pallidus internus is approximately 8 mm in diameter, and both are used as targets in deep brain stimulation surgery. To increase the initial localization accuracy of deep brain structures, we have developed an atlas-based
segmentation method that can be used for the surgery planning. The atlas is a high resolution MR head scan of a
healthy volunteer with nine deep brain structures manually segmented. The quality of the atlas image allowed for the
segmentation of the deep brain structures, which is not possible from the clinical MR head scans of patients. The subject
image is non-rigidly registered to the atlas image using thin plate splines to represent the transformation and normalized
mutual information as a similarity measure. The obtained transformation is used to map the segmented structures from
the atlas to the subject image. We tested the approach on five subjects. The quality of the atlas-based segmentation was
evaluated by visual inspection of the third and lateral ventricles, putamina, and caudate nuclei, which are visible in the
subject MR images. The agreement of these structures for the five tested subjects was approximately 1 to 2 mm.
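A minimal NumPy sketch of the transformation model named above: a 2-D thin-plate-spline warp fitted to corresponding landmarks, then used to map atlas points into subject space. This is only the TPS interpolation step; the normalized-mutual-information-driven optimisation of the actual method is not reproduced, and all coordinates are toy values.

```python
import numpy as np

def tps_fit(src, dst):
    """Fit 2-D thin-plate-spline coefficients mapping src landmarks (N,2) onto dst (N,2)."""
    n = len(src)
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    # TPS kernel U(r) = r^2 log r = 0.5 * r^2 * log(r^2), with U(0) = 0.
    K = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-12)), 0.0)
    P = np.hstack([np.ones((n, 1)), src])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n], A[:n, n:], A[n:, :n] = K, P, P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)        # kernel weights + affine part

def tps_apply(coef, src, pts):
    """Warp points (M,2) with coefficients fitted against src landmarks."""
    d2 = ((pts[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    U = np.where(d2 > 0, 0.5 * d2 * np.log(np.maximum(d2, 1e-12)), 0.0)
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:-3] + P @ coef[-3:]

atlas_lm = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.], [0.5, 0.5]])
subject_lm = atlas_lm + [0.1, -0.05]           # pure translation for the demo
coef = tps_fit(atlas_lm, subject_lm)
warped = tps_apply(coef, atlas_lm, atlas_lm)   # landmarks must map exactly
mapped = tps_apply(coef, atlas_lm, np.array([[0.25, 0.75]]))  # a "structure" point
```

By construction the warp interpolates the landmarks exactly; for the translation-only demo, the affine part absorbs the whole deformation and the kernel weights vanish.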
In image-guided bone surgery, sample points collected from the surface of the bone are registered to the preoperative
CT model using well-known registration methods such as Iterative Closest Point (ICP). These techniques
are generally very sensitive to the initial alignment of the datasets. Poor initialization significantly increases
the chance of getting trapped in local minima. To reduce this risk, the registration
is typically initialized manually by placing the sample points close to the corresponding points on the CT model.
In this paper, we present an automatic initialization method that aligns the sample points collected from the
surface of the pelvis with a CT model of the pelvis. The main idea is to exploit a mean shape of the pelvis, created from a
large number of CT scans, as prior knowledge to guide the initial alignment. The mean shape is constant for
all registrations and facilitates the inclusion of application-specific information into the registration process. The
CT model is first aligned with the mean shape using the bilateral symmetry of the pelvis and the similarity of
multiple projections. The surface points collected using ultrasound are then aligned with the pelvis mean shape.
This will, in turn, lead to initial alignment of the sample points with the CT model. The experiments using a
dry pelvis and two cadavers show that the method can align the randomly dislocated datasets close enough for
successful registration. The standard ICP has been used for final registration of datasets.
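A compact sketch of standard rigid ICP, the final registration step named above: alternate nearest-neighbour matching with a closed-form SVD-based rigid update. Brute-force matching is used for clarity, and the point sets are synthetic stand-ins for the ultrasound samples and CT surface.

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~= dst_i."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, model, iters=30):
    """Iterative Closest Point: refine alignment of src onto a model point cloud."""
    cur = src.copy()
    for _ in range(iters):
        d = ((cur[:, None, :] - model[None, :, :]) ** 2).sum(-1)
        matched = model[d.argmin(1)]         # nearest model point per sample
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

rng = np.random.default_rng(0)
model = rng.uniform(0, 1, (200, 3))          # stand-in for the CT surface points
theta = 0.04                                 # small offset: "good initialization"
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
samples = model[:40] @ Rz.T + [0.02, -0.015, 0.01]
aligned = icp(samples, model)
rms = np.sqrt(((aligned - model[:40]) ** 2).sum(1).mean())
```

The example illustrates why initialization matters: with a small initial offset the nearest-neighbour matches are mostly correct and ICP converges; from a random dislocation it would not, which is exactly the gap the mean-shape initialization fills.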
In this work, we describe and evaluate a semi-automatic method for liver segmentation in CT images using a
3D interface with haptic feedback and stereo graphics. Recently, we reported our fast semi-automatic method
using fast marching segmentation. Four users performed initialization of the method for 52 datasets by manually
drawing seed-regions directly in 3D using the haptic interface. Here, we evaluate our segmentation method
by computing accuracy based on newly obtained manual delineations by two radiologists for 23 datasets. We
also show that by performing subsequent segmentation with an interactive deformable model, we can increase
segmentation accuracy. Our method shows high reproducibility compared to manual delineation. The mean
precision for the manual delineation is 89%, while it is 97% for the fast marching method. With the subsequent
deformable mesh segmentation, we obtain a mean precision of 98%. To assess accuracy, we construct a fuzzy
ground truth by averaging the manual delineations. The mean sensitivity for the fast marching segmentation is
93% and the specificity is close to 100%. When we apply deformable model segmentation, we obtain a sensitivity
increase of three percentage points while the high specificity is maintained. The mean interaction time for the
deformable model segmentation is 1.5 minutes.
We present a fully 3D liver segmentation method where high accuracy and precision is efficiently obtained
via haptic interaction in a 3D user interface. Our method makes it possible to avoid time-consuming manual
delineation, which otherwise is a common option prior to, e.g., hepatic surgery planning.
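The evaluation above averages manual delineations into a fuzzy ground truth and reports sensitivity, specificity, and precision. A minimal sketch of those overlap measures, with toy 1-D "volumes" standing in for the CT data and hypothetical delineations:

```python
import numpy as np

manual_a = np.array([0, 1, 1, 1, 1, 0, 0, 0])   # delineation, radiologist A
manual_b = np.array([0, 0, 1, 1, 1, 1, 0, 0])   # delineation, radiologist B
fuzzy_gt = (manual_a + manual_b) / 2.0          # fuzzy ground truth in [0, 1]

seg = np.array([0, 1, 1, 1, 1, 1, 0, 0])        # hypothetical segmentation

tp = np.minimum(seg, fuzzy_gt).sum()            # fuzzy true positives
fn = np.maximum(fuzzy_gt - seg, 0).sum()        # missed ground-truth mass
fp = np.maximum(seg - fuzzy_gt, 0).sum()        # segmented beyond ground truth
tn = np.minimum(1 - seg, 1 - fuzzy_gt).sum()    # agreed background

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
precision = tp / (tp + fp)
```

Voxels where the radiologists disagree contribute fractionally, so a segmentation that covers the disputed boundary band is penalised only by half a voxel per disagreement rather than a full one.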
Endoscopic needle biopsy requires off-line 3D computed-tomography (CT) chest image analysis to plan a biopsy site followed by live endoscopy to perform the biopsy. We present a method for continuous image-based endoscopic guidance that interleaves periodic normalized-mutual-information-based CT-video registration with optical-flow-based endoscopic video motion tracking. The method operates at a near real-time rate and was successfully tested on endoscopic video sequences for phantom and human lung-cancer cases. We also illustrate its use when incorporated into a complete system for image-based planning and guidance of endoscopy.
In the current clinical workflow of minimally invasive aortic procedures navigation tasks are performed under 2D
or 3D angiographic imaging. Many solutions for navigation enhancement suggest an integration of the preoperatively
acquired computed tomography angiography (CTA) in order to provide the physician with more image
information and reduce contrast injection and radiation exposure. This requires exact registration algorithms
that align the CTA volume to the intraoperative 2D or 3D images. In addition to the real-time constraint, the registration
accuracy should be independent of image dissimilarities due to the varying presence of medical instruments
and contrast agent. In this paper, we propose efficient solutions for image-based 2D-3D and 3D-3D registration
that reduce the dissimilarities by image preprocessing, e.g. implicit detection and segmentation, and adaptive
weights introduced into the registration procedure. Experiments and evaluations are conducted on real patient
Presentation of detailed anatomical structures via 3D computed tomographic (CT) volumes aids visualization and
navigation in electrophysiology (EP) procedures. Registration of the CT volume with the online fluoroscopy, however, is
a challenging task for EP applications due to the lack of discernible features in fluoroscopic images. In this paper, we
propose to use the coronary sinus (CS) catheter in bi-plane fluoroscopic images and the coronary sinus in the CT volume
as a location constraint to accomplish 2D-3D registration. Two automatic registration algorithms are proposed in this
study, and their performances are investigated on both simulated and real data. It is shown that compared to registration
using mono-plane fluoroscopy, registration using bi-plane images results in substantially higher accuracy in 3D and
enhanced robustness. In addition, compared to registering the projection of CS to the 2D CS catheter, it is more desirable
to reconstruct a 3D CS catheter from the bi-plane fluoroscopy and then perform a 3D-3D registration between the CS
and the reconstructed CS catheter. Quantitative validation based on simulation and visual inspection on real data
demonstrates the feasibility of the proposed workflow in EP procedures.
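Reconstructing a 3D catheter point from two fluoroscopic views, the option the comparison above favours, is classically done by direct linear transformation (DLT) triangulation. A sketch with synthetic 3x4 projection matrices standing in for the calibrated bi-plane geometry:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: least-squares 3-D point projecting to x1 (view 1) and x2 (view 2)."""
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                        # null vector of A in homogeneous coords
    return X[:3] / X[3]

# Two synthetic views: one looking along z, one along x (orthogonal bi-plane setup).
P1 = np.array([[1., 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])
P2 = np.array([[0., 0, 1, 0], [0, 1, 0, 0], [1, 0, 0, 4]])

X_true = np.array([0.3, -0.2, 2.0])   # a catheter point (toy coordinates)
h = np.append(X_true, 1.0)
x1 = (P1 @ h)[:2] / (P1 @ h)[2]       # its projection in view 1
x2 = (P2 @ h)[:2] / (P2 @ h)[2]       # its projection in view 2
X_rec = triangulate(P1, P2, x1, x2)
```

Applied along the CS catheter, such triangulated points yield the 3D curve that is then registered to the coronary sinus segmented from CT.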
Segmentation of the left atrium is vital for pre-operative assessment of its anatomy in radio-frequency
catheter ablation (RFCA) surgery. RFCA is commonly used for treating atrial fibrillation. In this paper we
present a semi-automatic approach for segmenting the left atrium and the pulmonary veins from MR
angiography (MRA) data sets. We also present an automatic approach for further subdividing the
segmented atrium into the atrium body and the pulmonary veins. The segmentation algorithm is based on
the notion that in MRA the atrium becomes connected to surrounding structures via partial-volume-affected
voxels and narrow vessels; the atrium can therefore be separated if these regions are characterized and identified. The
blood pool, obtained by subtracting the pre- and post-contrast scans, is first segmented using a region-growing
approach. The segmented blood pool is then subdivided into disjoint subdivisions based on its
Euclidean distance transform. These subdivisions are then merged automatically starting from a seed point
and stopping at points where the atrium leaks into a neighbouring structure. The resulting merged
subdivisions produce the segmented atrium. Measuring the size of the pulmonary vein ostium is vital for
selecting the optimal Lasso catheter diameter. We present a second technique for automatically identifying
the atrium body from segmented left atrium images. The separating surface between the atrium body and
the pulmonary veins gives the ostia locations and can play an important role in measuring their diameters.
The technique relies on evolving interfaces modelled using level sets. Results are presented on 20
patient MRA datasets.
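The first step above, segmenting the blood pool from the pre/post-contrast subtraction image by region growing, can be sketched as follows. 2-D toy arrays replace the MRA volumes, the seed and threshold are arbitrary, and the distance-transform subdivision and merging steps are omitted.

```python
import numpy as np
from collections import deque

post = np.array([[10, 80, 85, 10],
                 [10, 90, 88, 10],
                 [10, 10, 82, 10],
                 [10, 10, 10, 10]], float)    # post-contrast scan (toy)
pre = np.full_like(post, 10.0)                # pre-contrast scan (toy)
blood = post - pre                            # contrast-enhanced blood signal

def region_grow(img, seed, thresh):
    """Collect connected pixels whose value exceeds thresh, starting at seed."""
    mask = np.zeros(img.shape, bool)
    q = deque([seed])
    while q:
        y, x = q.popleft()
        if mask[y, x] or img[y, x] <= thresh:
            continue
        mask[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                q.append((ny, nx))
    return mask

pool = region_grow(blood, seed=(0, 1), thresh=50.0)
```

The subtraction suppresses static tissue, so a single intensity threshold on the difference image suffices to grow the connected blood pool from one seed.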
Author(s): Yulin Song; Boris Mueller, M.D.; Maria F. Chan; Sang E. Sim, M.D.; Borys Mychalczak, M.D.; Xiaolei Huang
Prostate cancer is the most common tumor site treated with intensity modulated radiation therapy
(IMRT). However, due to patient and organ motion, treatment-induced physiological changes, and
varying daily filling of the bladder and rectum, the position of the prostate relative to the fixed
pelvic bone can change significantly. Without a reliable guidance technique, this could result in
underdosing the target and overdosing the critical organs. Therefore, image-guided localization of
the prostate must be performed prior to each treatment, which has led to the development of a new
radiation treatment modality: image-guided radiation therapy (IGRT). One form of IGRT implants
three gold seed markers into the prostate gland to serve as a fixed reference system. Daily
patient setup verification is then performed using gold-seed-marker-based image registration
rather than the commonly used bony-landmark-based approach. In this paper, we present an
efficient and automated method for registering digitally reconstructed radiographs (DRRs) and kV x-ray
images of the prostate with high accuracy. Our hybrid technique relies on both the internal fiducial
markers (i.e., the gold seed markers) implanted in the prostate and a robust, salient-region-based
2D image registration method. The registration procedure consists of several novel steps.
Validation experiments were performed to register DRR
and kV X-ray images in anterior-posterior (AP) or lateral views and the results were reviewed by
experienced radiation oncology physicists.
In the past few years, fiber clustering algorithms have been shown to be a very powerful tool for grouping white matter
connections tracked in DTI images into anatomically meaningful bundles. They improve visualization and
perception, and could enable robust quantification and comparison between individuals. However, most existing
techniques either perform only a coarse approximation of the fibers, due to the high complexity of the underlying
clustering problem, or do not allow for efficient clustering in real time. In this paper, we introduce new algorithms
and data structures which overcome both problems. The fibers are represented very precisely and efficiently
by parameterized polynomials defining the x-, y-, and z-component individually. A two-step clustering method
determines possible clusters having a Gaussian-distributed structure within one component and, afterwards,
verifies their existence by principal component analysis (PCA) with respect to the other two components. As
the PCA has to be performed only n times for a constant number of points, the clustering can be done in linear
time O(n), where n denotes the number of fibers. This drastically improves on existing techniques, which have
quadratic running time, and it allows for efficient whole-brain fiber clustering. Furthermore, our new
algorithms can easily be used for detecting corresponding clusters in different brains without time-consuming
registration methods. We show a high reliability, robustness and efficiency of our new algorithms based on several
artificial and real fiber sets that include different elements of fiber architecture such as fiber kissing, crossing and
nested fiber bundles.
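The representation above, one polynomial per coordinate component per fiber, can be illustrated with a much-simplified grouping step: fit the x-, y-, and z-components with polynomials and group fibers by proximity of their coefficient vectors. The fiber geometry, noise level, and distance threshold are all synthetic choices, and the full Gaussian/PCA two-step test is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)                      # arc-length parameter samples

def make_fiber(offset):
    """A noisy curved fibre; x, y, z sampled along t, shifted by offset."""
    return np.stack([t + offset[0],
                     np.sin(np.pi * t) + offset[1],
                     0.2 * t ** 2 + offset[2]]) + rng.normal(0, 0.002, (3, 50))

bundle_a = [make_fiber(np.zeros(3)) for _ in range(10)]
bundle_b = [make_fiber(np.array([0, 2.0, 0])) for _ in range(10)]
fibers = bundle_a + bundle_b

def features(fiber, deg=3):
    """Concatenate the polynomial coefficients of the x-, y-, z-components."""
    return np.concatenate([np.polyfit(t, comp, deg) for comp in fiber])

F = np.array([features(f) for f in fibers])
# Group by coefficient-space distance to the first fiber (threshold is a demo value).
labels = (np.linalg.norm(F - F[0], axis=1) > 1.0).astype(int)
```

The key property is that each fiber collapses to a short, fixed-length coefficient vector, which is what makes constant-cost per-fiber tests, and hence overall linear time, possible.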
Advanced signal processing, such as multi-resolution decomposition, and three-dimensional processing and data
sets are gradually becoming an integral part of medical imaging. With the growing number of signal dimensions,
the bandwidth requirements increase exponentially. Because memory bandwidth is a scarce resource,
this paper focuses on bandwidth optimization at the processor-chip level within multiprocessor systems. We
introduce a practical model, including formulas for the computing, memory, and cache read/write procedures, to
optimize the mapping of data into the memory and cache for different configurations. A substantial performance
improvement is realized by a new memory-communication model that incorporates the data dependencies of the
image-processing functions. More specifically, bandwidth is minimized by implementing
two measures: (1) breaking down the algorithm such that the processing attains a locality that fits
the cache size of the processor, and (2) addressing and organizing the data
prior to processing in such a way that memory traffic is minimized. For the experiments, we have concentrated
particularly on image enhancement and noise reduction built around image pyramids for 3D x-ray data sets.
First experimental results show a bandwidth reduction on the order of 80% and a throughput increase of 60%
compared to straightforward implementations.
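Measure (1), restructuring the traversal so each block's working set fits in cache, is illustrated below with a tile-wise 3x3 box filter. This is a NumPy stand-in for the idea, not the paper's multiprocessor implementation; the filter, tile size, and image are arbitrary, and the tiled result is verified against an untiled reference.

```python
import numpy as np

def box3(block):
    """3x3 box filter on a block that already carries a 1-pixel halo."""
    out = np.zeros((block.shape[0] - 2, block.shape[1] - 2))
    for dy in range(3):
        for dx in range(3):
            out += block[dy:dy + out.shape[0], dx:dx + out.shape[1]]
    return out / 9.0

def filter_tiled(img, tile=64):
    """Process the image tile by tile so each tile (+halo) stays cache-sized."""
    pad = np.pad(img, 1, mode='edge')
    out = np.empty_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            ys = slice(y, min(y + tile, img.shape[0]))
            xs = slice(x, min(x + tile, img.shape[1]))
            halo = pad[ys.start:ys.stop + 2, xs.start:xs.stop + 2]
            out[ys, xs] = box3(halo)          # tile result, identical to untiled
    return out

img = np.random.default_rng(2).random((200, 300))
tiled = filter_tiled(img)
whole = box3(np.pad(img, 1, mode='edge'))     # untiled reference
```

The output is bit-identical to the untiled filter; only the traversal order, and therefore the working-set size per inner loop, changes, which is the essence of the locality measure.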
Dual-Energy CT makes it possible to separate the contributions of different x-ray attenuation processes or materials in the
CT image. To this end, standard Dual-Energy tissue classification techniques perform a so-called material analysis or decomposition. The resulting material maps can then be used to perform explicit segmentation of anatomical structures, such as osseous tissue in the case of bone removal. As a drawback, the tissue classes included in the scan must be known beforehand in order to choose the appropriate material analysis algorithms. We propose direct volume
rendering with bidimensional transfer functions as a tool for interactive and intuitive exploration of Dual-Energy scans.
Adequate visualization of the Dual-Energy histogram provides the basis for easily identifying different tissue classes. Transfer functions are interactively adjusted over the Dual-Energy histogram, where the x- and y-axes correspond to the 80 kV and 140 kV intensities, respectively. A GPU implementation allows precise fine-tuning of transfer functions with real-time feedback in the resulting visualization. Additionally, per-fragment filtering and post-interpolative Dual-Energy tissue classification are provided. Moreover, interactive histogram exploration makes it possible to create adequate Dual-Energy visualizations without pre-processing or previous knowledge about the existing tissue classes.
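The domain of the bidimensional transfer functions described above is the joint histogram of the two energy volumes. A small sketch of building that histogram and applying a 2-D opacity table per voxel; the random data and the rectangular "tissue-class" region are purely illustrative stand-ins for a real Dual-Energy scan pair.

```python
import numpy as np

rng = np.random.default_rng(3)
low_kv = rng.normal(100, 20, (32, 32, 32))                # 80 kV volume (toy)
high_kv = 0.8 * low_kv + rng.normal(0, 5, low_kv.shape)   # correlated 140 kV

# Joint (bidimensional) histogram: x-axis 80 kV, y-axis 140 kV intensities.
hist, xedges, yedges = np.histogram2d(low_kv.ravel(), high_kv.ravel(), bins=64)

# A 2-D transfer-function table; here a single scalar opacity per histogram bin.
opacity_tf = np.zeros((64, 64))
opacity_tf[30:40, 20:35] = 1.0            # hypothetical tissue-class region

# Classify each voxel by looking up its (80 kV, 140 kV) bin in the table.
ix = np.clip(np.digitize(low_kv, xedges) - 1, 0, 63)
iy = np.clip(np.digitize(high_kv, yedges) - 1, 0, 63)
opacity = opacity_tf[ix, iy]              # per-voxel opacity from the 2-D TF
```

Materials that are indistinguishable along either single energy axis can still separate into distinct blobs in this 2-D histogram, which is why painting transfer-function regions over it suffices to isolate tissue classes interactively.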
The two major volume visualization methods used in biomedical applications are Maximum Intensity Projection (MIP)
and Volume Rendering (VR), both of which involve the process of creating sets of 2D projections from 3D images.
We have developed a new method for very fast, high-quality volume visualization of 3D biomedical images, based on
the fact that the inverse of this process (transforming 2D projections into a 3D image) is essentially equivalent to
tomographic image reconstruction.
This new method uses the 2D projections acquired by the scanner, thereby obviating the need for the two
computationally expensive steps currently required in the complete process of biomedical visualization, namely (i)
reconstructing the 3D image from the 2D projection data, and (ii) computing the set of 2D projections from the
reconstructed 3D image.
As well as improving computation speed, this method also improves visualization quality, and in
the case of x-ray CT we can exploit this quality improvement to reduce radiation dosage.
In this paper, we demonstrate the benefits of developing biomedical visualization techniques by directly processing the
sensor data acquired by body scanners, rather than by processing the image data reconstructed from the sensor data. We
show results of using this approach for volume visualization for tomographic modalities such as x-ray CT, as well as
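For reference, the conventional step (ii) that the method above avoids, computing a 2D projection from a reconstructed 3D image, is trivial to state for MIP: take the maximum along each ray. A sketch with parallel rays along one axis and a synthetic volume:

```python
import numpy as np

# Maximum Intensity Projection of a 3-D volume along one axis: each output
# pixel is the brightest voxel on its (parallel) ray. The volume is synthetic.
vol = np.zeros((16, 16, 16))
vol[8, 4, 4] = 100.0                     # a single bright voxel ("vessel")
vol[2:14, 10, 10] = 50.0                 # a dimmer rod along the first axis

mip = vol.max(axis=0)                    # project along axis 0: one 2-D image
```

Each such projection touches every voxel, which is what makes computing many view directions from a large reconstructed volume expensive compared with reusing the scanner's native projections.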
At our institution, we are using dual-energy digital radiography (DEDR) as a cost-effective screening tool for the
detection of cardiac calcification. We are evaluating DEDR using CT as the gold standard. We are developing image
projection methods for the generation of digitally reconstructed radiography (DRR) from CT image volumes.
Traditional visualization methods include maximum intensity projection (MIP) and average-based projection (AVG), both of which
have difficulty showing cardiac calcification. Furthermore, MIP can overestimate a calcified lesion because it displays the
maximum intensity along the projection rays regardless of tissue type. In AVG projections, the calcified tissue
usually overlaps with bone, lung, and mediastinum. In order to improve the visualization of calcification on DRR
images, we developed a Gaussian-weighted projection method for this particular application. We assume that the CT
intensity values of calcified tissues have a Gaussian distribution. We then use multiple Gaussian functions to fit the
intensity histogram. Based on the mean and standard deviation parameters, we incorporate a Gaussian weighted function
into the perspective projection and display the calcification exclusively. Our digital and physical phantom studies show
that the new projection method can display tissues selectively. In addition, clinical images show that the Gaussian-weighted
projection method visualizes cardiac calcification better than either the AVG or the MIP method and can be used
to evaluate DEDR as a screening tool for the detection of coronary artery disease.
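The core of the method above, weighting each voxel by a Gaussian centred on the fitted intensity distribution of calcified tissue before summing along the ray, can be sketched as follows. The mean/sigma, the phantom volume, and the parallel-ray geometry are hypothetical; the paper uses fitted histogram parameters and perspective projection.

```python
import numpy as np

mu_calc, sigma_calc = 400.0, 50.0        # assumed Gaussian fit for calcification

vol = np.full((8, 8, 8), 40.0)           # soft-tissue background (HU-like, toy)
vol[3:5, 3:5, 3:5] = 400.0               # calcified lesion
vol[:, 6, 6] = 1000.0                    # bone-like column

# Gaussian weight per voxel: near 1 for calcification, near 0 elsewhere.
w = np.exp(-0.5 * ((vol - mu_calc) / sigma_calc) ** 2)
weighted_proj = (w * vol).sum(axis=0)    # parallel rays along the first axis
plain_avg = vol.mean(axis=0)             # AVG projection for comparison
```

In the toy volume, the AVG projection is dominated by the bone column, whereas the Gaussian-weighted projection suppresses both bone and soft tissue and shows essentially only the calcified lesion.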
We describe an interactive multimodality display environment, which combines anatomic CT, MRI, functional MRI images and photographs taken during surgical procedures, to provide comprehensive localization information regarding epilepsy seizure foci and the context of their surroundings. Our environment incorporates several unique features, including GPU-accelerated volume rendering and image fusion, versatile GPU-based clipping of volumetric images, and the ability to enhance the information delivered to the surgeon by fusing a direct (photographic) view of the surgical field with the volumetric image. We employ direct volume rendering for the fusion of multiple volumes using GPU-accelerated ray-casting. In addition, to expose the internal structures during volume fusion, we have developed user interaction tools that enable the surgeon to explore the fused volume using clipping-cube and cutaway clipping schemes. The fusion of intraoperative images onto the image volume allows enhanced visualization of the surgical procedure sites within the surgical planning environment. These techniques have been implemented as Visualization Toolkit (VTK) classes using the OpenGL fragment shading program and Python modules, and have been successfully implemented within our surgical planning environment "EpilepsyViewer". The results and performance of our GPU-based approach are compared with similar techniques in VTK, demonstrating that the use of the GPU can greatly accelerate visualization and enable increased flexibility of the system in the operating room. The result of photographic overlay shows good correspondence between the intraoperative photograph images and the preoperative image model. This environment can also be extended for use in other neurosurgical planning tasks.
Our group has been developing medical image software systems since the early 1980s. Our latest system, CAVASS, is
freely available, open source, integrated with popular toolkits, and runs on Windows, Unix, Linux, and Mac OS. The
architecture of CAVASS incorporates parallel processing by exploiting inexpensive networks of workstations.
CAVASS is directed at the visualization, processing, and analysis of nD medical imagery, so support for large medical
imagery data and the efficient implementation of algorithms is given paramount importance. We describe the
architecture of CAVASS, the parallelization strategy, and present the results of comparing the implementation of
CAVASS algorithms with similar algorithms in ITK and VTK for a host of operations.
Dynamic volume rendering of the beating heart is an important element in cardiac disease diagnosis and therapy
planning, providing the clinician with insight into the internal cardiac structure and functional behavior. Most
clinical applications tend to focus upon a particular set of organ structures, and in the case of cardiac imaging,
it would be helpful to embed anatomical features into the dynamic volume that are of particular importance to
an intervention. A uniform transfer function (TF), such as is generally employed in volume rendering, cannot
effectively isolate such structures because of the lack of spatial information and the small intensity differences
between adjacent tissues. Explicit segmentation is a powerful way to approach this problem, which usually
yields a single binary mask volume (MV), where a unit value in a voxel within the MV acts as a tag label
representing the anatomical structure of interest (ASOI). These labels are used to determine the TF employed
to adjust the ASOI display. Traditional approaches for rendering such segmented volumetric datasets usually
deliver unsatisfactory results, such as noninteractive rendering speed, low image quality, intermixing artifacts
along the rendered subvolume boundaries, and speckle noise. In this paper, we introduce a new "color coding"
approach, based on the graphics processing unit (GPU) accelerated raycasting algorithm and a pre-integrated
voxel classification method, to address this problem. The mask tag labels derived from segmentation are first
smoothed with a Gaussian filter, and multiple TFs are designed for each of the MVs and the source cardiac
volume respectively, mapping the voxel's intensity to color and opacity at each sampling point along the casting
ray. The resultant values are composited together using a boundary color adjustment technique, which acts as
"coding" the segmented anatomical structure information into the rendered source volume of the beating heart.
Our algorithm produces high image quality in real-time without introducing intermixing artifacts in the rendered
4-dimensional (4D) cardiac volumes.
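The boundary treatment described above, smoothing the mask labels with a Gaussian and letting the softened label steer between per-structure and source-volume transfer functions, can be shown in one dimension. The 1-D label profile, the 3-tap kernel, and the two colours are illustrative stand-ins for the GPU ray-casting pipeline.

```python
import numpy as np

mask = np.array([0., 0, 0, 1, 1, 1, 1, 0, 0, 0])   # binary tag labels (ASOI)
kernel = np.array([0.25, 0.5, 0.25])               # small Gaussian-like kernel
soft = np.convolve(mask, kernel, mode='same')      # smoothed labels in [0, 1]

struct_rgb = np.array([1.0, 0.2, 0.2])             # TF colour for the ASOI
source_rgb = np.array([0.7, 0.7, 0.7])             # TF colour of source volume

# Per-sample blend: the smoothed label interpolates between the two transfer
# functions, avoiding a hard staircase transition at the subvolume boundary.
colors = soft[:, None] * struct_rgb + (1 - soft)[:, None] * source_rgb
```

With the raw binary mask the colour would switch abruptly at the boundary sample, producing the intermixing artifacts mentioned above; the smoothed label yields a short, graded transition instead.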
In this paper, we propose a new visualization method for head MRA data that helps the user easily determine the
positioning of MPR and/or MIP images based on the blood vessel network structure (the anatomic location of the
blood vessels). The method has the following features: (a) the blood vessel (cerebral artery) network structure
in the 3D head MRA data is portrayed as a 3D line structure; (b) the MPR or MIP images are combined with the blood vessel
network structure and displayed in a 3D visualization space; (c) the positioning of the MPR or MIP image is decided based on the
anatomic location of the blood vessels; (d) the image processing and drawing operate in real time without a special
hardware accelerator. We believe that our method is well suited to positioning MPR or MIP images relative
to the blood vessel network structure, and that the user can obtain the 3D information
(position, angle, direction) of both these images and the blood vessel network structure.
The presence of a liver disease such as cirrhosis can be determined by examining the proliferation of
collagen fiber from a tissue slide stained with a special stain such as Masson's trichrome (MT) stain.
Collagen fiber and smooth muscle, which stain identically in an H&E-stained slide, are stained
blue and pink, respectively, in an MT-stained slide. In this paper we show that with multispectral imaging
the difference between collagen fiber and smooth muscle can be visualized even from an H&E stained
image. In the method M KL bases are derived using the spectral data of those H&E stained tissue
components which can be easily differentiated from each other, i.e. nucleus, cytoplasm, red blood cells,
etc. and based on the spectral residual error of fiber weighting factors are determined to enhance spectral
features at certain wavelengths. Results of our experiment demonstrate the capability of multispectral
imaging and its advantage compared to the conventional RGB imaging systems to delineate tissue
structures with subtle colorimetric difference.
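The residual-error idea above, spectra of the easily separated classes define KL (PCA) bases, and a spectrum poorly explained by those bases signals a different component such as fiber, can be sketched with synthetic spectra. The Gaussian spectral shapes, noise level, and M = 2 are all assumptions for the demo.

```python
import numpy as np

rng = np.random.default_rng(4)
wl = np.linspace(400, 700, 61)                     # wavelengths in nm

# Training spectra for two "known" classes (stand-ins for nuclei, cytoplasm).
nucleus = np.exp(-((wl - 550) / 60) ** 2)
cytoplasm = np.exp(-((wl - 480) / 80) ** 2)
train = np.stack([nucleus + rng.normal(0, 0.01, 61) for _ in range(20)] +
                 [cytoplasm + rng.normal(0, 0.01, 61) for _ in range(20)])

mean = train.mean(0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:2]                                     # M = 2 KL (PCA) bases

def residual(spectrum):
    """Norm of the part of the spectrum the KL bases cannot represent."""
    c = basis @ (spectrum - mean)
    return np.linalg.norm((spectrum - mean) - basis.T @ c)

fiber = np.exp(-((wl - 630) / 40) ** 2)            # spectrally distinct class
r_known = residual(nucleus)                        # small: spanned by the bases
r_fiber = residual(fiber)                          # large: flags fibre spectra
```

A known-class spectrum projects almost entirely into the KL subspace, while the fiber spectrum leaves a large residual; thresholding this residual (or using it to weight selected wavelengths) is what enhances the fiber against the rest of the tissue.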