ITK and ANALYZE: a synergistic integration
Author(s):
Kurt E. Augustine;
David R. Holmes III;
Richard A. Robb
The Insight Toolkit (ITK) is a C++ open-source software toolkit developed under sponsorship of the National Library of
Medicine. It provides advanced algorithms for performing image registration and segmentation, but does not provide
support for visualization and analysis, nor does it offer any graphical user interface (GUI). The purpose of this
integration project is to make ITK readily accessible to end-users with little or no programming skills, and provide
interactive processing, visualization and measurement capabilities. This is achieved through the integration of ITK with
ANALYZE, a multi-dimensional image visualization/analysis application installed in over 300 institutions around the
world, with a user-base in excess of 4000. This integration is carried out at both the software foundation and GUI levels.
The foundation technology upon which ANALYZE is built is a comprehensive C-function library called AVW. A new
set of AVW-ITK functions has been developed and integrated into the AVW library, and four new ITK modules have
been added to the ANALYZE interface. Since ITK is a software developer’s toolkit, the only way to access its intrinsic
power is to write programs that incorporate it. Integrating ITK with ANALYZE opens the ITK algorithms to end-users
who otherwise might never be able to take advantage of the toolkit’s advanced functionality. In addition, this integration
provides end-to-end interactive problem-solving capabilities which give all users, including programmers, an integrated
system to readily display and quantitatively evaluate the results of the segmentation and registration routines in ITK,
regardless of the type or format of the input images, which are comprehensively supported in ANALYZE.
The medical imaging interaction toolkit (MITK): a toolkit facilitating the creation of interactive software by extending VTK and ITK
Author(s):
Ivo Wolf;
Marcus Vetter;
Ingmar Wegner;
Marco Nolden;
Thomas Bottger;
Mark Hastenteufel;
Max Schobinger;
Tobias Kunert;
Hans-Peter Meinzer
The aim of the Medical Imaging Interaction Toolkit (MITK) is to facilitate the creation of clinically usable
image-based software. Clinically usable software for image-guided procedures and image analysis requires a high
degree of interaction to verify and, if necessary, correct results from (semi-)automatic algorithms. MITK is
a class library based on and extending the Insight Toolkit (ITK) and the Visualization Toolkit (VTK). ITK
provides leading-edge registration and segmentation algorithms and forms the algorithmic basis. VTK has
powerful visualization capabilities, but only low-level support for interaction (like picking methods, rotation,
movement and scaling of objects). MITK adds support for high-level interactions with data, for example, the
interactive construction and modification of data objects. This includes concepts for interactions with multiple
states as well as undo-capabilities. Furthermore, VTK is designed to create one kind of view on the data
(either one 2D visualization or a 3D visualization). MITK facilitates the realization of multiple, different views
on the same data (like multiple, multiplanar reconstructions and a 3D rendering). Hierarchically structured
combinations of any number and type of data objects (image, surface, vessels, etc.) are possible. MITK can
handle 3D+t data, which are required for several important medical applications, whereas VTK alone supports
only 2D and 3D data. The benefit of MITK is that it adds to ITK and VTK those features that are
required for convenient, interactive, and thus clinically usable image-based software, and that are
outside the scope of both. MITK will be made open-source (http://www.mitk.org).
The 4D Cluster Visualization project
Author(s):
Michael J. Redmond;
Ethan K. Brodsky;
Yu-Hen Hu;
Tom M. Grist;
Michael J. Schulte;
Walter F. Block
It is becoming increasingly common to image time-resolved flow patterns through the vascular system in all three
spatial dimensions using non-invasive methods. The capability to generate four-dimensional (4D) (x, y, z and time)
vascular flow data is growing in several modalities. Vastly undersampled Isotropic PRojection (VIPR) is one such
method using high-resolution, fast Magnetic Resonance Imaging (MRI) of the vasculature during intravenous
contrast injection. VIPR currently produces 4D data sets of twenty to forty frames of 256³ voxels each, and stronger
magnets will allow higher resolution time series that generate gigabytes of data. Real-time visualization and analysis of
4D data can quickly overwhelm the memory and processing capabilities of desktop workstations. 4D Cluster
Visualization (4DCV) offers a straightforward, scalable approach to interactively display and manipulate 4D,
reconstructed, VIPR data sets. 4DCV exploits the inherently parallel nature of 4D frame data to interactively manipulate
and render individual 3D data frames simultaneously across all nodes of a visualization cluster. An interactive
animation is produced in real-time by reading back the 2D rendered results to a central animation console where the
image sequence is assembled into a continuous stream for display. Basic 4DCV can be extended to allow rendering of
multiple frames on one node, compression of image streams for serving remote clinical workstations, and local archival
storage of 3D data frames at the cluster nodes for quick retrieval of medical exams. 4D Cluster Visualization concepts
can also be extended to distributed and Grid implementations.
The design and implementation of a C++ toolkit for integrated medical image processing and analyzing
Author(s):
Mingchang Zhao;
Jie Tian;
Xun Zhu;
Jian Xue;
Zhanglin Cheng;
Hua Zhao
With the success of VTK and ITK, toolkit development has attracted growing attention in the medical imaging
community. This paper introduces MITK, an integrated medical image processing and analyzing toolkit. Its main
purpose is to provide a consistent framework to combine the functions of medical image segmentation, registration and
visualization. The design goals, overall framework and implementation of some key technologies are described in
detail, and some application examples are given to demonstrate the abilities of MITK. We hope that MITK will
become another available choice for the medical imaging community.
A medical imaging and visualization toolkit in Java
Author(s):
Su Huang;
Rafail Baimouratov;
Pengdong Xiao;
Anand Ananthasubramaniam;
Wieslaw L. Nowinski
Medical imaging research and clinical applications usually require the combination and integration of different
technologies, from image processing to realistic visualization to user-friendly interaction. Researchers with
different backgrounds and from various research areas have been using numerous types of hardware,
software and environments to produce their research results. All too often, students must build their
working and testing tools from scratch again and again. A generic and flexible medical imaging and
visualization toolkit would be helpful in medical research and educational institutes to reduce redundant
development work and hence improve research efficiency. In our lab, we have developed a Medical
Imaging and Visualization Toolkit (BIL-kit), which is a set of comprehensive libraries as well as a number
of interactive tools. It covers a wide range of fundamental functions from image conversion and
transformation, image segmentation and analysis, to geometric model generation and manipulation, all the
way up to 3D visualization and interactive simulation. The toolkit design and implementation emphasize
reusability and flexibility. BIL-kit is implemented in the Java language because of its platform
independence, so that the toolkit will work in hybrid and dynamic research and educational
environments. This also allows the toolkit to extend its usage to web-based application development. BIL-kit
is a suitable platform for researchers and students to develop visualization and simulation prototypes, and
it can also be used for the development of clinical applications.
Automatic calibration of an optical see-through head-mounted display for augmented reality applications in computer assisted interventions
Author(s):
Michael Figl;
Christopher Ede;
Wolfgang Birkfellner;
Johann Hummel;
Rudolf Seemann;
Helmar Bergmann
We are developing an optical see-through head-mounted display in which preoperative planning data provided
by a computer-aided surgery system is overlaid on the optical image of the patient.
In order to cope with head movements of the surgeon, the device has to be calibrated for a wide zoom and
focus range. For such a calibration, accurate and robust localization of a large number of calibration points is
of utmost importance. Because of the negligible radial distortion of the optics in our device, we were able to
use projective invariants for stable detection of the calibration fiducials on a planar grid. The pattern on the
planar grid was designed using a different cross ratio for four consecutive points in the x and y directions, respectively.
For automated image processing we put a CCD camera behind the eyepiece of the device. The resulting image
was thresholded and segmented; after deleting the artefacts, a Sobel edge detector was applied and the image
was Hough transformed to detect the x and y axes. The world coordinates of fiducial points on the grid
could then be determined.
A series of six camera calibrations with two zoom settings was done. The mean errors for the
two calibrations were 0.08 mm and 0.3 mm, respectively.
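The cross-ratio construction lends itself to a compact illustration. The sketch below (Python; spacings, tolerance and function names are illustrative, not values from the paper) shows how the cross ratio of four consecutive collinear grid points survives a projective camera mapping and can therefore identify which designed row or column a detected quadruple belongs to:

```python
# Minimal sketch: identifying calibration fiducials by their cross ratio.
# The cross ratio of four collinear points is invariant under projective
# transformations, so (with negligible lens distortion) it can be computed
# from detected image points and matched against the known grid design.

def cross_ratio(p0, p1, p2, p3):
    """Cross ratio of four collinear points given as scalars along the line."""
    return ((p2 - p0) * (p3 - p1)) / ((p2 - p1) * (p3 - p0))

def match_quadruple(detected, designed_ratios, tol=0.02):
    """Assign four consecutive detected points to a grid row by cross ratio.

    detected        -- sorted 1D positions of four points along a grid axis
    designed_ratios -- {row_index: cross ratio} chosen when designing the grid
    """
    cr = cross_ratio(*detected)
    for row, target in designed_ratios.items():
        if abs(cr - target) < tol:
            return row
    return None

# Example: grid rows designed with distinct cross ratios.
designed = {0: cross_ratio(0.0, 1.0, 2.0, 4.0),   # spacing pattern 1-1-2
            1: cross_ratio(0.0, 1.0, 3.0, 4.0)}   # spacing pattern 1-2-1
print(match_quadruple([10.0, 12.0, 16.0, 18.0], designed))  # -> 1 (scaled 1-2-1)
```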
VR-based interactive CFD data comparison of flow fields in a human nasal cavity
Author(s):
Andreas Gerndt;
Torsten Kuhlen;
Thomas van Reimersdahl;
Matthias Haack;
Christian Bischof
The Virtual Reality Center Aachen is developing a Virtual Reality based operation planning system in cooperation with
aerodynamics scientists and physicians of several clinical centers. This system is meant to help the preparation of nose
surgeries aimed at the elimination of respiratory diseases. A core part is the interactive comparison of experimental data
and simulation data in the area of fluid dynamics. In a first step, data comparison is used to depict the differences between
healthy noses and diseased noses. Later on, data comparison should supply evidence for successful virtual surgeries,
which finally results in guidance for the real operation.
During virtual surgery sessions, scientists can interactively explore, analyze, annotate, and compare various medical and
aerodynamics data sets. Image-based methods are used to extract several features in one image and between compared
data sets. The determination of linked features between different data sets is a particular challenge because of their
different time frames, scales, and distortions. An optimized human computer interface enables the user to interact
intuitively within a virtual environment in order to select and deal with these data sets. In addition to this interactive
exploration, the system also allows automatic searches for cut plane and key frame candidates corresponding to given
feature patterns.
The comparison system makes use of an already implemented parallelized Computational Fluid Dynamics (CFD) post-processing,
which also extracts enhanced flow features that allow automatic detection of relevant flow regions. Besides
vortex detection, the computation of critical points including flow field segmentation is a current research activity.
These flow features are favored characteristics for the comparison and help considerably to classify different nose
geometries and operation recommendations.
Augmented-reality-based segmentation refinement
Author(s):
Alexander Bornik;
Bernhard Reitinger;
Reinhard Beichel;
Erich Sorantin;
Georg Werkgartner
Planning of surgical liver tumor resections based on image data from X-ray computed tomography requires
correct segmentation of the liver, liver vasculature and pathological structures. Automatic liver segmentation
methods frequently fail in cases where the anatomy is altered by lesions or other liver diseases. On
the other hand, performing a manual segmentation is a tedious and time-consuming task. Therefore, Augmented
Reality based segmentation refinement tools are reported that aid radiologists in efficiently correcting erroneous
segmentations in true 3D using head-mounted displays and tracked input devices. The developed methods
facilitate segmentation refinement by interactively deforming a mesh data structure reconstructed from an initial
segmentation. All of the refinement methods are accessible through the intuitive, direct 3D user interface
of an Augmented Reality system.
Tools for augmented-reality-based liver resection planning
Author(s):
Bernhard Reitinger;
Alexander Bornik;
Reinhard Beichel;
Georg Werkgartner;
Erich Sorantin
Surgical resection has evolved into an accepted and widely used method for the treatment of liver tumors. In order
to elaborate an optimal resection strategy, computer-aided planning tools are required. However, measurements
based on 2D cross-sectional images are difficult to perform. Moreover, resection planning with current desktop-based
systems using 3D visualization is also a tedious task because of limited 3D interaction. For facilitating the
planning process, different tools are presented allowing easy user interaction in an Augmented Reality environment.
Methods for quantitative analysis like volume calculation and distance measurements are discussed with
focus on the user interaction aspect. In addition, a tool for automatically generating anatomical resection proposals
based on knowledge about tumor locations and the portal vein tree is described. The presented methods
are part of an evolving liver surgery planning system which is currently being evaluated by physicians.
Augmented reality system for MR-guided interventions: phantom studies and first animal test
Author(s):
Sebastian Vogt;
Frank Wacker;
Ali Khamene;
Daniel R. Elgort;
Tobias Sielhorst;
Heinrich Niemann;
Jeff Duerk;
Jonathan S. Lewin;
Frank Sauer
We developed an augmented reality navigation system for MR-guided interventions. A head-mounted display provides in real-time a stereoscopic video view of the patient, which is augmented with three-dimensional medical information to perform MR-guided needle placement procedures. Besides the MR image information, we augment the scene with 3D graphics representing a forward extension of the needle and the needle itself. During insertion, the needle can be observed virtually at its actual location in real-time, supporting the interventional procedure in an efficient and intuitive way. In this paper we report on quantitative results of AR-guided needle placement procedures on gel phantoms with embedded targets of 12 mm and 6 mm diameter; we furthermore evaluate our first animal experiment involving needle insertion into deep-lying anatomical structures of a pig.
User performance analysis of different image-based navigation systems for needle placement procedures
Author(s):
Fred S. Azar;
Nathalie Perrin;
Ali Khamene;
Sebastian Vogt;
Frank Sauer
We present a user performance analysis of four navigation systems based on different visualization schemes (2D, 3D,
stereoscopy on a monitor, and a stereo head mounted display (HMD)). We developed a well-defined user workflow,
which starts with the selection of a safe and efficient needle path, followed by the placement, insertion and removal of
the needle. We performed the needle procedure on a foam-based phantom, targeting a virtual lesion while avoiding
virtual critical structures. The phantom and needle’s position and orientation were optically tracked in real-time. Each
of the 28 users performed a total of 20 needle placements on five phantom configurations using the four visualization
schemes. Based on digital measurements and on qualitative user surveys, we computed the following parameters:
accuracy and duration of the procedure, user progress, efficiency, confidence, and judgment. The results show that all
systems are about equivalent when it comes to reaching the center of the target. However, the HMD- and 2D-based
systems performed better in avoiding the surrounding structures. The needle procedures were performed in a shorter
amount of time using the HMD- and 3D-based systems. With appropriate user training, procedure time for the 2D-based
system decreased significantly.
Bone morphing with statistical shape models for enhanced visualization
Author(s):
Kumar T. Rajamani;
Johannes Hug;
Lutz Peter Nolte;
Martin Styner
This paper addresses the problem of extrapolating an extremely sparse three-dimensional set of digitized landmarks
and bone surface points to obtain a complete surface representation. The extrapolation is done using a statistical
principal component analysis (PCA) shape model similar to earlier approaches by Fleute et al. This extrapolation
procedure, called Bone-Morphing, is highly useful for intra-operative visualization of bone structures in image-free
surgeries. We developed a novel morphing scheme operating directly in the PCA shape space incorporating the
full set of possible variations including additional information such as patient height, weight and age. Shape
information coded by digitized points is iteratively removed from the PCA model. The extrapolated surface is
computed as the most probable surface in the shape space given the data. Interactivity is enhanced, as additional
bone surface points can be incorporated in real-time. The expected accuracy can be visualized at any stage of
the procedure. In a feasibility study, we applied the proposed scheme to the proximal femur structure. 14
CT scans were segmented and a sequence of correspondence establishing methods was employed to compute the
optimal PCA model. Three anatomical landmarks, the femoral notch and the upper and lower trochanter, are
digitized to register the model to the patient anatomy. Our experiments show that the overall shape information
can be captured fairly accurately by a small number of control points. The added advantage is that it is fast,
highly interactive and needs only a small number of points to be digitized intra-operatively.
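As an illustration of the shape-space computation, the following sketch estimates the most probable full surface from a few digitized points by a regularized least-squares fit of the PCA coefficients. It assumes point-to-vertex correspondences are already known and omits the paper's iterative removal of coded shape information and its use of height, weight and age; all names and sizes are illustrative.

```python
import numpy as np

# Minimal sketch of PCA shape-space extrapolation: estimate the full surface
# from a few digitized points. A Tikhonov term weighted by the inverse mode
# variances plays the role of the Gaussian shape prior.

def extrapolate_surface(mean_shape, modes, eigvals, idx, points, reg=1.0):
    """mean_shape: (N, 3) mean surface vertices
       modes:      (K, N, 3) principal modes of variation
       eigvals:    (K,) variances of the modes
       idx:        (M,) indices of model vertices matched to digitized points
       points:     (M, 3) digitized surface points
       Returns the most probable full surface given the sparse data."""
    K = modes.shape[0]
    A = modes[:, idx, :].reshape(K, -1).T         # (3M, K) sparse-mode matrix
    r = (points - mean_shape[idx]).ravel()        # residual of digitized data
    H = A.T @ A + reg * np.diag(1.0 / eigvals)    # prior-regularized normal eqs.
    b = np.linalg.solve(H, A.T @ r)               # (K,) shape coefficients
    return mean_shape + np.tensordot(b, modes, axes=1)

# Toy usage with a random 2-mode model of 100 vertices:
rng = np.random.default_rng(0)
mean = rng.normal(size=(100, 3))
modes = rng.normal(size=(2, 100, 3))
surface = extrapolate_surface(mean, modes, np.array([4.0, 1.0]),
                              np.array([3, 17, 42]), mean[[3, 17, 42]] + 0.1)
print(surface.shape)  # (100, 3)
```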
High-resolution three-dimensional Talairach labels for human brain mapping using shape-based interpolation
Author(s):
Srinivasan Rajagopalan;
Richard A. Robb
Since the 1810 cranioscopy claims of F. J. Gall, human brain mapping has evolved into a challenging but fascinating scientific endeavor. The work of Jean Talairach in stereotaxic neurosurgery has revolutionized the use of brain atlases to identify the spatial locations of brain activations derived from functional images. The availability of digital print atlases, the standardization of Talairach coordinates as a means of reporting activation spots and the availability of the Talairach daemon have led to the proliferation of publications in human brain mapping. However, the VOTL database used in the Talairach daemon employs nearest-neighbor interpolation of the sparse and unevenly spaced Talairach atlas. This exacerbates the already existing errors in brain mapping. This paper introduces the use of a shape-based interpolation algorithm to derive a high-resolution three-dimensional Talairach atlas. It uses a feature-guided approach for shape-based interpolation of porous and tortuous binary objects. The feature points are derived from the boundaries of the candidate sources and matched non-linearly using a robust, outlier-rejecting, non-linear point matching algorithm based on thin-plate splines. The proposed scheme correctly handles objects with holes, large offsets and drastic invagination, and significantly enhances the sparse Talairach atlas. A similar approach applied to the Schaltenbrand and Wahren atlas would add appreciable value to functional neurosurgery.
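For readers unfamiliar with shape-based interpolation, the following minimal sketch shows the classical signed-distance variant between two binary atlas slices. The paper's feature-guided approach additionally matches boundary features with thin-plate splines to handle holes, large offsets and invaginations; that step is not reproduced here.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Classical shape-based interpolation: convert each binary slice to a signed
# distance map, interpolate the maps linearly, and threshold at zero.

def signed_distance(mask):
    """Positive inside the object, negative outside."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slices(mask_a, mask_b, t):
    """Binary slice at fractional position t in [0, 1] between a and b."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0.0

# Toy usage: interpolate halfway between two shifted discs.
yy, xx = np.mgrid[:64, :64]
a = (xx - 20) ** 2 + (yy - 32) ** 2 < 100
b = (xx - 44) ** 2 + (yy - 32) ** 2 < 100
mid = interpolate_slices(a, b, 0.5)
print(mid.sum())  # a disc-like region centred near x = 32
```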
Identification of deformation using invariant surface information
Author(s):
David Marshall Cash;
Tuhin K. Sinha;
Cheng-Chun Chen;
Benoit M. Dawant;
William C. Chapman;
Michael I. Miga;
Robert L. Galloway Jr.
To compensate for soft-tissue deformation during image-guided
surgical procedures, non-rigid registration methods are often
used. However, most of these algorithms first implement a
rigid registration to provide an initial alignment. In liver tumor
resections, the organ is deformed on a large scale, causing visible
shape change. Unlike neurosurgery, there is no rigid
reference available, so the initial rigid alignment is based on
the organ surface. Any deformation present might lead to
misalignment of non-deformed areas. This study attempts to
develop a technique for the identification of organ deformation
and its separation from the problem of rigid alignment. The basic
premise is to identify areas of the surface that are minimally
deformed and use only these regions for a rigid registration. To
that end, two methods were developed. First, the observation is
made that deformations of this scale cause noticeable changes in
measurements based on differential geometry, such as surface
normals and curvature. Since these values are sensitive to noise,
smooth surfaces were tessellated from point cloud representations.
The second approach was to develop a cost function which rewarded
large regions with low closest point distances. Experiments were
performed using analytic and phantom data, acquiring surface data
both before and after deformation. Multiple registration trials
were performed by randomly perturbing the post-deformed surface
from a ground truth position. After registration, subsurface
target positions were compared with those of the ground truth.
While the curvature-based algorithm was successful with analytic
data, it could not identify enough significant changes in the
surface to be useful for phantom data. The minimal distance
algorithm proved much more effective in separating the
registration, providing significantly improved error measurements
for subsurface targets throughout the whole surface.
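The minimal-distance idea can be illustrated with a trimmed ICP-style loop that, at every iteration, fits the rigid transform only to the fraction of surface points with the smallest closest-point distances, so strongly deformed regions stop driving the alignment. This is a simplified stand-in for the cost function described above, with illustrative parameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def trimmed_icp(src, target, iters=30, keep=0.6):
    """Align src to target using only the `keep` fraction of points with the
    lowest closest-point distance (the 'minimally deformed' set)."""
    tree = cKDTree(target)
    cur = src.copy()
    for _ in range(iters):
        d, j = tree.query(cur)
        order = np.argsort(d)[: int(keep * len(cur))]
        R, t = rigid_fit(cur[order], target[j[order]])
        cur = cur @ R.T + t
    return cur

# Toy usage: recover a small translation of a random surface point cloud.
pts = np.random.default_rng(5).normal(size=(500, 3))
aligned = trimmed_icp(pts + np.array([0.3, 0.0, 0.0]), pts)
print(np.abs(aligned - pts).mean())  # close to 0
```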
Slice-based prostate segmentation in 3D US images based on continuity constraint
Author(s):
Mingyue Ding;
Igor Gyacskov;
Xiaping Yuan;
Maria Drangova;
Aaron Fenster
Slice-based 3D segmentation is a semi-automatic segmentation approach that is used to segment the prostate from
3D ultrasound (US) images. First, the prostate is re-sliced rotationally around a pre-selected rotational axis passing
through the approximate center of the prostate. Using a deformable model, an initial guess is used to refine the prostate
boundary to better fit the outline of the prostate in an initial 2D slice. Then, the refined contour is propagated to its
adjacent slices and deformed. This procedure is repeated until all slices are segmented. Unfortunately, in this
segmentation approach, the segmented contour may not fit the actual prostate boundary properly, propagating the
segmentation error and making it larger. In this paper, we add a continuity constraint in the slice-based 3D
segmentation approach by using an autoregressive (AR) model to correct the endpoint propagation in a cross-sectional
plane perpendicular to the rotational axis. Experiments with 6 patient prostate 3D US images demonstrated that our
method obtains a smooth segmented prostate; the average distance between our algorithmic and the manually
segmented 2D prostates on the cross-sectional plane was about 0.8 mm less than the corresponding distance for the
algorithmic segmentation performed without the continuity constraint.
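A minimal sketch of the continuity idea: the contour endpoints of all rotational slices trace a closed curve in the cross-sectional plane, and an autoregressive prediction along that curve can flag and pull back endpoints that jump away from their neighbours. The AR order, the fixed coefficients and the threshold below are illustrative, not the paper's values.

```python
import numpy as np

def ar_correct(radii, a=(0.5, 0.5), thresh=3.0):
    """radii: contour-endpoint distances from the rotational axis, slice by
    slice around the rotation. Each value is predicted from the previous
    (already corrected) ones with fixed AR(2) coefficients `a` and replaced
    when it deviates by more than `thresh` (same units as radii)."""
    r = np.asarray(radii, dtype=float).copy()
    for i in range(2, len(r)):
        pred = a[0] * r[i - 1] + a[1] * r[i - 2]
        if abs(r[i] - pred) > thresh:
            r[i] = pred          # endpoint ran away: pull it back to the trend
    return r

angles = np.linspace(0.0, 2.0 * np.pi, 90, endpoint=False)
radii = 20.0 + 2.0 * np.sin(2.0 * angles)
radii[40] += 8.0                 # a propagated segmentation error
print(round(ar_correct(radii)[40], 1))   # ~18.4: back on the smooth curve
```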
A novel method for pulmonary emboli visualization from high-resolution CT images
Author(s):
Eric Pichon;
Carol L. Novak;
Atilla P. Kiraly;
David P. Naidich
Pulmonary Embolism (PE) is one of the most common causes of unexpected death in the US. The recent introduction of 16-slice Computed Tomography (CT) machines allows the acquisition of very high-resolution datasets. This has made CT a more attractive means for diagnosing PE, especially for small subsegmental peripheral emboli that were previously difficult to identify. However, the large size of these datasets makes it desirable to have an automated method to help radiologists focus directly on potential candidates that might otherwise be overlooked. We propose a novel method to highlight potential PEs on a 3D representation of the pulmonary arterial tree. First, lung vessels are segmented using mathematical morphology techniques. The density values inside the vessels are then used to color the outside of a Shaded Surface Display (SSD) of the vessel tree. As PEs are clots of significantly lower Hounsfield Unit (HU) values than surrounding contrast-enhanced blood, they appear as salient contrasted patches in this 3D rendering. During preliminary testing on 6 datasets, 19 out of 22 PEs were detected (sensitivity 86%) with 2 false positives for every true positive (Positive Predictive Value 33%).
Visualization techniques for tongue analysis in traditional Chinese medicine
Author(s):
Binh L. Pham;
Yang Cai
Visual inspection of the tongue has been an important diagnostic method of Traditional Chinese Medicine (TCM). Clinical data have shown significant connections between various visceral cancers and abnormalities in the tongue and the tongue coating. Visual inspection of the tongue is simple and inexpensive, but the current practice in TCM is mainly experience-based and the quality of the visual inspection varies between individuals. The computerized inspection method provides quantitative models to evaluate color, texture and surface features on the tongue. In this paper, we investigate visualization techniques and processes to allow interactive data analysis, with the aim to merge computerized measurements with human experts' diagnostic variables based on five-scale diagnostic conditions: Healthy (H), History of Cancers (HC), History of Polyps (HP), Polyps (P) and Colon Cancer (C).
Real-time interactive visualization and manipulation of the volumetric data using GPU-based methods
Author(s):
Carlos Augusto Dietrich;
Luciana Porcher Nedel;
Silvia Delgado Olabarriaga;
Joao Luiz Dihl Comba;
Dinamar José Zanchet;
Ana Maria Marques da Silva;
Edna Frasson de Souza Montero
This work presents a set of tools developed to provide 3D visualization and interaction with large volumetric data that rely on the recent programmable capabilities of consumer-level graphics cards. We exploit the programmable control of the calculations performed by the graphics hardware for generating the appearance of each pixel on the screen to develop real-time, interactive volume manipulation tools. These tools allow real-time modification of visualization parameters, such as color and opacity classification or the selection of a volume of interest, extending the benefit of hardware acceleration beyond display, namely to the computation of voxel visibility. Three interactive tools are proposed: a cutting tool that allows the selection of a convex volume of interest, an eraser-like tool to eliminate non-relevant parts of the image, and a digger-like tool that allows the user to eliminate layers of a 3D image. To interactively apply the proposed tools on a volume, we make use of well-known user interaction techniques, such as those used in 2D painting systems. Our strategy is to minimize the user effort involved in learning the tools. Finally, we illustrate the potential application of the conceived tools for preoperative planning of liver surgery and for the study of liver vascular anatomy. Preliminary results concerning system performance and image quality and resolution are presented and discussed.
Automatic depth determination for sculpting based on volume rendering
Author(s):
Jaeyoun Yi;
Jong Beom Ra
An interactive sculpting tool is widely used to segment a 3-D object on a volume-rendered image because of its intuitiveness. However, it is very hard to segment only an outer part of a 3-D object, since the conventional method cannot handle the depth of removal. In this paper, we present an effective method to determine the depth of removal by using the proposed spring-rod model and the voxel opacity. To determine the depth of removal, a 2-D array of rigid rods is constructed after a 2-D closed loop is defined on a volume-rendered image by the user. Each rigid rod is located at a digitized position inside the user-drawn closed loop and its direction coincides with that of the projecting rays. Every rod has a frictionless ball, which is interconnected with its neighboring balls through ideal springs. In addition, we assume that an external force defined by the corresponding voxel-opacity value is exerted on each ball along the direction of the projected ray. Using this spring-rod system model, we can determine the final positions of the balls, which represent the depths of removal. Then, the outer part can be properly removed. The proposed method has been applied to various medical image data and provides robust results with easy user interaction.
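The spring-rod system can be sketched as an iterative relaxation in which each ball advances along its ray, is resisted by the local voxel opacity, and is coupled to its neighbours through a Laplacian (spring) term. The weights, step counts and periodic boundary handling below are illustrative simplifications, not the paper's formulation.

```python
import numpy as np

# Minimal sketch of the spring-rod relaxation: np.roll gives periodic borders
# for brevity; a real tool would clamp at the user-drawn loop boundary.

def relax_depths(opacity, steps=200, k_spring=0.25, k_opac=1.0, advance=0.5):
    """opacity: (H, W, D) opacity sampled along each ray inside the loop.
    Returns an (H, W) field of removal depths."""
    H, W, D = opacity.shape
    depth = np.zeros((H, W))
    rows = np.arange(H)[:, None]
    cols = np.arange(W)[None, :]
    for _ in range(steps):
        # Spring force: discrete Laplacian keeps the depth sheet smooth.
        lap = (np.roll(depth, 1, 0) + np.roll(depth, -1, 0) +
               np.roll(depth, 1, 1) + np.roll(depth, -1, 1) - 4.0 * depth)
        # External force: opacity at the current depth resists penetration.
        zi = np.clip(depth.astype(int), 0, D - 1)
        resist = opacity[rows, cols, zi]
        depth = np.clip(depth + advance - k_opac * resist + k_spring * lap,
                        0, D - 1)
    return depth

op = np.zeros((32, 32, 40))
op[..., 20:] = 1.0                                # empty space, then tissue
print(relax_depths(op).mean())                    # 20.0: balls halt at the boundary
```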
Haptic vascular modeling and visualization in web-enabled interventional neuroradiology simulation system
Author(s):
Yiping Lu;
Xin Ma;
KiaFock Loe;
CheeKong K. Chui;
Wieslaw L. Nowinski
Building a virtual reality system for Minimally Invasive Surgery (MIS) is a challenging problem in the context of the World Wide
Web. In this paper, we present a framework for a web-enabled interventional neuroradiology simulation system with force
feedback. Based on the hierarchical information from segmented human vascular images, we produce a small data-size
control mesh of the vasculature and finally obtain a smooth vascular model. When a collision occurs, we calculate the
magnitude of the force feedback according to the physical parameters under which the collision occurs and give the trainee
haptic feedback through the force feedback hardware that connects to the simulation system. Our method has three features:
1) the vascular model exhibits little memory consumption; 2) the vascular model delivers good rendering performance;
3) the collision detection along with the force feedback computation model is distributed and can provide good real-time
reaction to the user. The initial result obtained from applying the method in our prototype of a web-enabled
simulation system is encouraging: the 3D visualization of the human vasculature and the haptic feedback mechanism
present the trainee with a vivid surgical simulation environment, and the real-time force reaction is also an exciting feature
for a web-enabled surgical simulation system.
Fast treatment planning with IVUS imaging in intravascular brachytherapy
Author(s):
Raffaele Novario;
Carla Bianchi;
Rita Lorusso;
Chiara Sampietro;
Fabio Tanzi;
Leopoldo Conte;
Mario Vescovi;
Massimo Caccia;
Mario Alemi;
Chiara Cappellini
The planned target volume in intracoronary brachytherapy is the vessel wall. The success of the treatment depends on delivering doses no lower than 8 Gy and no higher than 30 Gy.
An automatic procedure to acquire intravascular ultrasound images of the whole volume to be irradiated is presented; a motor-driven pullback device, with catheter velocities of 0.5 and 1 mm/s, allows acquisition of the entire target volume of the vessel with a number of slices normally ranging from 400 to 1600.
A semiautomatic segmentation and classification of the different structures in each slice of the vessel is proposed. The segmentation and the classification of the structures allow the calculation of their volumes; this is very useful in particular for plaque volume assessment in the follow-up of patients. A 3D analyser tool was developed in order to visualize the walls and the lumen of the vessel. The knowledge, for each axial slice, of the position of the source (in the centre of the catheter) and the position of the target (vessel walls) allows the calculation of a set of source-target distances. Given a time of irradiation and a type of source, a dose volume histogram (DVH) describing the distribution of the doses in the whole target can be obtained. The whole procedure takes a few minutes and is therefore compatible with a safe treatment of the patient, giving an important indication of the quality of the radiation treatment selected.
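The DVH step reduces to histogramming doses computed from the per-slice source-target distances. The sketch below uses a toy inverse-square dose-rate kernel purely as a placeholder; a real treatment-planning implementation would use tabulated source data (e.g. TG-43 tables) instead.

```python
import numpy as np

def dose_rate(r_mm, rate_at_2mm=1.0):
    """Toy radial dose-rate kernel (Gy/s); illustrative, not real source data."""
    return rate_at_2mm * (2.0 / np.maximum(r_mm, 0.1)) ** 2

def dvh(distances_mm, time_s, bins=50):
    """distances_mm: flat array of source-target distances over all slices.
    Returns dose bin edges and the % of target volume receiving >= that dose."""
    dose = dose_rate(distances_mm) * time_s
    hist, edges = np.histogram(dose, bins=bins)
    cum = hist[::-1].cumsum()[::-1] / hist.sum() * 100.0
    return edges[:-1], cum

edges, cum = dvh(np.random.default_rng(1).uniform(1.5, 4.0, 1000), time_s=300)
print(cum[0])  # 100.0: all of the wall receives at least the smallest binned dose
```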
Tracking alignment of sparse ultrasound with preoperative images of the liver and an interventional plan using models of respiratory motion and deformation
Author(s):
Jane M. Blackall;
Graeme P. Penney;
Andrew P. King;
Andreas N. Adam;
David J. Hawkes
We present a method for non-rigid registration of preoperative magnetic resonance (MR) images and an interventional plan to sparse intraoperative ultrasound (US) of the liver. Our clinical motivation is to enable the accurate transfer of information from preoperative imaging modalities to intraoperative ultrasound to aid needle placement for thermal ablation of liver metastases. An initial rigid registration to intraoperative coordinates is obtained using a set of ultrasound images acquired at maximum exhalation. A pre-processing step is applied to both the MR and US images. The preoperative image and plan are then aligned to a single ultrasound slice acquired at an unknown point in the breathing cycle where the liver is likely to have moved and deformed relative to the preoperative image. Alignment is constrained using a patient-specific model of breathing motion and deformation. Target registration error is estimated by carrying out simulation experiments using sparsely re-sliced MR volumes in place of real ultrasound and comparing the registration results to a gold-standard registration performed on the full MR volume. Experiments using real ultrasound are then carried out and verified using visual inspection.
Ultrasound 3D volume reconstruction from an optically tracked endorectal ultrasound (TERUS) probe
Author(s):
John R. Warmath;
Philip Bao M.D.;
Alan J. Herline M.D.;
Robert L. Galloway Jr.
Endorectal Ultrasound (ERUS) is essential for the accurate staging of rectal cancer. Staging is important to the treatment of patients with rectal cancer because it will determine whether the patient receives preoperative radiotherapy for the purpose of tumor downstaging. ERUS images are intrinsically different from images taken by Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) in that ultrasound provides 2D images while CT and MRI provide 3D data sets that can be rendered into volumes and then re-sliced and viewed as 2D images in any desired orientation. This fundamental difference between ultrasound and tomographic imaging modalities creates a problem when a direct comparison between ultrasound and CT or MRI is desired. To accomplish the goal of following tumor volume over time, an accurate ultrasound volume must be constructed. By optically tracking the ERUS probe as data is collected, the intensity value for each pixel is saved and then inserted into the nearest voxel in the ERUS volume matrix. We validate the accuracy of volume reconstruction by finding the 3D coordinates of targets that are inside of the ERUS volume and comparing them to their known physical locations.
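The nearest-voxel insertion itself is compact: each B-mode pixel is pushed through the frame's tracked pose into volume coordinates, rounded, and written. The sketch below assumes a single calibrated 4x4 image-to-volume transform per frame and illustrative spacings; function and parameter names are hypothetical.

```python
import numpy as np

def insert_frame(volume, frame, T_img_to_vol, pixel_mm, voxel_mm):
    """volume: (X, Y, Z) array; frame: (H, W) B-mode image;
    T_img_to_vol: 4x4 homogeneous transform (from the optical tracker and
    probe calibration) mapping the image plane into volume space."""
    H, W = frame.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pts = np.stack([u.ravel() * pixel_mm, v.ravel() * pixel_mm,
                    np.zeros(u.size), np.ones(u.size)])      # image plane z=0
    xyz = (T_img_to_vol @ pts)[:3] / voxel_mm                # mm -> voxel units
    ijk = np.rint(xyz).astype(int)                           # nearest voxel
    ok = np.all((ijk >= 0) & (ijk.T < volume.shape).T, axis=0)
    volume[ijk[0, ok], ijk[1, ok], ijk[2, ok]] = frame.ravel()[ok]

vol = np.zeros((64, 64, 64), dtype=np.uint8)
insert_frame(vol, np.full((48, 48), 200, np.uint8), np.eye(4),
             pixel_mm=0.5, voxel_mm=0.5)
print(vol.max())  # 200
```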
Tracked ultrasound for laparoscopic surgery
Author(s):
Philip Bao M.D.;
John R. Warmath;
Benjamin Poulose M.D.;
Robert L. Galloway Jr.;
Alan J. Herline M.D.
We calibrate a tracked laparoscopic ultrasound probe for application in image-guided surgery and 3-D volume reconstruction. With a plane-mapping technique, the spatial relationship between the ultrasound beam emitted from the tip of the probe and the local coordinate system of the probe was determined by mapping the beam with an optically tracked pointer. A cross-wire calibration technique was also performed for comparison. The accuracy and precision of the calibrated probe were evaluated by measuring its ability to localize targets in a water bath. Target registration error depended upon probe position, varying from an average of 0.88 mm for the fixed probes to 6.09 mm for a moving probe. This error can be reduced to 4.54 mm by accounting for target localization error, which is the error in determining the position of the probe itself. These results validate the plane-mapping calibration technique for this type of ultrasound probe, and better probe tracking is expected to reduce the overall registration error.
CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API
Author(s):
Emad M. Boctor;
Anand Viswanathan;
Steve Pieper;
Michael A. Choti M.D.;
Russell H. Taylor;
Ron Kikinis M.D.;
Gabor Fichtinger
Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them in versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool for the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (e.g., dual-view visualization, registration, real-time tracking, segmentation, etc.) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer environment. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.
Endovascular navigation based on real/virtual environments cooperation for computer-assisted TEAM procedures
Author(s):
Cemil Goksu;
Pascal Haigron;
Oscar Acosta;
Antoine Lucas
Transfemoral Endovascular Aneurysm Management (TEAM), a less invasive treatment of Abdominal Aortic Aneurysms (AAA), is a highly specialized procedure, using advanced devices and requiring a high degree of clinical expertise. There is a great need for a navigation guidance system able to make this procedure safer and more precise. In this context of computer-assisted minimally invasive interventional procedures, we propose a new framework based on the cooperation between the real environment where the intervention takes place and a patient-specific virtual environment, which contains a virtual operating room including a C-arm model as well as the 3D preoperative patient data. This approach aims to deal with the lack of knowledge about soft tissue behavior by better exploiting available information before and during the intervention through a cooperative approach. In order to assist the TEAM procedure in standard interventional conditions, we applied this framework to design a 3D navigation guidance system, which has been successfully used during three TEAM interventions in the operating room. Intra-operatively, anatomical feature-based 2D/3D registration between a single 2D fluoroscopic view, reproduced from the pose planned in the virtual environment, and the preoperative CT volume is performed by means of a chamfer distance map. The 3D localization of the endovascular devices (sheath, guide wire, prosthesis), tracked either interactively or automatically on 2D sequences, is constrained to either the 3D vascular tree or a 3D device model. Moreover, we propose a first solution to take into account the tissue deformations during this particular intervention and to update the virtual environment with the intraoperative data.
Advanced and standardized evaluation of neurovascular compression syndromes
Author(s):
Peter Hastreiter;
Fernando Vega Higuera;
Bernd Tomandl M.D.;
Rudolf Fahlbusch M.D.;
Ramin Naraghi M.D.
Caused by contact between vascular structures and the root entry or exit zone of cranial nerves, neurovascular compression syndromes are associated with different neurological diseases (trigeminal neuralgia, hemifacial spasm, vertigo, glossopharyngeal neuralgia) and show a relation to essential arterial hypertension. As presented previously, the semi-automatic segmentation and 3D visualization of strongly T2-weighted MR volumes has proven to be an effective strategy for a better spatial understanding prior to operative microvascular decompression. After explicit segmentation of coarse structures, the tiny target nerves and vessels contained in the area of cerebrospinal fluid are segmented implicitly using direct volume rendering. However, with this strategy the delineation of vessels in the vicinity of the brainstem and of those at the border of the segmented CSF subvolume is critical. Therefore, we suggest registration with MR angiography and introduce consecutive fusion after semi-automatic labeling of the vascular information. Additionally, we present an approach for automatic 3D visualization and video generation based on predefined flight paths. Thereby, a standardized evaluation of the fused image data is supported and the visualization results are optimally prepared for intraoperative application. Overall, our new strategy contributes to a significantly improved 3D representation and evaluation of vascular compression syndromes. Its value for diagnosis and surgery is demonstrated with various clinical examples.
Automatic adjustment of bidimensional transfer functions for direct volume visualization of intracranial aneurysms
Author(s):
Fernando Vega Higuera;
Natascha Sauber;
Bernd Tomandl;
Christopher Nimsky;
Guenther Greiner;
Peter Hastreiter
Direct volume visualization of computed tomography data is based on the mapping of data values to colors and opacities with lookup tables known as transfer functions (TF). The limitations of one-dimensional TFs often become evident when it comes to the visualization of aneurysms close to the skull base. Computed tomography angiography data is used for the 3D representation of the vessels filled with contrast medium. The reduced intensity differences between osseous tissue and contrast medium lead to strong artifacts and ambiguous visualizations. We introduced the use of bidimensional TFs based on measured intensities and gradient magnitudes for the visualization of aneurysms involving the skull base. The obtained results are clearly superior to a standard approach with one-dimensional TFs. Nevertheless, the additional degree of freedom increases the difficulty involved in creating adequate TFs. In order to address this problem, we introduce automatic adjustment of bidimensional TFs through a registration of the respective 2D histograms. Initially, a dataset is set as reference and the information contained in its 2D histogram (intensities and gradient magnitudes) is used to create a TF template which produces a clear visualization of the vessels. When a new dataset is examined, elastic registration of the reference and target 2D histograms is carried out. The resulting free-form deformation is then used for the automatic adjustment of the reference TF, in order to automatically obtain a clear volume visualization of the vascular structures within the examined dataset. Results are comparable to manually created TFs. This approach makes it possible to successfully use bidimensional TFs without technical insight and training.
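A bidimensional transfer function is, at its core, a 2D lookup table indexed by intensity and gradient magnitude. The following sketch applies such a table to a volume; the table layout and thresholds are illustrative, and the histogram-registration step for automatic adjustment is not reproduced here.

```python
import numpy as np

def gradient_magnitude(vol):
    gx, gy, gz = np.gradient(vol.astype(float))
    return np.sqrt(gx**2 + gy**2 + gz**2)

def apply_tf2d(vol, tf, i_max, g_max):
    """tf: (Ni, Ng) opacity table indexed by (intensity, gradient magnitude);
    returns per-voxel opacity."""
    g = gradient_magnitude(vol)
    i_idx = np.clip((vol / i_max * (tf.shape[0] - 1)).astype(int),
                    0, tf.shape[0] - 1)
    g_idx = np.clip((g / g_max * (tf.shape[1] - 1)).astype(int),
                    0, tf.shape[1] - 1)
    return tf[i_idx, g_idx]

# Toy table: opaque only for high intensity combined with low-to-mid gradients,
# so contrast-filled vessels can be separated from bone despite overlapping
# intensity ranges.
tf = np.zeros((64, 64))
tf[40:, :32] = 1.0
vol = np.random.default_rng(2).integers(0, 500, (32, 32, 32))
alpha = apply_tf2d(vol, tf, i_max=500, g_max=300)
print(alpha.shape)  # (32, 32, 32)
```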
Mapping the coronary arteries on a sphere in CT angiography
Author(s):
Guy A. Lavi
Current approaches for coronary artery inspection using cardiac CT angiography scans include curved planar reformation (CPR), slab maximum-intensity projection (MIP) and volume rendering (VR) techniques. While the first two allow a detailed examination of only one vessel or a few segments of the coronary artery tree at a time, the VR techniques are not considered suitable for a thorough clinical assessment. An innovative concept of visualization aimed at revealing the entire coronary tree in a CPR-type environment is presented. The new approach uses a sphere or an ellipsoid as a base surface to map the coronary tree. Using the spherical (or ellipsoidal) coordinate system a “true” surface running through the centerlines of all the vessels is defined. Resampling the volume data with this (preferably thick) surface and using a maximum-intensity projection will produce three possible modes of visualization. In one mode the “true form” surface is texture-mapped with the resampled volume data, while in another the data is projected onto the sphere that served as a base surface, forming the “Globe” mode of visualization. Peeling the data to form a 2D “map” of the entire coronary tree in its context in the heart constitutes the third mode.
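The “Globe” mode can be sketched as a resampling of the volume on a spherical grid followed by a maximum-intensity projection over a thin radial band, producing a 2D (theta, phi) map. Centre, radii and grid sizes below are illustrative, and the “true form” surface through the vessel centerlines is not modeled here.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def spherical_mip(vol, center, r0, r1, n_r=24, n_theta=180, n_phi=360):
    """Resample vol on a spherical shell of radii [r0, r1] around `center`
    (voxel units) and take the maximum intensity along each radial ray."""
    th = np.linspace(0, np.pi, n_theta)
    ph = np.linspace(-np.pi, np.pi, n_phi)
    r = np.linspace(r0, r1, n_r)
    R, T, P = np.meshgrid(r, th, ph, indexing="ij")
    x = center[0] + R * np.sin(T) * np.cos(P)
    y = center[1] + R * np.sin(T) * np.sin(P)
    z = center[2] + R * np.cos(T)
    samples = map_coordinates(vol, [x, y, z], order=1, cval=float(vol.min()))
    return samples.max(axis=0)            # MIP across the radial band

vol = np.random.default_rng(3).normal(size=(64, 64, 64))
print(spherical_mip(vol, (32, 32, 32), r0=15, r1=25).shape)  # (180, 360)
```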
Stabilized display of coronary x-ray image sequences
Author(s):
Robert A. Close;
James S. Whiting;
Xiaolin Da;
Neal L. Eigler
Display stabilization is a technique by which a feature of interest in a cine image sequence is tracked and then shifted to remain approximately stationary on the display device. Prior simulations indicate that display stabilization with high playback rates (30 f/s) can significantly improve detectability of low-contrast features in coronary angiograms. Display stabilization may also help to improve the accuracy of intra-coronary device placement. We validated our automated tracking algorithm by comparing the inter-frame difference (jitter) between manual and automated tracking of 150 coronary x-ray image sequences acquired on a digital cardiovascular X-ray imaging system with CsI/a-Si flat panel detector. We find that the median (50%) inter-frame jitter between manual and automatic tracking is 1.41 pixels or less, indicating a jump no further than an adjacent pixel. This small jitter implies that automated tracking and manual tracking should yield similar improvements in the performance of most visual tasks. We hypothesize that cardiologists would perceive a benefit in viewing the stabilized display as an addition to the standard playback of cine recordings. A benefit of display stabilization was identified in 87 of 101 sequences (86%). The most common tasks cited were evaluation of stenosis and determination of stent and balloon positions. We conclude that display stabilization offers perceptible improvements in the performance of visual tasks by cardiologists.
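Given per-frame tracking results, the stabilization itself amounts to shifting each frame so the tracked feature stays at a fixed display position. A minimal sketch with integer shifts follows (the tracking step and sub-pixel interpolation are omitted; np.roll wraps at the borders, where a real display would pad):

```python
import numpy as np

def stabilize(frames, tracks, anchor=None):
    """frames: (T, H, W) cine loop; tracks: (T, 2) feature (row, col) per frame.
    Shifts every frame so the tracked feature sits at `anchor` (default: its
    position in the first frame)."""
    tracks = np.asarray(tracks)
    if anchor is None:
        anchor = tracks[0]
    out = np.zeros_like(frames)
    for t, frame in enumerate(frames):
        dr, dc = np.rint(anchor - tracks[t]).astype(int)
        out[t] = np.roll(np.roll(frame, dr, axis=0), dc, axis=1)
    return out

frames = np.zeros((3, 64, 64))
for t, (r, c) in enumerate([(30, 30), (32, 31), (29, 33)]):
    frames[t, r, c] = 1.0                     # a moving feature of interest
stab = stabilize(frames, [(30, 30), (32, 31), (29, 33)])
print([tuple(np.argwhere(f)[0]) for f in stab])  # all at (30, 30)
```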
Four-dimensional modeling of the heart for image guidance of minimally invasive cardiac surgeries
Author(s):
Marcin Wierzbicki;
Maria Drangova;
Gerard Guiraudon;
Terry Peters
Minimally invasive surgery of the beating heart can be associated with two major limitations: selecting port locations for optimal target coverage from x-rays and angiograms, and navigating instruments in a dynamic and confined 3D environment using only an endoscope. To supplement the current surgery planning and guidance strategies, we continue developing VCSP - a virtual reality, patient-specific, thoracic cavity model derived from 3D pre-procedural images. In this work, we apply elastic image registration to 4D cardiac images to model the dynamic heart. Our method is validated on two image modalities, and for different parts of the cardiac anatomy. In a helical CT dataset of an excised heart phantom, we found that the artificial motion of the epicardial surface can be extracted to within 0.93 ± 0.33 mm. For an MR dataset of a human volunteer, the error for different heart structures such as the myocardium, right and left atria, right ventricle, aorta, vena cava, and pulmonary artery, ranged from 1.08 ± 0.18 mm to 1.14 ± 0.22 mm. These results indicate that our method of modeling the motion of the heart is not only easily adaptable but also sufficiently accurate to meet the requirements for reliable cardiac surgery training, planning, and guidance.
Temporal and spatial resolution required for imaging myocardial function
Author(s):
Christian Dieter Eusemann;
Richard A. Robb
4-D functional analysis of myocardial mechanics is an area of significant interest and research in cardiology and vascular/interventional radiology. Current multidimensional analysis is limited by insufficient temporal resolution of x-ray and magnetic resonance based techniques, but recent improvements in system design hold hope for faster and higher resolution scans to improve images of moving structures, allowing more accurate functional studies, such as in the heart. This paper provides a basis for the requisite temporal and spatial resolution for useful imaging during individual segments of the cardiac cycle. Multiple sample rates during systole and diastole are compared to determine an adequate sample frequency to reduce regional myocardial tracking errors. Concurrently, out-of-plane resolution has to be sufficiently high to minimize the partial volume effect. Temporal resolution and out-of-plane spatial resolution are related factors that must be considered together. The data used for this study is a DSR dynamic volume image dataset with high temporal and spatial resolution using implanted fiducial markers to track myocardial motion. The results of this study suggest a reduced exposure and scan time for x-ray and magnetic resonance imaging methods, since a lower sample rate during systole is sufficient, whereas the period of rapid filling during diastole requires higher sampling. This could potentially reduce the cost of these procedures and allow higher patient throughput.
Atrial myocardium model extraction
Author(s):
Bernhard Pfeifer;
Friedrich Hanser;
Christoph Hintermueller;
Robert Modre-Osprian;
Gerald Fischer;
Michael Seger;
Christian Kremser;
Bernhard Tilg
We present two approaches for reconstructing a patient’s atrial myocardium from morphological image data.
Both approaches are based on a segmentation of the left and right atrial blood masses which mark the inner
border of the atrial myocardium. The outer border of the atrial myocardium is reconstructed differently by the
two approaches. The surface manipulation approach is based on a triangle manipulation procedure while the
label-voxel-field approach adds or deletes label-voxels of the segmented blood mass labelset. Both approaches
yield models of a patient’s atrial myocardium that qualify for further applications. The obtained atrial models
have been used many times in the construction of a patient’s volume conductor model needed for solving
the electrocardiographic inverse problem. The label-voxel-field approach is to be favored because of its superior
performance and its suitability for implementation in a segmentation pipeline.
Usefulness of image morphing techniques in cancer treatment by conformal radiotherapy
Author(s):
Hussein Atoui;
David Sarrut;
Serge Miguet
Conformal radiotherapy is a cancer treatment technique that targets high-energy X-rays to tumors with minimal
exposure to surrounding healthy tissues. Irradiation ballistics is calculated based on an initial 3D Computerized
Tomography (CT) scan. At every treatment session, the random positioning of the patient, compared
to the reference position defined by the initial 3D CT scan, can generate treatment inaccuracies. Positioning
errors potentially predispose to dangerous exposure to healthy tissues as well as insufficient irradiation to the
tumor. A proposed solution would be the use of portal images generated by Electronic Portal Imaging Devices
(EPID). Portal images (PI) allow a comparison with reference images retained by physicians, namely Digitally
Reconstructed Radiographs (DRRs). At present, physicians must estimate patient positional errors by visual
inspection. However, this may be inaccurate and time-consuming. The automation of this task has been the
subject of much research. Unfortunately, the intensive use of DRRs and the high computing time required
have prevented real-time implementation. We are currently investigating a new method for DRR generation that
calculates intermediate DRRs by 2D deformation of previously computed DRRs. We approach this investigation
with the use of a morphing-based technique named mesh warping.
Accuracy of needle implantation in brachytherapy using a medical AR system: a phantom study
Author(s):
Stefan Wesarg;
Evelyn A. Firle;
Bernd Schwald;
Helmut Seibert;
Pawel Zogal;
Sandra Roeddiger
Brachytherapy is the treatment method of choice for patients with a tumor relapse after a radiation therapy
with external beams, or with tumors in regions with sensitive surrounding organs-at-risk, e.g., prostate tumors. The
standard needle implantation procedure in brachytherapy uses pre-operatively acquired image data displayed as
slices on a monitor beneath the operation table. Since this information allows only a rough orientation for the
surgeon, the position of the needles has to be verified repeatedly during the intervention.
Within the Medarpa project, a transparent display has been developed as the core component of a medical
Augmented Reality (AR) system. There, pre-operatively acquired image data is displayed together with
the position of the tracked instrument, allowing a navigated implantation of the brachytherapy needles. The
surgeon is enabled to see the anatomical information as well as the virtual instrument in front of the operation
area. Thus, the Medarpa system serves as a "window into the patient".
This paper deals with the results of first clinical trials of the system. Phantoms have been used for evaluating
the achieved accuracy of the needle implantation. This has been done by comparing the output of the system
(instrument positions relative to the phantom) with the real positions of the needles measured by means of a
verification CT scan.
Improved automated brachytherapy seed localization in trans-urethral ultrasound data
Author(s):
David R. Holmes III;
Richard A. Robb
The utility of Trans-urethral ultrasound (TUUS) for permanent prostate brachytherapy is dependent on the quality of the data acquired. Previous work on Trans-urethral ultrasound suggested that TUUS acquires higher quality images than Trans-rectal ultrasound. Manual segmentation of TUUS data limits the utility of TUUS intra-operatively; however, automated segmentation methods can be used to reduce the processing time. This research describes a segmentation paradigm which incorporates a priori information about the implanted seeds in order to provide a reasonable segmentation. The segmentation framework incorporates information about the size, shape, and orientation of the seeds. The paradigm integrates this information using fuzzy inference rules. The results show that the automated method is at least 4 times faster than manual segmentation. The sensitivity and specificity of the method were 76% and 93%, respectively. The decreased sensitivity is acceptable because it is the specificity which is more critical in determining whether an adequate dose has been delivered. Future work with this method will increase accuracy by incorporating additional information about the seeds.
Robotic-aided 3D TRUS guided intraoperative prostate brachytherapy
Author(s):
Zhouping Wei;
Gang Wan;
Lori Gardi;
Donal B. Downey;
Aaron Fenster
We have developed a robot-aided, 3D transrectal ultrasound (TRUS) guided intraoperative prostate brachytherapy system. This system allows brachytherapy needles to be inserted into the prostate along various trajectories, including oblique ones, to avoid pubic arch interference. We unified the robotic coordinate system with the 3D TRUS image coordinate system. In addition, we also developed a method to automatically detect the needle in TRUS images for oblique insertion. We have evaluated our prototype system using prostate phantoms in terms of different needle insertion depths and distances of the needle from the TRUS transducer. We have shown that our robot-aided 3D TRUS guided system was capable of placing the needle tip with approximately 0.74 mm ± 0.24 mm accuracy at a target identified in the 3D TRUS image. Brachytherapy accuracy was tested by dropping 0.8 mm beads into prostate phantoms via various angles up to ± 20°. Our results showed that the bead-dropping accuracy was 2.59 mm ± 0.76 mm, with the error due to needle deflection caused by the needle's bevel.
Fluoroscopy to ultrasound image registration using implanted seeds as fiducials during permanent prostate brachytherapy
Author(s):
Yi Su;
Brian J. Davis;
Michael G. Herman;
Richard A. Robb
Show Abstract
A method using implanted seeds as fiducials to register ultrasound (US) images with fluoroscopic images for prostate
brachytherapy dose analysis is proposed. In a simulation study, transformed point clouds with 154 points were sampled
at different sampling rates with different levels of noise applied and then registered with the original imaging data.
Superior performance in comparison to conventional four-point fiducial registration was demonstrated. The
root-mean-squared distance at registration was 0.962 mm when 25% of the points were used as fiducials and with the
noise level at 3 mm. A phantom with 64 implanted seeds was scanned by CT at 1.5 mm intervals and by step-section US at 2.5 mm
intervals. Fluoroscopic images of the phantom were also taken at several different projection angles. Coordinates of
implanted seeds were determined for each imaging modality. CT-US and fluoroscopy-US registration were then carried
out using the implanted seeds as fiducials. Over 90% overlap between the segmented CT prostate volume and US
prostate volume was observed at registration, and the distance between the centers of the registered volumes was 3 mm.
The mean distance between the seed coordinates at registration was 2.5 mm for CT and US, and 3 mm for fluoroscopy
and US. These results suggest that registration of fluoroscopic images with US images of the prostate can be effectively
accomplished by using implanted seeds as fiducials. Consequently, accurate US-fluoroscopic image registration should
facilitate intraoperative radiation dosimetry for permanent prostate brachytherapy.
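The abstract does not state which point-based solver the authors used; a standard choice for rigid fiducial registration with known correspondences is the closed-form SVD solution of the orthogonal Procrustes problem, sketched here with a synthetic seed cloud.
```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst.
    src, dst: (N, 3) arrays of corresponding fiducial coordinates."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Register a noisy, rotated copy of a synthetic seed cloud and report RMS error
rng = np.random.default_rng(0)
seeds = rng.uniform(0, 50, size=(40, 3))                 # synthetic seed positions (mm)
theta = np.deg2rad(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
moved = seeds @ R_true.T + np.array([5.0, -2.0, 3.0]) + rng.normal(0, 0.5, seeds.shape)
R, t = rigid_register(seeds, moved)
rms = np.sqrt(np.mean(np.sum((seeds @ R.T + t - moved) ** 2, axis=1)))
print(f"post-registration RMS distance: {rms:.3f} mm")
```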
3D guide wire tracking for navigation in endovascular interventions
Author(s):
Shirley A.M. Baert;
Theo van Walsum;
Wiro J. Niessen
Show Abstract
A method is presented to track the guide wire during endovascular interventions and to visualize it in 3D, together with the vasculature of the patient. The guide wire is represented by a 3D spline whose position is optimized using internal and external forces. For the external forces, the 3D spline is projected onto the biplane projection images that are routinely acquired. Feature images are constructed based on the enhancement of line-like structures in the projection images. A threshold is applied such that where the probability of a pixel belonging to the guide wire is sufficiently high, the feature image is used directly, whereas outside this region a distance transform is computed to improve the capture range of the method. In preliminary experiments, it is shown that some of the problems of the 2D tracking which were presented in previous work can successfully be circumvented using the 3D tracking method.
Hybrid tracking system for flexible endoscopes
Author(s):
Johann Hummel D.V.M.;
Michael Figl;
Wolfgang Birkfellner;
Christopher Ede;
Rudolf Seemann;
Helmar Bergmann
Show Abstract
With the miniaturization of electromagnetic tracking systems (EMTS), the range of possible applications in
image-guided therapy has been expanding. A diameter smaller than 1 mm allows these sensors to be mounted in
the working channel of flexible endoscopes for navigation within the body. Knowing the exact position of the
instrument with respect to the patient's preoperative CT or MR images can simplify and ease navigation
during various interventions. The Aurora EMTS seems to be an ideal choice for this purpose. However, this
system exhibits an important limitation: the sensor offers just 5 degrees of freedom (DOF), which means that
rotations around the axis of the sensor cannot be measured. To overcome this restriction we used an additional
optical tracking system (OTS), which is calibrated to deliver the missing DOF.
To evaluate the suitability of our new navigation system we measured the Fiducial Registration Error (FRE)
of the diverse registrations and the Target Registration Error (TRE) for the complete transformation from the US
space to the CT space. The FRE for the ultrasound calibration amounted to 3.2 mm ± 2.2 mm, resulting from
10 calibration procedures. For the transformation from the OTS reference system to the EMTS emitter space
we found an average FRE of 0.8 mm ± 0.2 mm. The FRE for the CT registration was 1.0 mm ± 0.3 mm.
The TRE was found to be 5.5 mm ± 3.2 mm.
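For readers unfamiliar with the error measures quoted above, FRE and TRE can be computed as follows once a rigid transform (R, t) has been estimated; this is the standard definition, not code from the paper.
```python
import numpy as np

def fre(fids_src, fids_dst, R, t):
    """Fiducial Registration Error: RMS residual at the fiducials that
    were used to estimate the rigid transform (R, t)."""
    resid = fids_dst - (fids_src @ R.T + t)
    return float(np.sqrt(np.mean(np.sum(resid**2, axis=1))))

def tre(targets_src, targets_dst, R, t):
    """Target Registration Error: distances at target points that were
    NOT used in the registration -- the clinically relevant figure."""
    return np.linalg.norm(targets_src @ R.T + t - targets_dst, axis=1)
```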
3D motion tracking of pulmonary lesions using CT fluoroscopy images for robotically assisted lung biopsy
Author(s):
Sheng Xu;
Gabor Fichtinger;
Russell H. Taylor;
Kevin R. Cleary
Show Abstract
We are developing a prototype system for robotically assisted lung biopsy. For directing the robot in biopsy needle placement, we propose a non-invasive algorithm to track the 3D position of the target lesion using 2D CT fluoroscopy image sequences. A small region of the CT fluoroscopy image is registered to a corresponding region in a pre-operative CT volume to infer the position of the target lesion with respect to the imaging plane. The registration is implemented in a coarse-to-fine fashion. The local deformation between the two regions is modeled by an affine transformation. The sum-of-squared-differences (SSD) between the two regions is minimized using the Levenberg-Marquardt method. Multi-resolution and multi-start strategies are used to avoid local minima. As a result, multiple candidate transformations between the two regions are obtained, from which the true transformation is selected by similarity voting. The true transformation of each frame of the CT fluoroscopy image is then incorporated into a Kalman filter to predict the lesion’s position for the next frame. Tests were completed to evaluate the performance of the algorithm using a respiratory motion simulator and a swine animal study.
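As a rough sketch of the registration core described above, the following minimizes an SSD cost with Levenberg-Marquardt; for brevity it estimates only a 2D translation rather than the paper's affine model, and the synthetic images are illustrative.
```python
import numpy as np
from scipy import ndimage, optimize

def register_ssd(fixed, moving):
    """Estimate a 2D translation minimizing the sum-of-squared-differences
    between a fixed region and a moving region, using Levenberg-Marquardt
    on the pixelwise residuals."""
    def residuals(p):
        warped = ndimage.shift(moving, shift=p, order=1, mode='nearest')
        return (warped - fixed).ravel()
    result = optimize.least_squares(residuals, x0=np.zeros(2), method='lm')
    return result.x

# Synthetic check: shift an image and recover the shift
rng = np.random.default_rng(1)
img = ndimage.gaussian_filter(rng.normal(size=(64, 64)), 3)
moved = ndimage.shift(img, (2.3, -1.7), order=1, mode='nearest')
print(register_ssd(moved, img))  # approximately [2.3, -1.7]
```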
3D surgical planning and navigation for CMF surgery
Author(s):
Jonas Chapuis;
Tobias Rudolph;
Blake Borgesson;
Elena De Momi;
Ion P. Pappas;
Wok Hallermann;
Alexander Schramm;
Marco Caversaccio
Show Abstract
In this paper we describe a system for corrective and reconstructive CMF surgery that allows planning of bone segment relocations in 3D and transfer of the goal positions into an intra-operative navigation module, which provides guidance to realize the planned movement. In addition, the pre-operative planning module offers mirroring functions and allows insertion of distraction devices. We present three clinical cases of CMF surgical procedures planned a posteriori with our application: a bimaxillary realignment involving subcondylar osteotomy of the mandible and a LeFort I osteotomy, a secondary orbital reconstruction, and a mandibular reconstruction.
Computer-aided distal locking guidance of intramedullary nail by x-ray image analysis
Author(s):
Nongluk Covavisaruch;
Kamthon Simmami;
Wiwat Vatanawood;
Winyou Ratanachai M.D.
Show Abstract
Distal locking of an intramedullary nail inside a patient's broken bone is a difficult step in orthopaedic surgery. It is
hard not only because surgeons must locate the direction of and align two distal holes in 3D space using 2D x-ray
images, but also because the intramedullary nail can twist into an unknown 3D direction and position during an operation.
This process normally takes a long time and relies heavily on x-ray imaging, exposing surgeons and patients to high
doses of radiation. A longer surgical duration also increases the risk of high blood loss and prolonged anesthesia
for the patient. This research proposes a methodology to reduce the usage of x-ray radiation, and to
simplify the distal locking process, through the utilization of simple devices along with x-ray image analysis.
Accuracy assessment and interpretation for optical tracking systems
Author(s):
Andrew D. Wiles;
David G. Thompson;
Donald D. Frantz
Show Abstract
Highly accurate spatial measurement systems are among the enabling technologies that
have made image-guided surgery possible in modern operating theaters. Assessing the
accuracies of such systems is subject to much ambiguity, though. The underlying
mathematical models that convert raw sensor data into position and orientation
measurements of sufficient accuracy complicate matters by providing measurements
having non-uniform error distributions throughout their measurement volumes.
Users are typically unaware of these issues, as they are usually presented with only
a few specifications based on some "representative" statistics that were themselves
derived using various data reduction methods. As a result, much of the important
underlying information is lost. Further, manufacturers of spatial measurement
systems often choose protocols and statistical measures that emphasize the strengths
of their systems and diminish their limitations. Such protocols often do not reflect
the end users' intended applications very well. Users and integrators thus need to
understand many aspects of spatial metrology in choosing spatial measurement systems
that are appropriate for their intended applications. We examine the issues by
discussing some of the protocols and their statistical measures typically used by
manufacturers. The statistical measures for a given protocol can be affected by many
factors, including the volume size, region of interest, and the amount and type of
data collected. We also discuss how different system configurations can affect the
accuracy. Single-marker and rigid body calibration results are presented, along with
a discussion of some of the various factors that affect their accuracy. Although the
findings presented here were obtained using the NDI Polaris optical tracking systems,
many are applicable to spatial measurement systems in general.
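A toy illustration of the paper's central caution: the same error sample yields very different "accuracy" figures depending on which statistic is reported. The error distribution below is invented for illustration only.
```python
import numpy as np

# Invented single-marker position errors (mm) over a measurement volume;
# a skewed distribution, as is typical for optical trackers.
rng = np.random.default_rng(2)
errors = np.abs(rng.normal(0.0, 0.25, 5000)) + rng.exponential(0.05, 5000)

print(f"mean     : {errors.mean():.3f} mm")
print(f"RMS      : {np.sqrt(np.mean(errors**2)):.3f} mm")
print(f"95th pct : {np.percentile(errors, 95):.3f} mm")
print(f"max      : {errors.max():.3f} mm")
```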
Spatial noise of high-resolution liquid-crystal displays for medical imaging: quantitative analysis, estimation, and compensation
Author(s):
Jiahua Fan;
William J. Dallas;
Hans Roehrig;
Elizabeth A. Krupinski;
Kunal Gandhi;
Malur K. Sundareshan
Show Abstract
Recent developments in Liquid Crystal Display (LCD) technology suggest that they will replace the Cathode Ray Tube (CRT) as the most common softcopy display in the medical arena. But LCDs are far from ideal for medical imaging. One of the problems they possess is spatial noise. This paper presents work we have conducted recently on the spatial noise of high-resolution LCDs. The purpose of this work is to explore the properties of spatial noise and methods to reduce it. A high-quality CCD camera is used for physical evaluation. Spatial noise properties are analyzed and estimated from the camera images via signal modeling and processing. A noise compensation algorithm based on error diffusion was developed to process images before they are displayed. Initial results shown in this paper suggest that LCD spatial noise can be eliminated via appropriate processing.
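The abstract says the compensation algorithm is "based on error diffusion" but gives no details; the classic Floyd-Steinberg error diffusion it presumably builds on looks like this.
```python
import numpy as np

def error_diffusion(img, levels=8):
    """Floyd-Steinberg error diffusion: quantize each pixel to the nearest
    of `levels` gray levels and push the quantization error onto unvisited
    neighbors, so local mean intensity is preserved. `img` is 8-bit range."""
    out = img.astype(float).copy()
    h, w = out.shape
    step = 255.0 / (levels - 1)
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(np.round(old / step) * step, 0, 255)
            out[y, x] = new
            err = old - new
            if x + 1 < w:               out[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     out[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               out[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: out[y + 1, x + 1] += err * 1 / 16
    return out
```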
Liquid-crystal displays for medical imaging: a discussion of monochrome versus color
Author(s):
Steven L. Wright;
Ehsan Samei
Show Abstract
A common view is that color displays cannot match the performance of monochrome displays, normally used for diagnostic x-ray imaging. This view is based largely on historical experience with cathode-ray tube (CRT) displays, and does not apply in the same way to liquid-crystal displays (LCDs). Recent advances in color LCD technology have considerably narrowed performance differences with monochrome LCDs for medical applications. The most significant performance advantage of monochrome LCDs is higher luminance, a concern for use under bright ambient conditions. LCD luminance is limited primarily by backlight design, yet to be optimized for color LCDs for medical applications. Monochrome LCDs have inherently higher contrast than color LCDs, but this is not a major advantage under most conditions. There is no practical difference in luminance precision between color and monochrome LCDs, with a slight theoretical advantage for color. Color LCDs can provide visualization and productivity enhancement for medical applications, using digital drive from standard commercial graphics cards. The desktop computer market for color LCDs far exceeds the medical monitor market, with an economy of scale. The performance-to-price ratio for color LCDs is much higher than monochrome, and warrants re-evaluation for medical applications.
In-field evaluation of the modulation transfer function of electronic display devices
Author(s):
Hans Roehrig;
Jerry Gaskill;
Jiahua Fan;
Ananth Poolla;
Chadwick Martin
Show Abstract
This paper describes a CCD camera which was developed for in-field evaluation of the image quality of electronic display devices such as CRTs and LCDs. Contrary to traditional CCD cameras used for image quality evaluation, this camera does not require a sophisticated x-y-z translation stage for mounting and adjustment. Instead it is handheld and pressed with gentle pressure against the display. It is controlled by a software package which was originally developed for display calibration according to the DICOM 14 GSDF (Grayscale Standard Display Function). This software package controls the camera gain when measurements are made at different display luminance levels, displays test patterns, performs image analysis, and displays the results of the measurements and calculations. Initial work concentrated on the measurement of the MTF of a CRT, derived from the line spread function. The obtained MTF deviates only minimally from that obtained with a high-performance CCD camera on the same CRT.
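Deriving an MTF from a measured line spread function, as described above, is a standard computation; a minimal sketch, assuming a uniformly sampled LSF, follows.
```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """MTF as the normalized magnitude of the Fourier transform of the
    line spread function (LSF), sampled at pixel_pitch_mm intervals."""
    lsf = lsf - lsf.min()
    lsf = lsf / lsf.sum()                     # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                             # enforce MTF(0) = 1
    freqs = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf

# Example: a Gaussian LSF yields a Gaussian MTF
x = np.arange(-32, 32)
lsf = np.exp(-x**2 / (2 * 2.0**2))
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.1)
```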
A computer-graphic display for real-time operator feedback during interventional x-ray procedures
Author(s):
Kevin Chugh;
Petru Dinu;
Daniel R. Bednarek;
Darold Wobschall;
Stephen Rudin;
Kenneth Hoffmann;
Ron Peterson;
Ming Zeng
Show Abstract
The harmful effects of ionizing radiation, as employed in a variety of medical imaging procedures, have
been well studied and documented. To minimize risk to patients, operators must continually assess the
dose rate and cumulative dose to the patient at each area of exposure. We have developed a computer
graphic dose management display system which provides this operator feedback. The system comprises
a signal processing module which reads the state of a fluoroscopy machine, a transmission ionization
chamber for exposure measurement, and a visualization of the patient that displays the current level of
radiation intensity and accumulated dose at every location on the body. The system shows the beam
projection and orientation of the machine and color-coded dose metrics on the patient graphic model in real
time. Additionally, a database system has been incorporated to allow for recording and playback of the
entire procedure.
Enhancement method that provides direct and independent control of fundamental attributes of image quality for radiographic imagery
Author(s):
Mary E. Couwenhoven;
Robert A. Senn;
David H. Foos
Show Abstract
Image processing is used to transform raw digital radiographic image data, captured using CR (computed radiography) and DR (flat panel direct digital radiography) systems into a display-ready form. Ideally, an image-processing algorithm automatically renders an image for display, based on aims derived from observer performance studies. Establishing the rendering aim for different exam types, however, can be complex because the effects on image appearance introduced by the various steps in the rendering process are interdependent. This paper describes a new rendering algorithm that provides orthogonal control, to the first order, of five fundamental attributes of perceived image quality. These attributes are brightness, latitude, detail contrast, sharpness, and appearance of noise. The detail contrast and sharpness can be controlled in a density-dependent manner. The algorithm uses a multifrequency-band decomposition wherein the bands of the decomposition are manipulated, and the reconstructed image is passed through a tone-scale process that prepares the image for display. The rendering method is implemented in software on a workstation that enables interactive control of these image quality attributes in order to facilitate the determination of rendering aims for different exam types.
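The abstract outlines a multifrequency-band decomposition with per-band manipulation followed by a tone scale. A toy version of that pipeline, with fixed global band gains rather than the paper's density-dependent control, might look like this (raw values assumed normalized to [0, 1]).
```python
import numpy as np
from scipy import ndimage

def render(raw, band_sigmas=(1, 2, 4, 8), band_gains=(1.2, 1.1, 1.0, 0.9),
           tone_scale=lambda v: np.clip(v, 0.0, 1.0)):
    """Toy multifrequency-band rendering: split the image into a Gaussian
    base plus band-pass layers, apply a gain per band (detail contrast /
    noise appearance), recombine, then pass through a tone scale
    (brightness / latitude)."""
    levels = [raw.astype(float)]
    for s in band_sigmas:
        levels.append(ndimage.gaussian_filter(raw.astype(float), s))
    bands = [levels[i] - levels[i + 1] for i in range(len(band_sigmas))]
    base = levels[-1]
    out = base + sum(g * b for g, b in zip(band_gains, bands))
    return tone_scale(out)
```
Making each gain a function of the local base value, instead of a constant, would give the density-dependent contrast control the abstract describes.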
Geometric modeling of the temporal bone for cochlea implant simulation
Author(s):
Catherine A. Todd;
Fazel Naghdy;
Stephen O'Leary
Show Abstract
The first stage in the development of a clinically valid surgical simulator for training otologic surgeons in performing
cochlea implantation is presented. For this purpose, a geometric model of the temporal bone has been derived from a
cadaver specimen using the biomedical image processing software package Analyze (AnalyzeDirect, Inc) and its
three-dimensional reconstruction is examined. Simulator construction begins with registration and processing of a
Computed Tomography (CT) medical image sequence. Important anatomical structures of the middle and inner ear are
identified and segmented from each scan in a semi-automated threshold-based approach. Linear interpolation between
image slices produces a three-dimensional volume dataset: the geometrical model. Artefacts are effectively eliminated
using a semi-automatic seeded region-growing algorithm and unnecessary bony structures are removed. Once validated
by an Ear, Nose and Throat (ENT) specialist, the model may be imported into the Reachin Application Programming
Interface (API) (Reachin Technologies AB) for visual and haptic rendering associated with a virtual mastoidectomy.
Interaction with the model is realized with haptics interfacing, providing the user with accurate torque and force
feedback. Electrode array insertion into the cochlea will be introduced in the final stage of design.
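The semi-automatic seeded region-growing step mentioned above is a textbook algorithm; a minimal 3D implementation with a fixed intensity interval (the seed is assumed to lie inside the structure) is sketched below.
```python
import numpy as np
from collections import deque

def region_grow(volume, seed, low, high):
    """Seeded region growing on a 3D volume: starting from `seed` (z, y, x),
    collect 6-connected voxels whose intensity lies in [low, high]."""
    mask = np.zeros(volume.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            n = (z + dz, y + dy, x + dx)
            if all(0 <= n[i] < volume.shape[i] for i in range(3)) \
               and not mask[n] and low <= volume[n] <= high:
                mask[n] = True
                queue.append(n)
    return mask
```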
Modeling the functional repair of nervous tissue in spinal cord injury
Author(s):
Sara M. Mantila;
Jon J. Camp;
Aaron J. Krych;
Richard A. Robb
Show Abstract
Functional repair of traumatic spinal cord injury (SCI) is one of the most challenging goals in modern medicine. The annual incidence of SCI in the United States is approximately 11,000 new cases. The prevalence of people in the U.S. currently living with SCI is approximately 200,000. Exploring and understanding nerve regeneration in the central nervous system (CNS) is a critical first step in attempting to reverse the devastating consequences of SCI. At Mayo Clinic, a preliminary study of implants in the transected rat spinal cord model demonstrates potential for promoting axon regeneration. In collaborative research between neuroscientists and bioengineers, this procedure holds promise for solving two critical aspects of axon repair: providing a resorbable structural scaffold to direct focused axon repair, and delivering relevant signaling molecules necessary to facilitate regeneration. In our preliminary study, regeneration in the rat's spinal cord was modeled in three dimensions utilizing an image processing software system developed in the Biomedical Imaging Resource at Mayo Clinic. Advanced methods for image registration, segmentation, and rendering were used. The raw images were collected at three different magnifications. After image processing, the individual channels in the scaffold, axon bundles, and macrophages could be identified. Several axon bundles could be visualized and traced through the entire volume, suggesting axonal growth throughout the length of the scaffold. Such information could potentially allow researchers and physicians to better understand and improve the nerve regeneration process for individuals with SCI.
Development of surgical simulator based on FEM and deformable volume-rendering
Author(s):
Yoshitaka Masutani;
Yusuke Inoue;
Koichi Ishii;
Nori Kumai;
Fumihiko Kimura;
Ichiro Sakuma
Show Abstract
In this paper, we describe our novel surgical simulation system, which provides FEM-based real-time deformation, interaction using a haptic device, and high-quality visualization of the liver and inner blood vessel structures based on 3D texture-based deformable volume rendering. Our software system consists mainly of four components running as independent processes and threads: (1) 3D texture-based volume rendering, (2) haptic device input/output, (3) FEM computation, and (4) inter-process communication management. Tetrahedral meshes for FEM computation and volume rendering are updated for every frame of image display and deformation. For faster FEM computation, we employed the central-difference method for forced displacement calculation. We implemented our system on a dual 3 GHz Pentium Xeon PC workstation with 1 GB RAM, a video card with an nVIDIA Quadro4 900XGL GPU, and the Windows XP Professional OS. As the haptic device, a PHANToM Desktop was employed. We used liver data of 128x128x128 matrix size as the 3D texture data, segmented from an abdominal X-ray CT angiography data set and colored in grayscale and dual-indexed coloring based on radial basis function interpolation. Using a window size of 480, we obtained a refresh rate of 67 frames/sec for image display and 16 msec for haptic device output. Our preliminary study shows the feasibility of surgical simulators based on FEM and deformable volume rendering.
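The central-difference method mentioned for fast FEM computation is the standard explicit time integrator; one step for a linear(ized) system, with a lumped (diagonal) mass matrix assumed for speed, is simply:
```python
import numpy as np

def central_difference_step(u, u_prev, m_inv_diag, K, f_ext, dt):
    """One explicit central-difference step for M u'' + K u = f_ext
    (damping omitted for brevity):
        u_next = 2 u - u_prev + dt^2 * M^-1 (f_ext - K u)
    m_inv_diag: inverse of a lumped (diagonal) mass matrix, shape (N,)."""
    accel = m_inv_diag * (f_ext - K @ u)
    return 2.0 * u - u_prev + dt * dt * accel
```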
3D imaging and modeling of the middle and inner ear
Author(s):
Fopefolu O. Folowosele;
Jon J. Camp;
Robert H. Brey;
John I. Lane;
Richard A. Robb
Show Abstract
The bones of the middle ear are the smallest bones in the body and are among the most complicated functionally. They are located within the temporal bone - rendering them difficult to access and study. An accurate 3D model can offer an excellent illustration of the complex spatial relationships between the ossicles and the nerves and muscles with which they intertwine. The overall objective was to create an educational module for learning the anatomy of the outer, middle and inner ear from MRI data. Such a teaching tool will provide surgeons, radiologists and audiologists with a detailed self-guided tour of ear anatomy. MRI images of the auditory canal were acquired using a 9 Tesla MR scanner. The acquired images were reformatted along obliquely oriented axes to obtain the desired orientation relative to anatomical planes. An automated segmentation algorithm was applied to the MRI data to separate the cochlea, auditory nerve and semi-circular canals in the inner ear. Semi-automated segmentation was used to separate the middle ear bones. This was necessary in order to detach the malleus from the incus and the tympanic membrane from the malleus, as the boundaries between these structures were not sufficiently distinct in the data. Each structure became an independent object to facilitate its interactive manipulation. Different angles of view of the 3D structures were rendered illustrating the anatomic pathway starting at the tympanic membrane, through the middle ear bones, to the semi-circular canals, cochlea and auditory nerve in the inner ear.
Image-guided laser thermal ablation therapy: a comparison of modeled tissue damage using interventional MR temperature images with tissue response
Author(s):
Michael S. Breen;
Kim Butts;
Lili Chen;
David L. Wilson
Show Abstract
Solid tumors and other pathologies can be treated using laser thermal ablation under interventional magnetic resonance imaging (iMRI) guidance. We developed a model to predict cell death from MR thermometry measurements and applied it to in vivo rabbit brain data. We aligned post-ablation MR lesion images to gradient echo images, from which temperature is derived, using a mutual information registration method. We used the outer boundary of the hyperintense rim in the post-ablation MR lesion image as the boundary for cell death, as verified from histology. Model parameters were simultaneously estimated using an iterative optimization algorithm applied to every voxel of interest in 185 images from multiple experiments having various temperature histories. The model gave a voxel sensitivity and specificity of 86.9% and 98.8%, respectively. Mislabeled voxels typically were within one voxel of the segmented necrotic boundary. This is good evidence that iMRI temperature maps can be used with our model to predict therapeutic regions in real time.
3D prostate shape modeling from sparsely acquired 2D images using deformable models
Author(s):
Ismail B. Tutar;
Sayan Dev Pathak;
Yongmin Kim
Show Abstract
Intraoperative quality assessment during prostate brachytherapy could improve the clinical outcome by ensuring the delivery of a prescribed tumoricidal radiation dose to the entire prostate gland. Accurate prostate boundary segmentation is an essential first step towards this. Classical segmentation techniques fail to generate a reliable edge map in ultrasound images. Modeling the 3D prostate shape in a deformable model framework could lead to more reliable prostate segmentation since missing information in some parts of the images due to the indistinct prostatic margins could be reconstructed using information in adjacent slices, and the resulting boundary elements could be integrated into a coherent mathematical description. We first experimented with deformable superquadrics to generate 3D surfaces that match the manually-outlined prostate contours. The superquadrics were found to capture the global shape, but had limited capability of modeling local shape variations. Then, closed and tubular surfaces were generated using Fourier descriptors to fit the prostate data. The modeling errors were compared with the disagreement between manual outlines by three experts. The preliminary results from 12 patient data sets show that the Fourier descriptors are capable of generating tubular surfaces that closely match the manual outlines. The minimum number of parameters required to reconstruct a tubular prostate surface with a tolerable error margin is 52.
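The abstract does not specify the exact Fourier parameterization; for a single 2D contour, the familiar truncated complex Fourier descriptor reconstruction conveys the idea of representing a prostate cross-section with few parameters.
```python
import numpy as np

def fourier_smooth(contour_xy, n_harmonics):
    """Represent a closed 2D contour by its lowest Fourier coefficients
    and reconstruct a smoothed version (truncated Fourier descriptors)."""
    z = contour_xy[:, 0] + 1j * contour_xy[:, 1]   # complex representation
    coeffs = np.fft.fft(z)
    keep = np.zeros_like(coeffs)
    keep[:n_harmonics + 1] = coeffs[:n_harmonics + 1]   # DC + positive freqs
    if n_harmonics > 0:
        keep[-n_harmonics:] = coeffs[-n_harmonics:]     # negative freqs
    z_rec = np.fft.ifft(keep)
    return np.column_stack([z_rec.real, z_rec.imag])

# A noisy circle reconstructed from a handful of harmonics
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
noisy = np.column_stack([np.cos(t), np.sin(t)]) \
        + np.random.default_rng(3).normal(0, 0.05, (128, 2))
smooth = fourier_smooth(noisy, n_harmonics=4)
```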
Indirect interpolation of subcortical structures in the Talairach-Tournoux atlas
Author(s):
Ihar Volkau;
Aamer Aziz M.D.;
Wieslaw L. Nowinski
Show Abstract
We suggest a method to reconstruct intermediate slices of 3D subcortical structures given by sparse slices (2D cross-sections), using shape-based interpolation. This algorithm overcomes some limitations of previous interpolation methods. The method can find the intermediate cross-section of a 3D structure with discontinuities and can work with non-overlapping contours on different cross-sections. It can be considered "indirect interpolation", because auxiliary information about the structure's shape, based on the minimum distance to the contour from each point of the image, is used. Variation of the parameters (bias, connectivity, tension) can help adjust the produced contour to the expected shape. We apply this approach to interpolate subcortical structures in the Talairach-Tournoux brain atlas.
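The "direct" shape-based interpolation that this paper builds on, and whose limitations it addresses, blends signed distance maps of neighboring cross-sections; a minimal version, assuming boolean slice masks, is below.
```python
import numpy as np
from scipy.ndimage import distance_transform_edt as edt

def signed_distance(mask):
    """Signed distance map: negative inside the object, positive outside."""
    return edt(~mask) - edt(mask)

def interpolate_slice(mask_a, mask_b, alpha=0.5):
    """Classic shape-based interpolation: blend the signed distance maps
    of two binary cross-sections and threshold at zero to obtain the
    in-between contour."""
    d = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return d < 0
```
As the abstract notes, this baseline can fail for strongly non-overlapping contours, which motivates the authors' indirect variant.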
Segmented images and 3D images for studying the anatomical structures in MRIs
Author(s):
Yong Sook Lee;
Min Suk Chung;
Jae Hyun Cho
Show Abstract
For identifying the pathological findings in MRIs, the anatomical structures in MRIs should be identified in advance.
For studying the anatomical structures in MRIs, an educational tool that includes the horizontal, coronal, and sagittal MRIs
of the entire body, corresponding segmented images, 3D images, and browsing software is necessary. Such an educational
tool, however, is hard to obtain. Therefore, in this research, such an educational tool which helps medical students and
doctors study the anatomical structures in MRIs was made as follows.
A healthy, young Korean male adult with a standard body shape was selected. Six hundred thirteen horizontal MRIs of the
entire body were scanned and transferred to a personal computer. Sixty anatomical structures in the horizontal MRIs
were segmented to make horizontal segmented images. Coronal, sagittal MRIs and coronal, sagittal segmented images
were made. 3D images of anatomical structures in the segmented images were reconstructed by a surface rendering
method. Browsing software for the MRIs, segmented images, and 3D images was composed.
This educational tool, which includes horizontal, coronal, and sagittal MRIs of the entire body, corresponding segmented images,
3D images, and browsing software, is expected to help medical students and doctors study anatomical structures in MRIs.
Visualization of multivariate image data using image fusion and perceptually optimized color scales based on sRGB
Author(s):
Axel Saalbach;
Thorsten Twellmann;
Tim Nattkemper;
Mark White;
Michael Khazen;
Martin O. Leach
Show Abstract
Due to the rapid progress in medical imaging technology, analysis of multivariate image data is receiving increased interest. However, visual exploration of such data is a challenging task since it requires the integration of information from many different sources, which usually cannot be perceived at once by an observer.
Image fusion techniques are commonly used to obtain information from multivariate image data, while psychophysical aspects of data visualization are usually not considered. Visualization is typically achieved by means of device-derived color scales. With respect to psychophysical aspects of visualization, more sophisticated color mapping techniques based on device-independent (and perceptually uniform) color spaces like CIELUV have been proposed. Nevertheless, the benefit of these techniques is limited by the fact that they require complex color space transformations to account for device characteristics and viewing conditions.
In this paper we present a new framework for the visualization of multivariate image data using image fusion and color mapping techniques. In order to overcome problems of consistent image presentations and color space transformations, we propose perceptually optimized color scales based on CIELUV in combination with sRGB (IEC 61966-2-1) color specification. In contrast to color definitions based purely on CIELUV, sRGB data can be used directly under reasonable conditions, without complex transformations and additional information. In the experimental section we demonstrate the advantages of our approach in an application of these techniques to the visualization of DCE-MRI images from breast cancer research.
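The sRGB (IEC 61966-2-1) transfer function on which the proposed color scales rely is fully standardized, so it can be stated exactly:
```python
import numpy as np

def linear_to_srgb(v):
    """IEC 61966-2-1 sRGB transfer function: map linear-light values in
    [0, 1] to nonlinear sRGB values in [0, 1]."""
    v = np.asarray(v, dtype=float)
    return np.where(v <= 0.0031308,
                    12.92 * v,
                    1.055 * np.power(v, 1 / 2.4) - 0.055)
```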
3D tumor measurement in cone-beam CT breast imaging
Author(s):
Zikuan Chen;
Ruola Ning
Show Abstract
Cone-beam CT breast imaging provides a digital volume representation of a breast. With a digital breast volume, the immediate task is to extract the breast tissue information, especially for suspicious tumors, preferably in an automatic manner or with minimal user interaction. This paper reports a program for three-dimensional breast tissue analysis. It consists of volumetric segmentation (by global thresholding), subsegmentation (connection-based separation), and volumetric component measurement (volume, surface, shape, and other geometrical specifications). A combination scheme of multi-thresholding and binary volume morphology is proposed to rapidly determine the surface gradients, which may be interpreted as the surface evolution (outward growth or inward shrinkage) of a tumor volume. This scheme is also used to optimize the volumetric segmentation. Given a binary volume, we decompose the foreground into components according to spatial connectedness. Since this decomposition procedure is performed after volumetric segmentation, it is called subsegmentation. Subsegmentation allows each component to be visualized and measured in the whole support space without interference from the others. Once the tumor component is identified, we measure the following specifications: volume, surface area, roundness, elongation, aspect, star-shapedness, and location (centroid). A 3D morphological operation is used to extract the cluster shell and, by delineating the corresponding volume from the grayscale volume, to measure the shell stiffness. This 3D tissue measurement is demonstrated with a tumor-bearing breast specimen (a surgical specimen).
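The threshold-then-subsegment-then-measure pipeline maps naturally onto connected-component labeling; a minimal sketch (hypothetical voxel size; volume and centroid only) follows.
```python
import numpy as np
from scipy import ndimage

def measure_components(volume, threshold, voxel_mm=(0.5, 0.5, 0.5)):
    """Global thresholding, connection-based 'subsegmentation', and
    per-component measurement (volume and centroid, for brevity)."""
    binary = volume > threshold
    labels, n = ndimage.label(binary)   # default structure: 6-connectivity in 3D
    voxel_vol = float(np.prod(voxel_mm))
    results = []
    for idx in range(1, n + 1):
        comp = labels == idx
        results.append({
            "label": idx,
            "volume_mm3": comp.sum() * voxel_vol,
            "centroid_vox": ndimage.center_of_mass(comp),
        })
    return results
```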
Visible Korean human images on MIOS system
Author(s):
Donghwan Har;
Young-Ho Son;
Sung-Won Lee;
Jung Beom Lee
Show Abstract
Basically, photography combines the attributes of reason, which encompass the scientific knowledge of optics, physics and
chemistry, with the delicate sensibility of individuals. Ultimately, the photograph pursues “effective
communication.” Communication is “mental and psychosocial exchange mediated by material symbols, such as
language, gesture and picture,” and it has four components: sender, receiver, message and channel. Recently, a
change in the method of communication has been on the rise in the field of art and culture, including photography. Until now,
communication was mainly achieved in the form of messages unilaterally transferred from senders to receivers.
Nowadays, however, an interactive method, in which the boundary between sender and receiver is obscure, is on the increase. Such a new
communication method may be said to have arisen from the desire of the art and culture communities to pursue something new
and creative against the background of a wide variety of information media.
The multi-view screen we developed is also a communication tool capable of effective interaction using photos or
motion pictures. The viewer sees different images at different locations. The screen utilizes the basic lenticular
characteristics that have long been used in printing. Each motion picture is displayed on the screen without crosstalk.
The multi-view screen differs in many aspects from other display media, and is expected to be utilized in many fields,
including advertisement, display and education.
Generalized dynamic range compression algorithm for visualization of chest CT images
Author(s):
Shoji Hara;
Kazuo Shimura;
Takefumi Nagata
Show Abstract
We formulated a new dynamic range compression (DRC) processing algorithm that can be applied to chest CT images, based on an existing DRC processing algorithm. The new algorithm, which we named “Generalized DRC processing,” is categorized as shift-variant image processing and can explicitly utilize the results of anatomical region recognition. In addition, the application of the method is not restricted to DRC. Owing to its shift-variant characteristics, the method can enhance high-frequency signals in the lung only; therefore, higher image quality than with conventional unsharp masking (USM) is obtained. When Generalized DRC processing is used for chest CT images, the representation of soft tissues is improved by roughly recognizing the lung region, without affecting the density and contrast of the lung region. Unlike the conventional double-gamma method, our method significantly reduces artifacts. In recent years, the reading volume of chest CT images has been greatly increasing. In view of this, we propose this method, which reduces the number of windowing operations on a viewer. We believe that this will improve total reading efficiency and, especially, will allow more efficient lung cancer CT screening.
Toward realistic radiofrequency ablation of hepatic tumors: 3D simulation and planning
Author(s):
Caroline Villard;
Luc Soler;
Afshin Gangi;
Didier Mutter;
Jacques Marescaux
Show Abstract
Radiofrequency ablation (RFA) has become an increasingly used technique in the treatment of patients with unresectable hepatic tumors. Evaluation of vascular architecture, post-RFA tissue necrosis prediction, and the choice of a suitable needle placement strategy using conventional radiological techniques remain difficult. In an attempt to enhance the safety of RFA, a 3D simulator and treatment planning tool that simulates the necrosis of the treated area and proposes an optimal placement for the needle has been developed.
From enhanced spiral CT scans with 2 mm cuts, 3D reconstructions of patients with liver metastases are automatically generated. Virtual needles can be added to the 3D scene, together with their corresponding zones of necrosis, displayed as meshed spheroids representing the 60°C isosurface. The simulator takes into account the cooling effect of local vessels greater than 3 mm in diameter, making necrosis shapes more realistic. Using a voxel-based algorithm, RFA spheroids are deformed following the shape of the vessels, extended by an additional cooled area. This operation is performed in real time, allowing updates while the needle is adjusted. This makes it possible to observe whether the considered needle placement strategy would burn the whole cancerous zone.
Planned needle positioning can also be generated automatically by the software to produce complete destruction of the tumor with a 1 cm margin, while sparing as much healthy liver as possible and avoiding all major extrahepatic and intrahepatic structures. If desired, the radiologist can select an insertion window for the needle on the skin, restricting the search for the trajectory.
Development of remote surgical navigation and biopsy needle guidance system using Open-MRI and high-speed network
Author(s):
Yasuhiko Okura;
Yasushi Matsumura M.D.;
Shigeki Kuwata;
Hiroshi Takeda M.D.
Show Abstract
This study describes a remote surgical guidance and navigation system developed for surgery using “Open-MRI” and a
high-speed network. We connected Osaka University Hospital and Kawasaki Hospital, where an Open-MRI is deployed,
with a high-speed IP-over-ATM network. The distance between the two hospitals is approximately 50 km. Two
video cameras were installed with an angle of 40 degrees on an open-MRI gantry to obtain intraoperative images.
Two pairs of CODEC (AD/DA converter) were equipped on the network to transfer both images and sound in real
time. A pointer system to indicate a region on an image was also developed. MRI images obtained by Open-MRI
were transferred to a 3D workstation in Osaka University Hospital. The system was designed for a senior surgeon in
Osaka University to advise regarding accurate needle direction for a remote patient by checking the reconstructed 3D
images and schemata shown by the navigation software. The schemata were also superimposed on intraoperative
images from the two cameras, and the superimposed images were sent back to Kawasaki Hospital. This system allowed
a surgeon in the operating room at Kawasaki Hospital to accurately view the navigation schemata, superimposed on the
intraoperative images, under the supervision of a senior surgeon in a remote university hospital. The pointer system
allowed both doctors to share intraoperative images during a virtual-real surgical operation. A successful biopsy
case using this newly developed system illustrates its effectiveness.
FEM-based simulation of tumor growth in medical image
Author(s):
Shuqian Luo;
Ying Nie
Show Abstract
Brain models have found wide application in areas including surgical-path planning, image-guided surgery systems, and virtual medical environments. In comparison with the modeling of normal brain anatomy, the modeling of anatomical abnormalities remains rather weak. In particular, there are considerable differences between abnormal brain images and normal brain images due to the growth of a brain tumor. In order to find the correspondence between abnormal brain images and normal ones, it is necessary to estimate or simulate the brain deformation.
In this paper, a deformable model of brain tissue with both geometric and physical nonlinear properties, based on the finite element method, is presented. It is assumed that brain tissue is a nonlinearly elastic solid obeying the equations of an incompressible neo-Hookean model. We incorporate the physical inhomogeneity of brain tissue into our FEM model. The nonlinearity of the model requires the deformation to be solved with an iterative method; an updated Lagrangian formulation is used for the iteration. To assure convergence of the iteration, we adopt the fixed arc-length method.
This model has advantages over linear models in its more realistic tissue properties and its capability of simulating more severe brain deformation.
The inclusion of second-order displacement terms in the balance and geometry functions allows for the estimation of more severe brain deformation. We referenced the model presented by Stelios K to ascertain the initial position of the tumor as well as for our tumor model definition. Furthermore, we extend it from 2D to 3D and simplify the calculation process.
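For reference, the strain-energy density of the incompressible neo-Hookean model named in the abstract takes the standard form (the paper's exact constants are not given):
```latex
W = C_1 \,(I_1 - 3), \qquad
I_1 = \operatorname{tr}(\mathbf{C})
    = \operatorname{tr}\!\bigl(\mathbf{F}^{\mathsf{T}}\mathbf{F}\bigr), \qquad
\det \mathbf{F} = 1 \quad \text{(incompressibility)}
```
where F is the deformation gradient and C1 a material constant.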
Haptic interface of web-based training system for interventional radiology procedures
Author(s):
Xin Ma;
Yiping Lu;
KiaFock Loe;
Wieslaw L. Nowinski
Show Abstract
Existing web-based medical training systems and surgical simulators can provide an affordable and accessible medical training curriculum, but they seldom offer the trainee realistic and affordable haptic feedback, and therefore cannot offer a suitable practice environment. In this paper, a haptic solution for interventional radiology (IR) procedures is proposed. The system architecture of a web-based training system for IR procedures is briefly presented first. Then, the mechanical structure, the working principle and the application of a haptic device are discussed in detail. The haptic device works as an interface between the training environment and the trainees and is placed at the end-user side. With the system, the user can practice interventional radiology procedures over the web (navigating catheters, inflating balloons, deploying coils and placing stents) and receive surgical haptic feedback in real time.
Integrated registration and visualization of MR and PET brain images
Author(s):
Helen Hong;
Heewon Kye;
Yeong-Gil Shin
Show Abstract
Different imaging modalities give insight to vascular, anatomical and functional information that assists diagnosis and
treatment planning in medicine. Depending on the clinical requirement, it is often not sufficient to consider anatomical
and functional information separately but to superimpose images of different modalities. However, superimposition alone often
provides unreliable results since functional modalities have low sampling resolution. In this paper, we present a novel
technique for improving image fusion quality and speed by integrating voxel-based registration and consecutive
visualization. In the first part, we discuss a voxel-based registration using mutual information combined with a gradient
measure to consider spatial information in the images and thereby provide a much more general and reliable measure. In
the second part, we propose a volume rendering technique for generating high-quality images rapidly without
specialized hardware. Fusion of MR and PET brain images is presented for visual validation of the proposed methods.
Our method offers a robust technique to fuse anatomical and functional modalities, allowing direct functional-to-
structural correlation analysis.
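The core mutual-information measure referred to above (without the authors' additional gradient term) can be estimated from a joint intensity histogram:
```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images from their joint intensity
    histogram: MI = sum p(a,b) * log( p(a,b) / (p(a) p(b)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of image a
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of image b
    nz = p_ab > 0                           # avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```
A registration loop would maximize this value over the transform parameters.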
Registration, segmentation, and visualization of confocal microscopy images of arterial thrombus
Author(s):
Ishita Garg;
Jon J. Camp;
Robert McBane;
Waldemar Wysokinski;
Richard A. Robb
Show Abstract
Arterial thrombosis can cause death or paralysis of an organ as a thrombus migrates to and localizes in different parts of the body. Massive pulmonary emboli cause 50,000 deaths per year. The cause and origin of arterial thrombosis are neither well understood nor objectively characterized. The objective of this study was to investigate the microscopic structure of arterial thrombus to better understand this pathology. Confocal microscopy cross-sectional images of an embolized thrombus in the coronary artery were obtained. Adjacent pairs of sections were stained with two different stains, for fibrin and CD61, to reveal mutually complementary information. The very thin adjacent slices were treated as one slice.
Adjacent slices were registered by a combination of manual and automatic techniques using Analyze software developed in the Biomedical Imaging Resource at Mayo. After smoothing the images with a median filter, the CD61 and fibrin stained section images were used together to segment the tissues by multispectral classification. The image volume was classified into background, platelets and surrounding tissue, and thrombus. The segmented volume was then rendered for visualization and analysis of structure of the thrombus in three dimensions. Preliminary results are promising. Such correlation of structural and histological information may be helpful in determining the origin of the thrombus.
GMIP: generalized maximum intensity projection
Author(s):
George J. Grevera;
Jayaram K. Udupa
Show Abstract
We describe a generalization of the ubiquitous Maximum Intensity Projection (MIP) method of volume visualization, the Generalized MIP (GMIP). It combines the classification step employed by volume and surface rendering with the maximum intensity projection of the MIP algorithm. We compare and contrast this new algorithm with traditional surface and volume rendering methods on a variety of data sets and demonstrate how useful visualization of structures often considered to be challenging to visualize can be efficiently created without explicit hard or fuzzy segmentation. Briefly, this new technique allows the user to arbitrarily specify a range of intensity values of interest which are then mapped to the highest intensities in the scene while other intensity values are assigned other, lower values. This allows the user to not only restrict the range of values that are actually projected but to modify them as well. Whereas the MIP method projects the maximum intensity value along a particular line, the GMIP uses a function to transform the intensity values first. The maximum of these transformed intensity values along a particular line is then projected in the GMIP. This new method is computationally fast and is a conceptually straightforward extension of the familiar MIP method but provides a great deal of new capabilities. For example, GMIP allows one to render soft tissue structures in CT and MRI and even low intensity structures such as airway in CT, and bone in MRI. We compare and contrast this new method with other well known volume visualization methods.
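The generalization is easy to state in code: conventional MIP is the special case where the intensity transform is the identity. The band_emphasis transform below is an illustrative example, not the paper's function.
```python
import numpy as np

def gmip(volume, transform, axis=0):
    """Generalized MIP: transform intensities voxelwise, then take the
    maximum of the transformed values along each ray (here, an axis)."""
    return np.max(transform(volume.astype(float)), axis=axis)

def band_emphasis(lo, hi, top=255.0):
    """Example transform: map the intensity band of interest [lo, hi] to
    the top of the output range and suppress everything else, so that
    e.g. soft tissue is projected instead of bright bone."""
    def f(v):
        return np.where((v >= lo) & (v <= hi),
                        top - (hi - v),          # in-band: near the top, order kept
                        0.1 * np.clip(v, 0, lo)) # out-of-band: pushed low
    return f

# Conventional MIP is the identity-transform special case:
# mip = gmip(volume, lambda v: v)
```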
Development of a percutaneous optical imaging system for tracking vascular gene expression: a feasibility study using human tissuelike phantoms
Author(s):
Sourav Kumar Kar;
Ananda Kumar;
Xiaoming Yang
Show Abstract
Noninvasive tracking of vascular gene delivery and expression forms an important part of successfully implementing vascular gene therapy methods for the treatment of atherosclerosis and various cardiovascular disorders. While ultrasound and MR imaging have shown promise in monitoring gene delivery to the vasculature, optical imaging has shown promise for tracking gene expression. Optical imaging using bioreporter genes such as Green Fluorescent Protein (GFP), Red Fluorescent Protein (RFP) and Luciferase to track and localize the therapeutic gene has provided an in vivo detection method for the process. The use of GFP and RFP entails the detection of the fluorescent signal they emit on excitation with light of an appropriate wavelength. We have developed a novel percutaneous optical imaging system that may be used for in vivo tracking of vascular fluorescent gene expression in deep-seated vessels. It is based on the detection of the fluorescent signal emitted from GFP-tagged cells. This phantom study was carried out to investigate the performance of the optical imaging system and to identify possibilities for improvement.
Accelerating virtual surgery simulation for congenital aural atresia
Author(s):
Bin Li;
Zigang Wang;
Eric Smouha;
Dongqing Chen;
Zhengrong Liang
Show Abstract
In this paper, we propose a new efficient implementation for simulation of surgery planning for congenital aural
atresia. We first applied a 2-level image segmentation scheme to classify the inner ear structures. Based on this, several
3D texture volumes were generated and sent to the graphics pipeline on a PC platform. By exploiting the texture-mapping
capability of the PC graphics/video board, a 3D image was created with high quality, showing the accurate
spatial relationships of the complex surgical anatomy of congenitally atretic ears. Furthermore, we exploited the
graphics hardware-supported per-fragment functions to perform geometric clipping on the 3D volume data to
interactively simulate the surgical procedure. The result was very encouraging.
Mitotic cell recognition with hidden Markov models
Author(s):
Greg M. Gallardo;
Fuxing Yang;
Fiorenza Ianzini;
Michael Mackey;
Milan Sonka
Show Abstract
This work describes a method for detecting mitotic cells in time-lapse microscopy images of live cells. The image sequences are from the Large Scale Digital Cell Analysis System (LSDCAS) at the University of Iowa. LSDCAS is an automated microscope system capable of monitoring 1000 microscope fields over time intervals of up to one month. Manual analysis of the image sequences can be extremely time consuming. This work is part of a larger project to automate the image sequence analysis. A three-step approach is used. In the first step, potential mitotic cells are located in the image sequences. In the second step, object border segmentation is performed with the watershed algorithm. Objects in adjacent frames are grouped into object sequences for classification. In the third step, the image sequences are converted to feature vector sequences. The feature vectors contain spatial and temporal information. Hidden Markov Models (HMMs) are used to classify the feature vector sequences into dead cells, cell edges, and dividing cells. Discrete and continuous HMMs were trained on 500 sequences. The discrete HMM recognition rates were 62% for dead cells, 77% for cell edges, and 75% for dividing cells. The continuous HMM results were 68%, 88% and 77%.
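Classification with per-class HMMs reduces to evaluating the sequence likelihood under each model (dead cell, cell edge, dividing cell) and taking the argmax. A scaled forward-algorithm evaluator for a discrete HMM, with model parameters assumed already trained, is sketched below.
```python
import numpy as np

def log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm for a discrete HMM.
    obs: sequence of observation symbol indices
    pi : (K,) initial state probabilities
    A  : (K, K) row-stochastic transition matrix
    B  : (K, M) emission matrix
    Returns log P(obs | model)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha = alpha / s
    return log_p

# Classification: best = argmax over models of log_likelihood(obs, *model)
```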
Color-coded depth information in volume-rendered magnetic resonance angiography
Author(s):
Orjan Smedby;
Karin Edsborg;
John Henriksson
Show Abstract
Magnetic Resonance Angiography (MRA) and Computed Tomography Angiography (CTA) data are usually presented using Maximum Intensity Projection (MIP) or Volume Rendering Technique (VRT), but these often fail to demonstrate a stenosis if the projection angle is not suitably chosen. In order to make vascular stenoses visible in projection images independent of the choice of viewing angle, a method is proposed to supplement these images with colors representing the local caliber of the vessel. After preprocessing the volume image with a median filter, segmentation is performed by thresholding, and a Euclidean distance transform is applied. The distance to the background from each voxel in the vessel is mapped to a color. These colors can either be rendered directly using MIP or be presented together with opacity information based on the original image using VRT. The method was tested in a synthetic dataset containing a cylindrical vessel with stenoses in varying angles. The results suggest that the visibility of stenoses is enhanced by the color information. In clinical feasibility experiments, the technique was applied to clinical MRA data. The results are encouraging and indicate that the technique can be used with clinical images.
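A minimal sketch of the proposed pipeline (median filter, threshold, Euclidean distance transform, projection) is given below; projecting the caliber found at the brightest voxel along each ray is one simple variant of the color-coded MIP described.
```python
import numpy as np
from scipy.ndimage import distance_transform_edt, median_filter

def caliber_colored_mip(volume, threshold, axis=0):
    """Segment the vessel by thresholding (after median filtering), compute
    the Euclidean distance to the background for every vessel voxel (a local
    caliber estimate), and project both the maximum intensity and the
    caliber at the brightest voxel along each ray."""
    filtered = median_filter(volume, size=3)
    vessel = filtered > threshold
    caliber = distance_transform_edt(vessel)  # pass sampling= for anisotropic voxels
    idx = np.argmax(filtered, axis=axis)      # position of the max along each ray
    mip = np.take_along_axis(filtered, np.expand_dims(idx, axis), axis=axis).squeeze(axis)
    cal = np.take_along_axis(caliber, np.expand_dims(idx, axis), axis=axis).squeeze(axis)
    return mip, cal   # map `cal` through a colormap, modulate brightness by `mip`
```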
Projection models for stereo display of chest CT
Author(s):
Xiao Hui Wang;
Walter F. Good;
Carl R. Fuhrman;
Jules H. Sumkin;
Cynthia A. Britton;
Thomas E. Warfel;
David Gur
Show Abstract
The widespread adoption of chest CT for lung cancer screening will greatly increase the workload of chest radiologists. Contributing to this workload is the need for radiologists to differentiate between localized nodules and slices through linear structures such as blood vessels, in each of a large number of slices acquired for each subject. To increase efficiency and accuracy, thin slices can be combined to provide thicker slabs for presentation, but the resulting superposition of tissues can make it more difficult to detect and characterize smaller nodules. The stereo display of a stack of thin CT slices may be able to clarify three-dimensional structures, while avoiding the loss of resolution and ambiguities due to tissue superposition.
The current work focuses on the development and evaluation of stereo projection models that are appropriate for chest CT. As slices are combined into a three-dimensional structure, maximum image intensity, which is limited by the display, must be preserved. But compositing methods that effectively average slices together typically reduce the contrast of subtle nodules. For monoscopic viewing, orthographic maximum-intensity projection (MIP) of thick slabs has been employed to overcome this effect, but this method provides no information about depth or the geometrical relationships between structures. Our comparison of various rendering options indicates that a stereographic perspective transformation, used in conjunction with a compositing model that combines maximum-intensity projection with an appropriate brightness weighting function, shows promise for this application. The main drawback uncovered was that, for the images used in this study, the lung volume was undersampled in the z-direction, resulting in certain unavoidable image artifacts.
Automatic path-planning algorithm maximizing observation area for virtual colonoscopy
Author(s):
Dong-Goo Kang;
Jong Beom Ra
Show Abstract
To navigate a colon lumen, a proper camera path should be generated prior to the navigation. Conventional path-planning algorithms try to find an accurate and robust centerline, assuming that the centerline of the colon lumen is the best choice for the camera path. For efficient and reliable navigation, however, the centerline may not minimize the unobservable area along the camera path. In this paper, we first define a new coverage measure reflecting temporal visibility. Based on this measure, a fast and efficient path-planning algorithm is proposed to increase the visibility coverage. The proposed algorithm first simplifies the object surface using the determined centerline. Then, camera view positions and directions are estimated to maximize the observable surface. Simulation results show that the proposed algorithm provides a better coverage rate than the conventional one without a significant increase in computation.
A novel machine interface for scaled telesurgery
Author(s):
Samuel T. Clanton;
David C. Wang;
Yoky Matsuoka;
Damion M. Shelton;
George D. Stetten
Show Abstract
We have developed a system architecture that allows a surgeon to employ direct hand-eye coordination to conduct medical procedures in a remote microscopic environment. In this system, a scaled real-time video image of the workspace of a small robotic arm, taken from a surgical microscope camera, is visually superimposed on the natural workspace of a surgeon via a half-silvered mirror. The robot arm holds a small tool, such as a microsurgical needle holder or microsurgical forceps, and the surgeon grasps a second tool connected to a position encoder, in this case a second robot arm. The views of the local and remote environments are superimposed such that the tools in the local and remote environments are visually merged. The position encoder and small robot arm are linked such that movement of the tool by the operator produces scaled-down movement of the small robot tool. To the surgeon, it seems that the tool he or she is holding is moving and interacting with the remote environment, which is actually microscopic and at a distance. Our current work uses a position-controlled master-slave linkage of two 3-degree-of-freedom haptic devices, and we are pursuing a 6-to-7-degree-of-freedom master-slave linkage to produce more realistic interaction.
Platform for intraoperative analysis of video streams
Author(s):
Logan Clements;
Robert L. Galloway Jr.
Show Abstract
Interactive, image-guided surgery (IIGS) has proven to increase the specificity of a variety of surgical procedures. However, current IIGS systems do not compensate for changes that occur intraoperatively and are not reflected in preoperative tomograms. Endoscopes and intraoperative ultrasound, used in minimally invasive surgery, provide real-time (RT) information in a surgical setting. Combining the information from RT imaging modalities with traditional IIGS techniques will further increase surgical specificity by providing enhanced anatomical information. In order to merge these techniques and obtain quantitative data from RT imaging modalities, a platform was developed to allow both the display and processing of video streams in RT. Using a Bandit-II CV frame grabber board (Coreco Imaging, St. Laurent, Quebec) and the associated library API, a dynamic link library was created in Microsoft Visual C++ 6.0 such that the platform could be incorporated into the IIGS system developed at Vanderbilt University. Performance characterization, using two relatively inexpensive host computers, has shown the platform capable of performing simple image processing operations on frames captured from a CCD camera and displaying the processed video data at near-RT rates, both independently of and while running the IIGS system.
Dynamic organ modeling for minimally-invasive cardiac surgery
Author(s):
Stanislaw Szpala;
Marcin Wierzbicki;
Gerard Guiraudon;
Terry Peters
Show Abstract
While most currently available minimally invasive robotically assisted cardiac surgical systems do not employ 3D image guidance, such support can be generated using preoperative images such as CT. Previously, we demonstrated a virtual model of the thorax with simulated surgical instruments and a pulsating virtual model of the coronary arteries. In this paper we report the overlay of optical endoscopic images of a beating heart phantom with CT-based dynamic volumetric images of the phantom. Spatial matching is obtained through optical tracking of the endoscope and of the phantom, while time synchronization of the display of the model utilizes ECG gating. The spatial accuracy between the optical and virtual images varies from about 0.8 mm to 2.6 mm, while the time discrepancy depends on the frame rate at which the virtual model is refreshed and is typically 50-100 ms. Although the CT-based dynamic images are sufficient for animation of the model, artefacts associated with the image registration prevent seamless animation. Instead, to reconstruct the various phases of heart pulsation, we used a high-quality semi-static image of the diastolic phase of the phantom and warped it to match the CT-based images corresponding to the other phases of the heart pulsation.
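The ECG-gated time synchronization described here amounts to selecting, at display time, the precomputed phase volume that matches the current cardiac phase. A minimal sketch (names and the phase convention are assumptions, not the authors' code):

```python
def gated_volume(t_now, t_last_r, rr_interval, phase_volumes):
    """Pick the precomputed CT phase volume matching the current
    cardiac phase, derived from the time since the last R-peak."""
    phase = ((t_now - t_last_r) % rr_interval) / rr_interval  # in [0, 1)
    idx = int(phase * len(phase_volumes)) % len(phase_volumes)
    return phase_volumes[idx]
```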
Electromagnetically tracked placement of a peripherally inserted central catheter
Author(s):
Laura Sacolick;
Neilesh Patel;
Jonathan Tang;
Elliot Levy;
Kevin R. Cleary
Show Abstract
This paper describes a computer program that utilizes electromagnetic tracking guidance during insertion of peripherally inserted central catheters. Placement of a Peripherally Inserted Central Catheter (PICC) line is a relatively simple, routine procedure in which a catheter is inserted into the veins of the lower arm and threaded up the arm to the vena cava to sit just above the heart. However, the procedure requires x-ray verification of the catheter position and is usually done under continuous fluoroscopic guidance. The computer program is designed to replace fluoroscopic guidance in this procedure and make PICC line placement a bedside procedure, which would greatly reduce the time and resources dedicated to it. The physician first goes through a quick registration procedure to register the patient space with the computer screen coordinates. Once registration is completed, the program provides a continuous, real-time display of the position of the catheter tip overlaid on an x-ray image of the patient on an adjacent computer screen. Both the position and orientation of the catheter tip are shown, and the display is very similar to that produced under fluoroscopy.
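Conceptually, the real-time overlay reduces to mapping each tracked tip position through the registration transform into image pixel coordinates. A minimal sketch under assumed names (the 4x4 transform and pixel scale come from the registration step):

```python
import numpy as np

def tip_to_pixels(tip_tracker, T_tracker_to_image, mm_per_pixel):
    """Map an electromagnetically tracked tip position (tracker frame)
    into pixel coordinates of the registered x-ray image.

    T_tracker_to_image: 4x4 rigid transform from the point-based
    registration step (hypothetical name)."""
    p = np.append(np.asarray(tip_tracker, float), 1.0)  # homogeneous
    x, y, _, _ = T_tracker_to_image @ p
    return x / mm_per_pixel, y / mm_per_pixel
```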
Respiratory motion tracking of skin and liver in swine for Cyberknife motion compensation
Author(s):
Jonathan Tang;
Sonja Dieterich;
Kevin R. Cleary
Show Abstract
In this study, we collected respiratory motion data from external skin markers and internal liver fiducials in several swine. The POLARIS infrared tracking system was used for recording reflective markers placed on the swine's abdomen, and the AURORA electromagnetic tracking system was used for recording two tracked needles implanted in the liver. These data will be used to develop correlation models between external skin movement and internal organ movement, the first step toward compensating for respiratory movement of a lesion. We are also developing a motion simulator for validation of our model and dose verification of mobile lesions in the CYBERKNIFE Suite. We believe that this research could provide significant information toward the development of precise radiation treatment of mobile target volumes.
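One simple form such a correlation model could take is an affine least-squares fit from synchronized skin-marker positions to liver-fiducial positions; the sketch below is illustrative only and not the authors' model:

```python
import numpy as np

def fit_correlation(skin_pos, liver_pos):
    """Least-squares affine model liver ~ skin @ A + b, fitted from
    synchronized recordings: skin_pos (N x 3), liver_pos (N x 3)."""
    X = np.hstack([skin_pos, np.ones((len(skin_pos), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(X, liver_pos, rcond=None)       # (4 x 3) weights
    return W

def predict_liver(skin_pos, W):
    """Predict internal fiducial positions from skin-marker positions."""
    return np.hstack([skin_pos, np.ones((len(skin_pos), 1))]) @ W
```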
Visualization of x-ray microtomography data for a human tooth atlas
Author(s):
Allen Seifert;
Michael J. Flynn;
Kevin Montgomery;
Paul Brown
Show Abstract
Three-dimensional x-ray microtomography is used in this work to assess the internal morphology and mineral density of human tooth specimens. Of particular interest is the demonstration of the character of the distal root canal morphology, which can be as small as 10 microns. Human teeth are individually embedded in a low atomic number material. Each tooth is then identically scanned on an advanced-design bench-top cone-beam microtomography system under controlled conditions. The specimens are scanned using an 80 kVp technique and a CsI(Tl) scintillator mounted via a taper to a thermoelectrically cooled CCD camera, with an overall nominal pixel size of 15 microns at the plane of the specimen. The resolution of the system was independently assessed by scanning a ruby sphere phantom: the full width at half maximum of the plane spread function is nominally 53 microns in the axial direction and 60 microns in the transverse plane. The visualization of the x-ray data consists of several complementary techniques, including a three-dimensional stack of the reconstructed tomogram slices with 30-micron reconstruction voxels, a 360-degree rotating view of the tooth assembled from a sequence of projection images processed for detail contrast enhancement and edge restoration, and a surface model of each tooth. In total, 237 human teeth representing multiple samples of each of the varied tooth types have been individually scanned, analyzed, and visualized to date. The set of tooth data is being compiled into a comprehensive human tooth atlas, which will be made available on CD as a resource for students and investigators studying tooth morphology and mineralization.
Modeling and localization of web-based fusion image using VRML in clinical stroke case
Author(s):
Sang Ho Lee;
Sun Kook Yoo;
Yong Oock Kim M.D.;
Haijo Jung;
Sae Rome Kim;
Mijin Yun M.D.;
Jong Doo Lee M.D.;
Hee-Joung Kim
Show Abstract
Three-dimensional (3D) modeling and visualization of brain fusion images on the World Wide Web (WWW) is an effective way of sharing anatomic and functional information about the brain over the Internet, particularly for morphometry-based research and resident training in neuroradiology and neurosurgery. In this paper, 3D modeling, visualization, and dynamic manipulation techniques, together with localization techniques for obtaining distance measurements inside and outside the brain, are integrated in an interactive and platform-independent manner and implemented over the WWW. The T1-weighted and diffusion-weighted MR data of the stroke case which forms the subject of this study were digitally segmented and used to visualize VRML-fused models in the form of polygonal surfaces based on the marching cubes algorithm. In addition, 2D cross-sectional images were sequentially displayed for the purpose of 3D volume rendering, and user interface tools were embedded with ECMAScript routines for setting the appearance and transparency of the 3D objects. Finally, a 3D measurement tool was built in order to determine the spatial positions and sizes of the 3D objects.
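A minimal sketch of the surface-model pipeline described here, using scikit-image's marching cubes and writing the result as a VRML97 IndexedFaceSet (file layout simplified; not the authors' code):

```python
from skimage import measure

def volume_to_vrml(vol, level, path):
    """Extract an isosurface from a 3-D array with marching cubes and
    write it as a VRML97 IndexedFaceSet, viewable in a VRML browser."""
    verts, faces, _, _ = measure.marching_cubes(vol, level=level)
    with open(path, 'w') as f:
        f.write('#VRML V2.0 utf8\n')
        f.write('Shape { geometry IndexedFaceSet {\n')
        f.write('coord Coordinate { point [\n')
        for v in verts:                      # one vertex per line
            f.write('%.2f %.2f %.2f,\n' % tuple(v))
        f.write('] }\ncoordIndex [\n')
        for a, b, c in faces:                # -1 terminates each face
            f.write('%d %d %d -1,\n' % (a, b, c))
        f.write('] } }\n')
```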
Comparison between skin-mounted fiducials and bone-implanted fiducials for image-guided neurosurgery
Author(s):
Jennifer Rost;
Steven S. Harris;
James D. Stefansic;
Karl Sillay;
Robert L. Galloway Jr.
Show Abstract
Point-based registration for image-guided neurosurgery has become the industry standard. While the use of intrinsic points is appealing because of its retrospective nature, affixing extrinsic objects to the head prior to scanning has been demonstrated to provide much more accurate registrations. Points of reference between image space and physical space are called fiducials; the extrinsic objects which generate those points are fiducial markers. The markers fall into two classes, skin-mounted and bone-implanted, each with distinct advantages and disadvantages. Skin-mounted fiducials are simply stuck on the patient in locations suggested by the manufacturer; however, they can move with traction placed on the skin, fall off, and, perhaps most dangerously, be replaced by the patient. Bone-implanted markers, being rigidly affixed to the skull, do not present such problems, but a minor surgical intervention (analogous to dental work) must be performed to implant them prior to surgery. Marker type and use has therefore become a decision point for image-guided surgery. We have performed a series of experiments in an attempt to better quantify aspects of the two types of markers so that better informed decisions can be made. We created a phantom composed of a full-size plastic skull [Wards Scientific Supply] with a 500 ml bag of saline placed in the brain cavity; the skull was then sealed. A skin-mimicking material, DragonSkin [Smooth-On Company], was painted onto the surface and allowed to dry. Skin-mounted fiducials [Medtronic-SNT] and bone-implanted markers [Z-Kat] were placed on the phantom. In addition, three further bone-implanted markers were placed (two on the base of the skull and one in the eye socket) for use as targets. The markers were imaged in CT and four MRI sequences (T1-weighted, T2-weighted, SPGR, and a functional series). The markers were also located in physical space using an Optotrak 3020 [Northern Digital Inc.]. Registrations between image space and physical space were performed, and fiducial and target registration errors were determined. Finally, the five bone-implanted markers which penetrated the skin were removed, a traction equivalent to 25% of the weight of the average human head was applied to the "skin" surface, and the target and fiducial registrations were repeated.
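For reference, point-based rigid registration and the fiducial/target registration errors reported here are commonly computed with the SVD-based least-squares method of Arun et al.; a compact sketch (assuming corresponding N x 3 point sets) follows:

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (Arun/Horn, via SVD) mapping the
    physical-space points (moving) onto the image-space points (fixed)."""
    cf, cm = fixed.mean(0), moving.mean(0)
    H = (moving - cm).T @ (fixed - cf)           # 3x3 covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1, 1, d]) @ U.T
    t = cf - R @ cm
    return R, t

def registration_error(fixed, moving, R, t):
    """RMS distance after registration: the fiducial registration error
    (FRE) on the fitted fiducials, the target registration error (TRE)
    when evaluated on held-out target markers."""
    resid = fixed - (moving @ R.T + t)
    return np.sqrt((resid ** 2).sum(1).mean())
```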
Three-dimensional stereo reconstruction of a mass of radioactive coils after embolization of cerebral aneurysms
Author(s):
Alain Gravel;
Jean Raymond;
Benoit Godbout;
Michel Daronat;
Philippe Leblanc;
Guy Cloutier;
Jacques de Guise
Show Abstract
Endovascular treatment of cerebral aneurysms with radioactive coils may prevent recanalization after embolization. This strategy requires an accurate estimation of the volume of the mass of coils to evaluate the dosimetric success of the intervention. The purpose of this work is to develop a computer-aided method to estimate coil volumes using only two orthogonal angiographic projections. The originality of the method resides in the direct reconstruction of two 3-D contours of the mass of coils and the subsequent variational interpolation of the 3-D surface in order to estimate its volume. Validated by simulations, the reconstruction algorithms could estimate the enclosed volumes with an average error of 2.9% and a variability of 2.5%. In addition, the feasibility of the method was demonstrated using clinical images. Results showed that this reconstruction technique can quickly generate an accurate and realistic 3-D shape of the mass of coils without interfering with an ongoing clinical procedure.
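A speculative sketch of the variational-interpolation step, using a thin-plate-spline RBF to build an implicit surface from the reconstructed 3-D contours and integrating the enclosed volume on a sample grid (conventions here, e.g. outward normals and the sign of the interior, are assumptions rather than the authors' formulation):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def coil_mass_volume(contour_pts, normals, grid, voxel_vol):
    """Variational implicit surface from 3-D contour points: on-surface
    points get value 0, points offset outward along the (unit) normals
    get value 1, and a thin-plate-spline RBF interpolates a scalar field
    whose zero level set encloses the mass.

    grid: (M x 3) regular sample points; voxel_vol: volume of one cell."""
    pts = np.vstack([contour_pts, contour_pts + 2.0 * normals])
    vals = np.hstack([np.zeros(len(contour_pts)), np.ones(len(contour_pts))])
    field = RBFInterpolator(pts, vals, kernel='thin_plate_spline')
    inside = field(grid) < 0.0       # negative side of the zero level set
    return inside.sum() * voxel_vol
```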
A haptic-enhanced 3D real-time interactive needle insertion simulation for prostate brachytherapy
Author(s):
Xiaogang Wang;
Aaron Fenster
Show Abstract
A virtual-reality-based surgical simulation can improve the accuracy and quality of prostate brachytherapy by facilitating surgeon training, rehearsal, and intra-operative assistance. In this paper, we describe a prototype 3D real-time interactive simulation environment for needle insertion and seed implantation in prostate brachytherapy. A restricted 3D ChainMail method, derived from the original 3D ChainMail method by our modification, is used to account for dynamic soft-tissue deformation during needle insertion. We improved the neighbor-searching algorithm of the original 3D ChainMail method to enable a complete search for arbitrary objects, including strictly concave ones. A direct manipulation model for needle-tissue interaction was implemented, and haptic feedback is provided to enhance realism and training outcome. For simplicity and efficacy, we adopted a distributed system structure functionally incorporating two software modules: a visual simulation module and a haptic simulation module. The simulation was demonstrated using four key steps of the brachytherapy procedure: 1) specification of seed positions inside the prostate; 2) placement of a needle at a specified entry point and trajectory; 3) insertion of the needle into the prostate, consisting of two basic sub-steps, membrane contraction and penetration; and 4) retraction of the needle after seed implantation. The preliminary results of the simulation are promising.
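To convey the flavor of ChainMail-style deformation (in one dimension only, and not the authors' restricted 3D variant), the following sketch moves one element and propagates the displacement outward until all link-length constraints are satisfied:

```python
import numpy as np

def chainmail_1d(x, idx, new_pos, d_min=0.5, d_max=1.5):
    """Minimal 1-D ChainMail: move element idx to new_pos, then push
    neighbours just enough to keep every link length within
    [d_min, d_max]. Propagation stops where constraints already hold."""
    x = np.asarray(x, float).copy()
    x[idx] = new_pos
    for i in range(idx + 1, len(x)):        # propagate to the right
        d = x[i] - x[i - 1]
        if d < d_min:
            x[i] = x[i - 1] + d_min         # link too compressed: push
        elif d > d_max:
            x[i] = x[i - 1] + d_max         # link too stretched: pull
        else:
            break                           # link legal: stop propagating
    for i in range(idx - 1, -1, -1):        # propagate to the left
        d = x[i + 1] - x[i]
        if d < d_min:
            x[i] = x[i + 1] - d_min
        elif d > d_max:
            x[i] = x[i + 1] - d_max
        else:
            break
    return x
```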
Automating prostate capsule contour estimation for 3D model reconstruction using shape and histological features
Author(s):
Rania Hussein;
Frederic McKenzie;
Ravindra Joshi
Show Abstract
Currently there are few parameters that can be used to compare the efficiency of different forms of surgical removal of the cancerous prostate. An accurate assessment of the percentage and depth of extra-capsular soft tissue removed with the prostate by the various surgical techniques can help surgeons determine the appropriateness of surgical approaches. In order to facilitate the reconstruction phase, and thus provide more accurate quantitative results when analyzing the images, it is essential to automatically identify the capsule line that separates the prostate capsule tissue from the extra-capsular tissue. However, the prostate capsule is sometimes unrecognizable due to the naturally occurring intrusion of muscle into the prostate gland. In regions where the capsule disappears, its contour can be reconstructed by drawing a continuing contour line based on the natural shape of the prostate gland. In this paper, we present mathematical equations that provide a standard prostate shape at various stages; this mathematical model can be used to infer the missing part of the capsule. It is also used in conjunction with the Generalized Hough Transform to automatically determine the capsule line, thus providing more accurate results in the reconstruction phase as well as in the calculation of the percentage of coverage and the depth of the extra-capsular tissue.
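A bare-bones, translation-only Generalized Hough Transform (R-table construction plus voting) is sketched below to illustrate the detection step; the full method would also handle rotation and scale, and all names here are illustrative:

```python
import numpy as np

def build_r_table(template_edges, center, grad_dir, n_bins=64):
    """R-table: for each template edge pixel, store the offset to the
    reference point, indexed by quantized gradient direction."""
    table = [[] for _ in range(n_bins)]
    ys, xs = np.nonzero(template_edges)
    for y, x in zip(ys, xs):
        b = int(((grad_dir[y, x] + np.pi) / (2 * np.pi)) * n_bins) % n_bins
        table[b].append((center[0] - y, center[1] - x))
    return table

def ght_accumulate(image_edges, grad_dir, table, n_bins=64):
    """Vote for candidate reference-point positions; the accumulator
    maximum locates the (translated) shape in the image."""
    acc = np.zeros(image_edges.shape)
    ys, xs = np.nonzero(image_edges)
    for y, x in zip(ys, xs):
        b = int(((grad_dir[y, x] + np.pi) / (2 * np.pi)) * n_bins) % n_bins
        for dy, dx in table[b]:
            yy, xx = y + dy, x + dx
            if 0 <= yy < acc.shape[0] and 0 <= xx < acc.shape[1]:
                acc[yy, xx] += 1
    return acc
```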
An interactive system for volume segmentation in computer-assisted surgery
Author(s):
Tobias Kunert;
Tobias Heimann;
Andre Schroter;
Max Schobinger;
Thomas Bottger;
Matthias Thorn;
Ivo Wolf;
Uwe Engelmann;
Hans-Peter Meinzer
Show Abstract
Computer-assisted surgery aims at decreased surgical risk and reduced recovery time for patients. However, its use is still limited to complex cases because of the high effort involved, much of it caused by extensive medical image analysis. In particular, image segmentation requires a great deal of manual work, and surgeons and radiologists suffer from the usability problems of many workstations.
In this work, we present a dedicated workplace for interactive segmentation integrated within the CHILI (tele-)radiology system. The software offers numerous improvements with respect to its graphical user interface, the segmentation process, and the segmentation methods. We point out important software requirements and give insight into the concepts that were implemented. Examples and applications further illustrate the software system.
Extraction and analysis of coronary tree from single x-ray angiographies
Author(s):
Hildegard Koehler;
Michel Couprie;
Sahla Bouattour;
Dietrich Paulus
Show Abstract
Coronary vessel abnormalities can lead to insufficient blood circulation in the heart muscle. One way to monitor and detect disruptions of this supply is the continuous observation of the patient's vessel structure over a certain period of time.
In this paper we propose a reliable method for extracting the main vessels, and most notably also fine ramifications, in noisy angiographies with uneven background. We structure the extracted centerlines into a graph, thus obtaining information about the depth of branching and the number of visible vessels in the coronary tree. These quantitative measurements serve as indicators to categorize the patient's state of recovery and can be compared across earlier or later disease stages. We evaluated our method by comparing the results with hand-segmented images.
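One plausible way to derive such indicators from a binary vessel segmentation is to skeletonize it and count branch and end points, as in this illustrative sketch (a proxy for the paper's graph construction, not its actual method):

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def vessel_tree_stats(vessel_mask):
    """Skeletonize a binary vessel segmentation and return simple
    quantitative indicators: the number of branch points (a proxy for
    the depth of branching) and the number of terminal vessel ends."""
    skel = skeletonize(vessel_mask)
    # neighbour count for every skeleton pixel (8-connectivity):
    # the 3x3 box sum includes the pixel itself, hence the minus 1
    nbrs = ndimage.convolve(skel.astype(int), np.ones((3, 3), int),
                            mode='constant') - 1
    branch_pts = int(np.sum(skel & (nbrs >= 3)))
    end_pts = int(np.sum(skel & (nbrs == 1)))
    return branch_pts, end_pts
```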
Error diffusion applied to the manipulation of liquid-crystal display subpixels
Author(s):
William J. Dallas;
Jiahua Fan;
Hans Roehrig;
Elizabeth A. Krupinski
Show Abstract
Flat-panel displays based on liquid crystal technology are becoming widely used in the medical imaging arena. Despite the impressive capabilities of presently-existing panels, some medical images push their boundaries. We are working with mammograms that contain up to 4800 x 6400 14-bit pixels. Stated differently, these images contain 30 mega-pixels each. In the standard environment, for film viewing, the mammograms are hung four-up, i.e. four images are located side by side.
Because many of the LCD panels used for monochrome display of medical images are based on color models, the pixels of the panels are divided into sub-pixels. These sub-pixels vary in their numbers and in the degrees of independence. Manufacturers have used both spatial and temporal modulation of these sub-pixels to improve the quality of images presented by the monitors.
In this presentation we show how the sub-pixel structure of some present and future displays can be used to attain higher spatial resolution than the full-pixel resolution specification would suggest, while also providing increased contrast resolution. The error diffusion methods we discuss provide a natural way of controlling sub-pixels and implementing trade-offs: in smooth regions of the image, contrast resolution can be maximized; in rapidly varying regions, spatial resolution can be favored.
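For concreteness, classic Floyd-Steinberg error diffusion on a full-pixel grayscale image (values assumed normalized to [0, 1]) looks as follows; the sub-pixel variants discussed here diffuse the error over the finer sub-pixel grid instead:

```python
import numpy as np

def error_diffuse(img, levels):
    """Floyd-Steinberg error diffusion: quantize each pixel to the
    nearest displayable level and push the residual onto unvisited
    neighbours, trading spatial for contrast resolution."""
    out = img.astype(float).copy()
    h, w = out.shape
    q = levels - 1
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.round(old * q) / q      # nearest displayable level
            err = old - new                  # residual to diffuse
            out[y, x] = new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```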
Image data acquisition and segmentation for accurate modeling of the calvarium
Author(s):
Georg Eggers M.D.;
Sascha Dauber;
Werner Korb;
Thomas Welzel M.D.;
Ruediger Marmulla M.D.;
Stefan Hassfeld M.D.
Show Abstract
Accuracy of the patient model is a critical point in robot-assisted surgery. When performing craniotomies, the dura mater must not be perforated; hence bone width is of particular interest. The influence of imaging and segmentation on the accuracy of the width of the bone model was investigated. A human cadaver head was scanned with a CT scanner under a variety of image acquisition parameters. Bone was segmented from these image data sets using threshold-based segmentation with different settings for the lower threshold, and surface models of the bone were generated from the resulting volume data sets. The real width of the bone of the skull was measured at several positions. Using fiducial-marker registration, these measured values were compared to the corresponding positions in the bone models. CT imaging with a slice thickness and slice distance of 1.5 to 2 mm, together with segmentation of bone at a lower threshold of 300 or 400 Hounsfield units, resulted in models with an average accuracy of 0.4 mm for bone width. However, at some points these models were too thin by up to 0.9 mm. More accurate models are needed; it remains to be evaluated whether CT imaging at higher resolution or more sophisticated segmentation algorithms can reduce this scatter.
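The width measurement itself can be illustrated by thresholding an HU profile sampled across the skull; the sketch below (a hypothetical helper, not the authors' pipeline) shows how the chosen lower threshold directly changes the modeled bone width:

```python
def bone_width(hu_profile, spacing_mm, lower_threshold):
    """Width of the bone along a sampled profile of Hounsfield values at
    regular spacing: the extent of the longest run above the threshold."""
    above = [v >= lower_threshold for v in hu_profile]
    best = run = 0
    for a in above:
        run = run + 1 if a else 0
        best = max(best, run)
    return best * spacing_mm

# e.g., comparing the two thresholds used in the study:
# bone_width(profile, 0.5, 300) vs. bone_width(profile, 0.5, 400)
```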
Adaptive cubic interpolation of CT slices for maximum intensity projections
Author(s):
Junghyun Kwon;
Ji-Woong Yi;
Samuel M. Song
Show Abstract
Three-dimensional visualization of medical images using maximum intensity projection (MIP) requires isotropic volume data for the generation of realistic and undistorted 3-D views. However, the distance between CT slices is usually larger than the pixel spacing within each slice. Therefore, before the MIP operation, the axial slice images must be interpolated to prepare an isotropic data set. Of the many available interpolation techniques, linear interpolation is the most commonly used for slice interpolation due to its computational simplicity. However, as the resulting MIPs depend heavily upon the variance in the interpolated slices (due to the inherent noise), MIPs of linearly interpolated slices suffer from horizontal streaking artifacts when the projection direction is parallel to the axial slices (e.g., sagittal and coronal views). In this paper, we propose an adaptive cubic interpolation technique to minimize these horizontal streaking artifacts, which arise from the variation of the variance across interpolated slices. The proposed technique, designed to yield a near-constant variance distribution across interpolated slices, is shown to be superior to linear interpolation, completely eliminating the horizontal streaking artifacts in MIPs of both a simulated data set and a real CT data set.
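To see the role of the interpolation order, the following sketch resamples a CT volume along the slice axis with scipy and forms a coronal MIP; order=1 reproduces the linear baseline while order=3 is plain (non-adaptive) cubic interpolation, the starting point for the proposed scheme:

```python
from scipy import ndimage

def mip_after_interpolation(vol, z_factor, order):
    """Resample a (slices, rows, cols) volume along the slice axis to
    isotropic spacing, then take a coronal maximum intensity projection.

    order=1: linear interpolation (prone to horizontal streaks);
    order=3: ordinary cubic interpolation."""
    iso = ndimage.zoom(vol, (z_factor, 1, 1), order=order)
    return iso.max(axis=1)          # project along the row axis: coronal MIP
```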
Imaging and the new biology: What’s wrong with this picture?
Author(s):
Michael W. Vannier M.D.
Show Abstract
The Human Genome has been defined, giving us one part of the equation that stems from the central dogma of
molecular biology. Despite this awesome scientific achievement, the correspondence between genomics and imaging is
weak, since we cannot predict an organism's phenotype from even perfect knowledge of its genetic complement.
Biological knowledge comes in several forms, and the genome is perhaps the best known and most completely
understood type. Imaging creates another form of biological information, providing the ability to study morphology,
growth and development, metabolic processes, and diseases in vitro and in vivo at many levels of scale.
The principal challenge in biomedical imaging for the future lies in the need to reconcile the data provided by one or
multiple modalities with other forms of biological knowledge, most importantly the genome, proteome, physiome, and
other "-ome's."
To date, the imaging science community has not set a high priority on the unification of their results with genomics,
proteomics, and physiological functions in most published work. Images are relatively isolated from other forms of
biological data, impairing our ability to conceive and address many fundamental questions in research and clinical
practice.
This presentation will explain the challenge of biological knowledge integration in basic research and clinical
applications from the standpoint of imaging and image processing. The impediments to progress, including the
isolation of the imaging community from the mainstream of new and future biological science, will be identified
so that the critical and immediate need for change can be highlighted.