Proceedings Volume 5029

Medical Imaging 2003: Visualization, Image-Guided Procedures, and Display

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 30 May 2003
Contents: 12 Sessions, 88 Papers, 0 Presentations
Conference: Medical Imaging 2003
Volume Number: 5029

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Rendering
  • Modeling I
  • Cardiac
  • Intraprocedural Imaging I
  • Intraprocedural Imaging II
  • Modeling and Hepatic Surgery
  • Tracking
  • Modeling II
  • Augmented/Virtual Reality
  • Medical Displays
  • Simulation and Planning
  • Poster Session
Rendering
Rendering an archive in three dimensions
David Asher Leiman, Claire Twose, Teresa Y. H. Lee, et al.
We examine the requirements for a publicly accessible, online collection of three-dimensional biomedical image data, including data yielded by radiological processes such as MRI and ultrasound. Intended as a repository and distribution mechanism for such medical data, the National Online Volumetric Archive (NOVA) was created as a case study aimed at identifying the multiple issues involved in realizing a large-scale digital archive. In this paper we discuss factors such as the current legal and health-information privacy policies affecting the collection of human medical images, the retrieval and management of information, and the technical implementation. The project culminated in the launch of a website that includes downloadable datasets and a prototype data submission system.
Points-based reconstruction and rendering of 3D shapes from large volume datasets
In the field of medical imaging, researchers often need to visualize many 3D datasets to extract the information they contain. However, the huge volumes of data generated by modern medical imaging devices continually challenge real-time processing and rendering algorithms. Spurred by the success of points-based rendering (PBR) in computer graphics for rendering very large meshes, we propose a new algorithm that uses points as the basic primitive of surface reconstruction and rendering, to interactively reconstruct and render very large volume datasets. By exploiting the special characteristics of medical image datasets, we obtain a fast and efficient points-based reconstruction and rendering algorithm that runs on a common PC. The experimental results show that the algorithm is feasible and efficient.
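The abstract does not give implementation details, but the core idea of using boundary voxels themselves as the rendering primitive, instead of a triangulated isosurface, can be sketched in a few lines. The isovalue threshold and the 6-neighbourhood surface test below are illustrative assumptions, not the authors' method:

```python
import numpy as np

def extract_surface_points(volume, iso):
    """Collect voxels on the object boundary: inside the isovalue but
    adjacent to at least one outside voxel (6-connectivity)."""
    inside = volume >= iso
    padded = np.pad(inside, 1, constant_values=False)
    core = padded[1:-1, 1:-1, 1:-1]
    # True only where all six face neighbours are also inside
    all_neighbours_inside = (
        padded[:-2, 1:-1, 1:-1] & padded[2:, 1:-1, 1:-1]
        & padded[1:-1, :-2, 1:-1] & padded[1:-1, 2:, 1:-1]
        & padded[1:-1, 1:-1, :-2] & padded[1:-1, 1:-1, 2:]
    )
    surface = core & ~all_neighbours_inside
    return np.argwhere(surface)  # (N, 3) array of point coordinates

# A solid ball: its extracted surface points form a hollow shell.
z, y, x = np.mgrid[:32, :32, :32]
ball = ((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2 <= 100).astype(float)
pts = extract_surface_points(ball, 0.5)
```

The resulting point set can be handed directly to a point splatter, skipping mesh generation entirely.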
Interactive visualization of very large medical datasets using point-based rendering
Christof Nuber, Ralph W. Bruckschen, Bernd Hamann, et al.
Visualizing large, high-resolution volumetric datasets is a challenging task. With every new generation of scanners the available resolution increases, and state-of-the-art approaches cannot be extended to handle these large amounts of data, whether the limiting factor is the nature of the algorithm or the available hardware. Current off-the-shelf graphics hardware allows interactive texture-based volume rendering of volumetric datasets only up to a resolution of 512³ datapoints. We present a method which allows us to visualize even higher-resolution volumetric datasets. Our approach provides images similar to texture-based volume-rendering techniques at interactive frame-rates and full resolution. It is based on an out-of-core point-based rendering approach: we first preprocess the data, grouping the points within the dataset according to their color on disc, and read them when needed from disc to stream them immediately to the rendering hardware. The high resolution of the dataset and the density of the datapoints allow us to use a pure point-based rendering approach; the density of points with equal or similar values within the dataset can be considered high enough to display regions and contours using points only. With our approach we achieve interactive frame-rates for volumes exceeding 512³ voxels. The images generated are similar to those produced by volume-rendering approaches combined with sharp transfer functions, where only a limited number of values is selected for display. With our data-stream-based approach, interactivity is not restricted to navigation through the dataset itself; it also allows us to change the values of interest in real time, enabling us to change display parameters and thus look for interesting and important features and contours interactively. For a human brain extracted from a 753×1050×910 coloured dataset (courtesy of A. W. Toga, UCLA) we achieved frame-rates of 20 frames/second and more, depending on the values selected. We describe a new way to interactively display high-resolution datasets without any loss of detail. By using points instead of textured volumes we reduce the amount of data to be transferred to the graphics hardware compared to hardware-supported texture-based volume rendering. Using a data organization optimized for reading from disk, we reduce the number of disk seeks, and thus the overall update time for a change of parameter values.
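The on-disc grouping that makes the streaming fast can be imitated in memory: quantize each voxel's value, then sort voxel coordinates so that every displayable value occupies one contiguous run. A simplified sketch (the bin count and linear quantization are assumptions; the paper groups by color):

```python
import numpy as np

def layout_points_by_value(volume, n_bins=64):
    """Sort voxel coordinates by quantized scalar value, so changing the
    displayed value amounts to a single seek-and-stream of one run."""
    lo, hi = float(volume.min()), float(volume.max())
    bins = ((volume - lo) / (hi - lo + 1e-12) * n_bins).astype(int).clip(0, n_bins - 1)
    flat_bins = bins.ravel()
    order = np.argsort(flat_bins, kind="stable")
    coords = np.argwhere(np.ones(volume.shape, dtype=bool))[order]
    # starts[b] .. starts[b+1] delimits the run of points for bin b
    starts = np.searchsorted(flat_bins[order], np.arange(n_bins + 1))
    return coords, starts

vol = np.arange(27, dtype=float).reshape(3, 3, 3)
coords, starts = layout_points_by_value(vol, n_bins=8)
# points for display value b: coords[starts[b]:starts[b+1]]
```

On disk the same layout means one contiguous read per selected value, which is where the reduction in disk seeks comes from.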
Interactive visual exploration of dynamic SPECT volume data using hybrid rendering
Manfred Hinz, Regina Pohle, Daniel Walz, et al.
Dynamic SPECT (dSPECT) is a novel technique in nuclear medicine imaging. Finding coherent structures within the dataset is the most important part of analyzing dSPECT data. Usually the observer focuses on a certain structure or organ, which is to be identified and outlined. We use a user-guided method in which a starting point is interactively selected and also used to identify the object or structure. To find the starting point for segmentation, we search for the voxel having the maximum intensity in the dataset along the eye beam. Once the data has been segmented by region growing, we render both the segmentation result and the original data in one view. The segmentation result is displayed as a wire mesh that fades over the volume-rendered original data. We use this hybrid rendering method to enable the user to validate the correctness of the segmentation process, making it possible to compare the two objects in one rendition.
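The seed-plus-region-growing combination described here can be illustrated with a minimal flood-fill sketch. The intensity-tolerance homogeneity criterion is an assumption (the abstract does not specify its growing test), and the seed is taken as the global maximum rather than the maximum along an eye ray:

```python
from collections import deque

import numpy as np

def region_grow(volume, seed, tol):
    """Flood-fill from the seed, accepting 6-connected voxels whose
    intensity lies within tol of the seed intensity."""
    seed_val = volume[seed]
    grown = np.zeros(volume.shape, dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not grown[n] and abs(volume[n] - seed_val) <= tol):
                grown[n] = True
                queue.append(n)
    return grown

vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 10.0                   # a bright 3x3x3 "organ"
seed = tuple(np.unravel_index(vol.argmax(), vol.shape))  # brightest voxel
mask = region_grow(vol, seed, tol=1.0)
```

The boolean mask could then be meshed (e.g. with marching cubes) to produce the wire mesh overlaid on the volume rendering.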
Instant relighting of volumetric data
Tien-Tsin Wong, Chi-Wing Fu, Ping-Fu Fung, et al.
During the visualization of volume data, changing the illumination condition provides a way to reveal and emphasize local structures within the volume. However, volume rendering with real-time lighting control is hard: it requires re-computing the amount of light received at each voxel after attenuation whenever the user changes the lighting condition. In this paper, we describe an image-based approach to relight (change the illumination of) the volume in real time. The nature of image-based rendering decouples the rendering time complexity from the resolution of the volume data. Hence, real-time relighting of volumetric data is possible even when shadow (attenuation) is taken into account. Instead of re-computing all the lighting information, we pre-render (sample) a set of reference images of the volumetric data under different illumination conditions. With these reference images, we are able to relight the volume under the desired lighting condition by interpolating and superimposing pixel values. The relighting can be performed on ordinary PCs.
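The interpolation step reduces to blending pixel values between the pre-rendered conditions; no volume traversal happens at relighting time. A one-parameter (light-angle) sketch, with a two-image linear blend as a simplifying assumption in place of the paper's full interpolation scheme:

```python
import numpy as np

def relight(ref_images, ref_angles, angle):
    """Interpolate between the two pre-rendered reference images whose
    light angles bracket the requested angle. Cost is independent of
    the underlying volume's resolution."""
    ref_angles = np.asarray(ref_angles, dtype=float)
    i = int(np.clip(np.searchsorted(ref_angles, angle), 1, len(ref_angles) - 1))
    a0, a1 = ref_angles[i - 1], ref_angles[i]
    t = (angle - a0) / (a1 - a0)
    return (1.0 - t) * ref_images[i - 1] + t * ref_images[i]

# two reference renderings: lit from 0 degrees and from 90 degrees
refs = [np.zeros((4, 4)), np.ones((4, 4))]
img = relight(refs, [0.0, 90.0], 45.0)
```

Denser angular sampling of reference images trades preprocessing time and storage for relighting fidelity.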
Modeling I
Navigation aids and real-time deformation modeling for open liver surgery
This contribution presents a novel method for image-guided navigation in oncological liver surgery. It enables the registration of deeply located intrahepatic structures to be maintained during the resection. For this purpose, navigation aids localizable by an electromagnetic tracking system are anchored within the liver. Position and orientation data gained from the navigation aids are used to parameterize a real-time deformation model. This approach enables, for the first time, real-time monitoring of target structures even deep within the intraoperatively deformed liver. The dynamic behavior of the deformation model has been evaluated with a silicone phantom. First experiments have been carried out with pig livers ex vivo.
Prostate segmentation in 3D US images using the cardinal-spline-based discrete dynamic contour
Mingyue Ding, Congjin Chen, Yunqiu Wang, et al.
Our slice-based 3D prostate segmentation method comprises three steps. 1) Initialization: we chose more than three points on the boundary of the prostate along one direction and used a Cardinal spline to interpolate an initial prostate boundary, which was divided into vertices. 2) Boundary deformation: at each vertex, the internal and external forces were calculated; these forces drove the evolving contour to the true boundary of the prostate. 3) 3D prostate segmentation: we propagated the final contour in the initial slice to adjacent slices and refined them until the prostate boundaries in all slices were segmented. Finally, we calculated the volume of the prostate from a 3D mesh surface of the prostate. Experiments with 3D US images of six patient prostates demonstrated that our method efficiently avoided being trapped in local minima, with an average percentage error of 4.8%. The average percentage error in measuring the prostate volume was less than 5% with respect to manual planimetry.
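The Cardinal spline used for the initial boundary is a standard construction. A minimal closed-contour version interpolating the picked points could look like the following (tension 0.5 is the Catmull-Rom special case; the sample count is arbitrary):

```python
import numpy as np

def cardinal_spline(points, n_samples=100, s=0.5):
    """Closed Cardinal-spline contour through the control points.
    s = 0.5 gives the Catmull-Rom special case."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    out = []
    for i in range(n):
        p0, p1, p2, p3 = pts[(i - 1) % n], pts[i], pts[(i + 1) % n], pts[(i + 2) % n]
        m1 = s * (p2 - p0)          # tangent at p1
        m2 = s * (p3 - p1)          # tangent at p2
        for t in np.linspace(0.0, 1.0, n_samples // n, endpoint=False):
            # cubic Hermite basis functions
            h00 = 2 * t**3 - 3 * t**2 + 1
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out.append(h00 * p1 + h10 * m1 + h01 * p2 + h11 * m2)
    return np.array(out)

square = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]
contour = cardinal_spline(square, n_samples=80)
```

The sampled contour points would then serve as the vertices on which the internal and external forces act.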
Augmenting intraoperative MRI with preoperative fMRI and DTI by biomechanical simulation of brain deformation
Simon Keith Warfield, Florin Talos, Corey Kemper, et al.
The key challenge facing the neurosurgeon during neurosurgery is to remove from the brain as much tumor tissue as possible while preserving healthy tissue and minimizing the disruption of critical anatomical structures. The purpose of this work was to demonstrate the use of biomechanical simulation of brain deformation to project preoperative fMRI and DTI data into the coordinate system of the patient brain deformed during neurosurgery. This projection enhances the visualization of relevant critical structures available to the neurosurgeon. Our approach to tracking brain changes during neurosurgery has been previously described. We applied this procedure to warp preoperative fMRI and DTI to match intraoperative MRI. We constructed visualizations of preoperative fMRI and DTI, and intraoperative MRI, showing a close correspondence between the matched data. We have previously demonstrated that our biomechanical simulation of brain deformation can be executed entirely during neurosurgery. We previously used a generic atlas as a substitute for patient-specific data. Here we report the successful alignment of patient-specific DTI and fMRI preoperative data into the intraoperative configuration of the patient's brain. This can significantly enhance the information available to the neurosurgeon.
Inverse technique for combined model and sparse data estimates of brain motion
Karen E. Lunn, Keith D. Paulsen, David W. Roberts, et al.
Model-based approaches to correct for brain shift in image-guided neurosurgery systems have shown promising results. Despite the initial success of such methods, the complex mechanical behavior of the brain under surgical loads makes it likely that model predictions could be improved with the incorporation of real-time measurements of tissue shift in the OR. To this end, an inverse method has been developed using sparse data and model constraints to generate estimates of brain motion. Based on methodology from ocean circulation modeling, this computational scheme combines estimates of statistical error in forcing conditions with a least squares minimization of the model-data misfit to directly estimate the full displacement solution. The method is tested on a 2D simulation based on clinical data in which ultrasound images were co-registered to the preoperative MR stack. Calculations from the 2D forward model are used as the 'gold standard' to which the inverse scheme is compared. Initial results are promising, though further study is needed to ascertain its value in 3D shift estimates.
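The paper's scheme is derived from ocean-circulation inverse modeling; as a generic stand-in, fusing a model-predicted displacement field with sparse intraoperative measurements can be posed as regularized least squares. Everything below (the observation operator H, the weight alpha, the quadratic penalty) is an illustrative assumption, not the authors' formulation:

```python
import numpy as np

def fuse_model_and_data(u_model, H, d, alpha):
    """Estimate displacements u minimising
        ||H u - d||^2 + alpha * ||u - u_model||^2,
    i.e. fit the sparse measurements d while staying close to the
    biomechanical model's prediction u_model."""
    n = len(u_model)
    A = H.T @ H + alpha * np.eye(n)
    b = H.T @ d + alpha * u_model
    return np.linalg.solve(A, b)

# four nodes, all observed directly (H = identity); model predicts no shift
u_model = np.zeros(4)
H = np.eye(4)
d = np.ones(4)                                   # measurements: 1 mm shift
u = fuse_model_and_data(u_model, H, d, alpha=1.0)  # splits the difference
```

Raising alpha expresses more trust in the model; in practice the weighting would come from the statistical error estimates the abstract mentions.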
Laser range scanning for cortical surface characterization during neurosurgery
Tuhin K. Sinha, David Marshall Cash, Robert J. Weil, et al.
In this work, preliminary quantitative results are presented that characterize the cortical surface during neurosurgery using a laser range scanner. Intra-operative cortical surface data is collected from patients undergoing cortical resection procedures and is registered to patient-specific pre-operative data. After the skull bone-flap has been removed and the dura retracted, a laser range scanner (LRS) is used to capture range data of the brain's surface. An RGB bitmap is also captured at the time of scanning, which permits texturing of the range data. The textured range data is then registered to textured surfaces of the brain generated from pre-operative images. Registration is provided by a rigid-body transform that is based on iterative-closest point transforms and mutual information. Preliminary results using the LRS during surgery demonstrate a good visual alignment between intra-operative and pre-operative data. The registration algorithm is able to register surfaces using both sulcal and vessel patterns. Target registration errors on the order of 2mm have been achieved using the registration algorithm in a clinical setting. Results from the analysis of laser range scan data suggest that the unique feature-rich cortical surface may provide a robust method for intra-operative registration and deformation measurement. Using laser range scan data as a non-contact method of acquiring spatially relevant data in a clinical setting is a novel application of this technology. Furthermore, the work presented demonstrates a viable framework for current IGS systems to computationally account for brain shift.
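The registration described combines iterative-closest-point transforms with mutual information; the geometric half can be sketched with brute-force matching and a closed-form rigid update (the MI term and the textured-surface details are omitted, so this is only the ICP skeleton, not the authors' full algorithm):

```python
import numpy as np

def best_rigid(A, B):
    """Least-squares rotation/translation mapping point set A onto B (Kabsch)."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def icp(src, dst, iters=20):
    """Iterative closest point: match each source point to its nearest
    target point, solve for the rigid transform, repeat."""
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]              # brute-force nearest neighbours
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    R, t = best_rigid(src, cur)                  # composite transform
    return R, t, cur

# recover a known 5-degree rotation plus small translation of a point grid
g = np.stack(np.meshgrid(np.arange(5) - 2.0, np.arange(5) - 2.0), -1).reshape(-1, 2)
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta), np.cos(theta)]])
dst = g @ R_true.T + np.array([0.1, -0.05])
R, t, aligned = icp(g, dst)
```

In the clinical setting the matched entities are range-scan points and a pre-operative surface rather than two identical clouds, and the mutual-information term weighs the texture agreement.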
Modeling interaction for image-guided procedures
Daniela Gorski Trevisan, Jean Vanderdonckt, Benoit M. M. Macq, et al.
Compared to conventional interfaces, image-guided surgery (IGS) interfaces contain a richer variety of more complex objects and interaction types. The main interactive characteristic emerging from such systems is that the interaction focus is shared between the physical space, where the surgeon interacts with the patient using surgical tools, and the digital world, where the surgeon interacts with the system. This split produces two different, likely inconsistent interfaces, and the resulting interaction discontinuities break the natural workflow by forcing the user to switch between operation modes. Our work addresses these features by focusing on model, interaction, and ergonomic integrity analysis under the augmented reality paradigm applied to IGS procedures, and more specifically to a neurosurgery case study. We followed a model-based methodology, with new extensions to support interaction technologies and to ensure interaction continuity according to the IGS system requirements. As a result, designers can discover errors early in the development process and can produce an efficient interface design that coherently integrates constraints favoring continuous rather than discrete interaction, avoiding possible inconsistencies.
Cardiac
Interactive volume rendering of multimodality 4D cardiac data with the use of consumer graphics hardware
Frank Enders, Magnus Strengert, Sabine Iserhardt-Bauer, et al.
Interactive multimodality 4D volume rendering of cardiac images is challenging due to several factors. Animated rendering of fused volumes with multiple lookup tables (LUT) and interactive adjustments of relative volume positions and orientations must be performed in real time. In addition it is difficult to visualize the myocardium separated from the surrounding tissue on some modalities, such as MRI. In this work we propose to use software techniques combined with hardware capabilities of modern consumer video cards for real-time visualization of time-varying multimodality fused cardiac volumes for diagnostic purposes.
Estimating the actual dose delivered by intravascular coronary brachytherapy using geometrically correct 3D modeling
Andreas Wahle, John J. Lopez, Edward C. Pennington, et al.
Intravascular brachytherapy has been shown to reduce recurrence of in-stent restenosis in coronary arteries. For beta radiation, application time is determined from the source activity and the angiographically estimated vessel diameter. Conventionally used dosing models assume a straight vessel with the catheter centered and a constant-diameter circular cross section. The aim of this study was to compare the actual dose delivered during in-vivo intravascular brachytherapy with the target range determined from the patient's prescribed dose. Furthermore, differences in dose distribution between a simplified tubular model (STM) and a geometrically correct 3-D model (GCM), obtained by fusion of biplane angiography and intravascular ultrasound, were quantified. The tissue enclosed by the segmented lumen/plaque and media/adventitia borders was simulated using a structured finite-element mesh. The beta-radiation sources were modeled as 3-D objects in their angiographically determined locations. The accumulated dose was estimated using a fixed distance function based on the patient-specific radiation parameters. For visualization, the data were converted to VRML with the accumulated doses represented by color encoding. The statistical comparison between the STM and GCM in 8 patients showed that the STM significantly underestimates the delivered dose and its variability. The analysis revealed substantial deviations from the target dose range in curved vessels.
Visualizing electrocardiographic information on a patient specific model of the heart
Stijn De Buck, Frederik Maes, Wim Anne, et al.
The treatment of atrial tachycardia by radio-frequency ablation is a complex, minimally invasive procedure. In most cases the surgeon uses fluoroscopic imaging to guide catheters into the atria. After recording activation potentials from the electrodes on the catheter, which has to be done for different catheter positions, the physiologist has to fuse the activation times derived from the potentials with the fluoroscopic images and extract from these a 3D anatomical model of the atrium. This model provides the necessary information to locate the ablation regions. To alleviate the problem of mentally reconstructing these different sources of information, we propose a virtual environment that visualizes the electrode information on a patient-specific model of the atria. This 3D atrium surface model is derived from preoperatively acquired MR images. Within the system this model is visualized in three different ways: two views correspond to the two fluoroscopic images, which are shown registered in the background, while the third can be freely manipulated by the physiologist. The system allows the physiologist to annotate measurements onto the 3D model. Since the heart is not a static organ, tools are provided to modify previous annotations interactively. The information contained in the measurements can then be dispersed across the heart after extrapolation and interpolation, and subsequently visualized by color coding the surface model. Preliminary clinical evaluation on 30 patients indicates that the combined representation of the activation times and the heart model provides a more thorough and accurate insight into the possible causes of, and solutions to, the tachycardia than would be obtained using solely the fluoroscopic images and mental reconstruction. Unlike other tachycardia visualization software, our approach starts with a patient-specific surface model, which in itself provides extra insight into the problem. Furthermore, it can be used highly interactively by the physiologist as a kind of 3D sketchbook in which he can enter, delete, and modify different measurements and tissue types. Finally, at any stage of the surgery the system can visualize a model containing all information at hand. In this paper we present a system for representing electrocardiographic information that allows the physiologist to mark measurements, which can then be visualized on a patient-specific atrium model by color coding. First clinical evaluation indicates that this approach offers a considerable amount of added value.
Piecewise registration for point-to-surface mapping of cardiac data
Patient-specific mapping of point based cardiac data to a segmented heart surface requires accurate point-to-surface registration. The hypothesis is that anatomical movement that occurs between electrophysiological (E-P) data and cardiac image acquisition causes the pulmonary veins to have different orientations relative to the heart. We propose a piecewise registration of the atria and veins to produce a more accurate matching of these data sets. We developed phantoms and simulated clinical data accounting for noise and motion to demonstrate the robustness of the point-to-surface registration algorithm. Then three sets of patient data were used to evaluate rigid and piecewise registration, totaling three left atria and eight pulmonary veins. Analysis using the Student’s t-test showed the overall average chamfer distance for the three patients was significantly lower with piecewise registration compared to global rigid registration (p-values = 0.01, 0.05, 0.10). Visual analysis of the global and piecewise registered points confirms the importance of considering the plasticity and locomotion generally inherent in dynamic biological systems when attempting to match data sets acquired from such systems.
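The chamfer distance used to score each registration is simply the mean nearest-vertex distance from the mapped E-P points to the segmented surface. A brute-force sketch (adequate for E-P-scale point counts; production systems typically precompute a distance map instead):

```python
import numpy as np

def chamfer_distance(points, surface_vertices):
    """Mean distance from each mapped point to its nearest vertex of the
    segmented surface; lower means a better point-to-surface fit."""
    d2 = ((points[:, None, :] - surface_vertices[None, :, :]) ** 2).sum(-1)
    return float(np.sqrt(d2.min(axis=1)).mean())

# flat surface patch at z = 0; measured points floating 1 mm above it
gx, gy = np.meshgrid(np.arange(10.0), np.arange(10.0))
surface = np.stack([gx.ravel(), gy.ravel(), np.zeros(100)], axis=1)
points = surface + np.array([0.0, 0.0, 1.0])
```

Comparing this metric before and after each candidate transform is exactly the quantity the t-test in the study operates on.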
New approach for quantitative coronary analysis (QCA) tool using multiresolution edge detection technique
Amjed Subhi Al-Fahoum, Jalal Zanoun
Variations in vessel sizes, inter- and intra-observer variability, nontrivial noise distributions, and the fuzzy representation of vessel parameters are issues of concern for enhancing the precision and accuracy of available QCA techniques. In this paper, we present a new multiresolution edge detection algorithm for determining vessel boundaries and enhancing their centerline features. A bank of Canny filters of different resolutions is created. These filters are convolved with vascular images to obtain an edge image; each filter gives its maximum response to the segment of vessel having the same spatial resolution as the filter. The resulting responses across filters of different resolutions are combined to create an edge map for edge optimization. Boundaries of vessels are represented by edge-lines and are optimized on the filter outputs with dynamic programming. The determined edge-lines are used to create the vessel centerline, which is then used to compute percent-diameter stenosis of coronary lesions. The system has been validated using synthetic images, flexible tube phantoms, and real angiograms. It has also been tested on coronary lesions with independent operators for inter-operator and intra-operator variability and reproducibility. The system has been found to be especially robust in complex images involving vessel branching and incomplete contrast filling.
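The filter-bank idea, taking the per-pixel maximum response over several smoothing scales so that each vessel width is picked up by the matching filter, can be sketched with plain Gaussian-derivative filters standing in for the full Canny pipeline (no hysteresis or non-maximum suppression; the scales are arbitrary choices):

```python
import numpy as np

def _gauss1d(sigma):
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def _smooth(img, sigma):
    """Separable Gaussian smoothing via 1-D convolutions."""
    k = _gauss1d(sigma)
    img = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, img)

def multiscale_edge_map(image, sigmas=(1.0, 2.0, 4.0)):
    """Gradient magnitude at several Gaussian scales; keep the per-pixel
    maximum, and record which scale produced it."""
    responses = []
    for s in sigmas:
        gy, gx = np.gradient(_smooth(image.astype(float), s))
        responses.append(s * np.hypot(gx, gy))   # scale-normalised response
    stack = np.stack(responses)
    return stack.max(0), stack.argmax(0)
```

The winning-scale index per pixel is what lets the algorithm adapt to vessels of different calibers within one angiogram.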
Intraprocedural Imaging I
Fluoroscopy servoing using translation/rotation decoupling in an A/P view
Mihai L. Mocanu, Alexandru Patriciu, Dan S. Stoianovici, et al.
This paper presents a fluoroscopy servoing algorithm for automatic alignment of a needle using a medical robot during interventional procedures. The goal of this work is to provide physicians with assistance in needle alignment during minimally invasive procedures under fluoroscopy imaging. This may also help reduce radiation exposure for the physician and provide more accurate targeting of internal anatomy. The paper presents the overall concept and describes our implementation along with the initial laboratory results and studies in the interventional suite. The algorithm is based on a single anterior/posterior fluoroscopic image. Future work will be aimed at demonstrating the clinical feasibility of the method.
Three-dimensional guide wire visualization from 3DRA using monoplane fluoroscopic imaging
A new method has been developed that, based on tracking a guide wire in monoplane fluoroscopic images, visualizes the approximate guide wire position in the 3D vasculature, which is obtained prior to the intervention with 3D rotational X-ray angiography (3DRA). The method consists of four stages: (i) tracking the guide wire in 2D fluoroscopic images, (ii) projecting the guide wire from the 2D fluoroscopic image back into the 3DRA image to determine possible locations of the guide wire in 3D, (iii) determining the approximate guide wire location in the 3DRA image based on image features, and (iv) visualizing the vessel and the guide wire location found. The method has been evaluated using a 3DRA image of a vascular phantom filled with contrast, and monoplane fluoroscopic images of the same phantom without contrast and with a guide wire inserted. Evaluation has been performed for different projection angles. Several feature images for finding the optimal guide wire position have also been compared. Average localization errors for the guide wire and the guide wire tip are in the range of a few millimetres, which shows that 3D visualization of the guide wire with respect to the vasculature is feasible as a navigation tool in endovascular procedures.
Investigation of megavoltage local tomography for detecting setup errors in radiation therapy
We investigate the problem of reconstructing a 3D image of a tumor volume from a set of truncated MV cone-beam projections. Our proposed approach is distinct from previously investigated approaches in that it utilizes a local tomography reconstruction algorithm. Using simulated and experimental MV projection data, we demonstrate that a local cone-beam tomography algorithm can reconstruct accurate images that contain information regarding boundaries and edges inside a localized region of interest. We also demonstrate that the conventional Feldkamp-Davis-Kress cone-beam reconstruction algorithm is not well-suited for reconstructing images of low-contrast structures from truncated cone-beam projections.
Using cortical vessels for patient registration during image-guided neurosurgery: a phantom study
Hai Sun, David W. Roberts, Alex Hartov, et al.
Patient registration, a key step in establishing image guidance, has to be performed in real-time after the patient is anesthetized in the operating room (OR) prior to surgery. We propose to use cortical vessels as landmarks for registering the preoperative images to the operating space. To accomplish this, we have attached a video camera to the optics of the operating microscope and acquired a pair of images by moving the scope. The stereo imaging system is calibrated to obtain both intrinsic and extrinsic camera parameters. During neurosurgery, right after opening of dura, a pair of stereo images is acquired. The 3-D locations of blood vessels are estimated via stereo vision techniques. The same series of vessels are localized in the preoperative image volume. From these 3-D coordinates, the transformation matrix between preoperative images and the operating space is estimated. Using a phantom, we have demonstrated that patient registration from cortical vessels is not only feasible but also more accurate than using conventional scalp-attached fiducials. The Fiducial Registration Error (FRE) has been reduced from 1 mm using implanted fiducials to 0.3 mm using cortical vessels. By replacing implanted fiducials with cortical features, we can automate the registration procedure and reduce invasiveness to the patient.
Registration algorithms for interventional MRI-guided treatment of the prostate
Baowei Fei, Kristin Frinkley, David L. Wilson
We are investigating interventional MRI (iMRI) guided radiofrequency (RF) thermal ablation for the minimally invasive treatment of prostate cancer. Nuclear medicine and MR spectroscopy can detect and localize tumor in the prostate not reliably seen in MR. We are investigating methods to combine the advantages of functional images such as SPECT with iMRI-guided treatments. Our concept is to first register the low-resolution functional images with a high-resolution MRI volume. Then, by registering the high-resolution MR volume with live-time iMRI acquisitions, we can, in turn, map the functional data and high-resolution anatomic information to iMRI images for improved tumor targeting. To achieve robust, accurate, and fast registration, we extensively compared different registration algorithms for aligning iMRI images with a high-resolution MR volume. In this study, we registered noisy, thick iMRI image slices with high-resolution MR volumes; we call this slice-to-volume registration. We investigated two similarity measures, mutual information and the correlation coefficient, and three interpolation methods: tri-linear, re-normalized sinc, and nearest neighbor. To assess the quality of registration, we calculated 3D displacement on a voxel-by-voxel basis over a volume of interest between slice-to-volume registration and volume-to-volume registration, which was previously shown to be quite accurate for these image pairs. Over 300 registration experiments showed that transverse slice images covering the prostate work best, with a registration error of only 0.4 ± 0.2 mm. Error was greater at other slice orientations and positions. Since live-time iMRI images are used for guidance and registered images are used for adjunctive information, the accuracy and robustness of slice-to-volume registration is very probably adequate.
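Of the two similarity measures compared, mutual information is the less obvious one: it is computed from the joint intensity histogram of the two (candidate-aligned) images and is maximal when one image's intensities predict the other's. A histogram-based sketch (32 bins is an arbitrary choice):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of the joint intensity histogram of two aligned
    images -- the quantity maximised during intensity-based registration."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()                       # joint probability
    px, py = p.sum(1), p.sum(0)                 # marginals
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px[:, None] * py[None, :])[nz])).sum())

rng = np.random.default_rng(1)
a = np.tile(np.linspace(0, 1, 64), (64, 1))        # structured "image"
b = rng.permutation(a.ravel()).reshape(a.shape)    # same intensities, scrambled
mi_aligned = mutual_information(a, a)
mi_scrambled = mutual_information(a, b)
```

Scrambling destroys the statistical dependence without changing either marginal histogram, which is why MI, unlike a direct intensity difference, can compare images from different modalities.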
Three-dimensional correlation of MR images to muscle tissue response for interventional MRI thermal ablation
Michael S. Breen, Roee S. Lazebnik M.D., Jonathan S. Lewin, et al.
Solid tumors and other pathologies are being treated using radio-frequency (RF) ablation under interventional magnetic resonance imaging (iMRI) guidance. In animal experiments, we are investigating the ability of MR to monitor ablation treatments by comparing MR images of thermal lesions to histologically assayed cellular damage. We developed a new methodology using three-dimensional registration for making spatial correlations. A low-field, open MRI system was used to guide an ablation probe into the thigh muscle of 10 rabbits and acquire MR volumes post ablation. After the in vivo MR and histology images were aligned with a registration accuracy of 1.32 ± 0.39 mm (mean ± SD), a boundary of necrosis identified in histology images was compared with manually segmented boundaries of the elliptical hyperintense region in MR images. For 14 MR images, we determined that the outer boundary of the hyperintense region in MR closely corresponds to the region of cell death, with a mean absolute distance between boundaries of 0.97 mm. Since this distance may be less than our ability to measure such differences, boundaries may match perfectly. This is good evidence that MR lesion images can localize the region of cell death during RF ablation treatments.
Intraprocedural Imaging II
Graphical user interface for intraoperative neuroimage updating
Kyle R. Rick, Alex Hartov, David W. Roberts, et al.
Image-guided neurosurgery typically relies on preoperative imaging information that is subject to errors resulting from brain shift and deformation in the OR. A graphical user interface (GUI) has been developed to facilitate the flow of data from OR to image volume in order to provide the neurosurgeon with updated views concurrent with surgery. Upon acquisition of registration data for patient position in the OR (using fiducial markers), the Matlab GUI displays ultrasound image overlays on patient specific, preoperative MR images. Registration matrices are also applied to patient-specific anatomical models used for image updating. After displaying the re-oriented brain model in OR coordinates and digitizing the edge of the craniotomy, gravitational sagging of the brain is simulated using the finite element method. Based on this model, interpolation to the resolution of the preoperative images is performed and re-displayed to the surgeon during the procedure. These steps were completed within reasonable time limits and the interface was relatively easy to use after a brief training period. The techniques described have been developed and used retrospectively prior to this study. Based on the work described here, these steps can now be accomplished in the operating room and provide near real-time feedback to the surgeon.
Generation of attributed relational vessel graphs from three-dimensional freehand ultrasound for intraoperative registration in image-guided liver surgery
We propose a procedure for the intraoperative generation of attributed relational vessel graphs. It provides the prerequisite for a vessel-based registration of a virtual, patient-individual, preoperative, three-dimensional liver model with the intraoperatively deformed liver by graph matching. An image processing pipeline is proposed to extract an abstract representation of the vascular anatomy from intraoperatively acquired three-dimensional ultrasound. The procedure is transferable to other vascularized soft tissues like the brain or the kidneys. We believe that our approach is suitable for intraoperative application as a basis for efficient vessel-based registration of the surgical volume of interest. By reducing the problem of intraoperative registration in visceral surgery to the mapping of corresponding attributed relational vessel graphs, a fast and reliable registration seems feasible even in the depth of deformed vascularized soft tissues such as human livers.
A method for the calibration of 3D ultrasound transducers
Mark Hastenteufel, Sibylle Mottl-Link M.D., Ivo Wolf, et al.
Background: Three-dimensional (3D) ultrasound has great potential in medical diagnostics. However, there are also some limitations of 3D ultrasound, e.g., in some situations morphology cannot be imaged accurately due to acoustic shadows. Acquiring 3D datasets from multiple positions can overcome some of these limitations. Prior to that, a calibration of the ultrasound probe is necessary. Most calibration methods described rely on two-dimensional data. We describe a calibration method that uses 3D data. Methods: We have developed a 3D calibration method based on single-point cross-wire calibration using registration techniques for automatic detection of cross centers. For the calibration, a cross consisting of three orthogonal wires is imaged. A model-to-image registration method is used to determine the cross center. Results: Due to the use of 3D data, fewer acquisitions and no special protocols are necessary. The influence of noise is reduced. By means of the registration method, the time-consuming steps of image plane alignment and manual cross center determination become dispensable. Conclusion: A 3D calibration method for ultrasound transducers is described. The calibration method is the basis for extending state-of-the-art 3D ultrasound devices, i.e., to acquire multiple 3D datasets, either morphological or functional (Doppler).
Spatiotemporal visualization of the tongue surface using ultrasound and kriging
Analyzing the motion of the tongue surface provides valuable information about speech and swallowing. To analyze this motion, two-dimensional ultrasound images are acquired at video frame rates, and the tongue surface is automatically extracted and tracked. Further processing and statistical analysis of the extracted contours is made difficult by: 1) arbitrary spatial shifts and data loss resulting from ultrasound transducer positioning; 2) differences in tongue lengths over time for the same utterance and across subjects; and 3) differences in the sampling locations. To address these shortcomings, we used kriging to extrapolate and resample the tongue surface contours. Kriging was used because it does not lead to the wild oscillations associated with traditional polynomial fitting. For our kriging implementation, we used the generalized covariance function and linear drift functions that are used in thin plate splines. Further, we designed a dedicated user interface called 'SURFACES' that exploits this extrapolation to visualize the contours as spatiotemporal surfaces. These spatiotemporal surfaces can be readily used for statistical comparison and visualization of tongue shapes for different utterances and swallows.
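The interpolation the abstract describes — kriging with the thin-plate-spline generalized covariance and linear drift — can be sketched as follows for a spatiotemporal surface f(x, t). Function names and the regularization parameter are our own assumptions, not the authors' implementation.

```python
import numpy as np

def tps_fit(pts, vals, reg=0.0):
    """Fit a 2D thin-plate spline f with linear drift: vals ≈ f(pts).
    pts: (n, 2) sample sites, e.g. (position, time); vals: (n,) heights."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)   # generalized covariance
    P = np.hstack([np.ones((n, 1)), pts])            # linear drift: 1, x, t
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + reg * np.eye(n)
    A[:n, n:] = P
    A[n:, :n] = P.T
    w = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return w[:n], w[n:]                              # kernel weights, drift coefs

def tps_eval(pts, w, c, q):
    """Evaluate the fitted spline at query sites q of shape (m, 2)."""
    d = np.linalg.norm(q[:, None] - pts[None, :], axis=-1)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    return K @ w + c[0] + q @ c[1:]

# demo: an affine surface is reproduced exactly by the drift term
rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(12, 2))
vals = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]
w, c = tps_fit(pts, vals)
q = np.array([[0.3, 0.7]])
print(tps_eval(pts, w, c, q))   # ≈ [3.7]
```

The side conditions P.T @ w = 0 make the kernel part orthogonal to the drift, which is what prevents the wild extrapolation behavior of high-order polynomial fits.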
Real-time lens distortion correction using texture mapping
Optical lens systems suffer from non-linear radial distortion. Applications such as computer vision and medical imaging require distortion compensation for the accurate location, registration and measurement of image features. While in many applications distortion correction may be applied offline, a real-time capability is desirable for systems that interact with the environment or with a user in real time. The construction of a triangle mesh combined with distortion compensation of the mesh nodes results in a pair of static node co-ordinate sets, which a texture-mapping graphics accelerator can use along with the dynamic distorted image to render high-quality distortion-corrected images at video framerates. Mesh generation, an error analysis, and performance results are presented. The polar-based method proposed in this paper is shown to have both more accuracy than a conventional grid-based approach and greater speed than the traditional method of using the CPU to transform each pixel individually.
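The static node-pair idea above can be sketched as follows: for every node of a regular output mesh, precompute once where in the distorted source image its texture coordinate falls, so the GPU only resamples each frame. We assume a hypothetical one-coefficient radial model here; the paper does not commit to a specific distortion polynomial.

```python
import numpy as np

def distort(xy, k1, center):
    """Map ideal (undistorted) coordinates to distorted image coordinates
    with a one-coefficient radial model: x_d = c + (x - c) * (1 + k1 * r^2)."""
    v = xy - center
    r2 = (v ** 2).sum(axis=-1, keepdims=True)
    return center + v * (1.0 + k1 * r2)

def correction_mesh(w, h, step, k1):
    """Static node pair: a regular grid of output (corrected) node positions
    and, per node, the texture coordinate in the distorted source image."""
    center = np.array([w / 2.0, h / 2.0])
    xs, ys = np.meshgrid(np.arange(0, w + 1, step), np.arange(0, h + 1, step))
    nodes = np.stack([xs, ys], axis=-1).reshape(-1, 2).astype(float)
    tex = distort(nodes, k1, center)   # computed once; reused every frame
    return nodes, tex
```

Because both coordinate sets are static, only the image data changes per frame; the texture-mapping hardware does the per-pixel interpolation, which is where the speedup over per-pixel CPU correction comes from.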
Rendering
icon_mobile_dropdown
Intraoperative neuroimage compensation using data-driven computational models
Loss of coregistration between preoperative imaging studies and the surgical field due to brain deformation during surgery is an important problem that has recently received considerable attention. Methods for compensating or correcting for the loss of congruence between image and surgical views are taking a variety of forms and involve a spectrum of data acquisition techniques and/or image processing schemes. This paper describes an emerging approach to intraoperative image compensation which combines pre- and intraoperative data acquisition with computational biomechanical modeling to estimate full volume deformation distributions that result from neurosurgical interventions. The strategy updates preoperative scans by projecting this displacement estimation onto the coregistered imaging study to deform it into a new image volume which reflects the geometrical changes in the surgical field which have occurred during surgery. Discussion and summary of developments associated with this idea which have appeared in the recent literature are presented.
Modeling and Hepatic Surgery
icon_mobile_dropdown
Quantification of skin motion over the liver using optical tracking
David Thomas Boyd, Jonathan Tang, Filip Banovac M.D., et al.
The purpose of this study was to quantify skin motion over the liver when patients are repositioned during image-guided interventions. Four human subjects with different body habitus lay supine on the interventional radiology table. The subjects held their arms up over their heads and down at their sides for 13 repositioning trials. Precise 3-D locations of the four skin fiducials permitted deformable skin motion to be quantified. For the first two occasions, the average skin motion was 1.00±0.82 mm in the arms-up position and 0.94±0.56 mm in the arms-down position, a small, but not statistically significant difference. Three out of the four subjects exhibited increased skin motion in the arms-up position, suggesting that patient-positioning technique during CT imaging may have an effect on the skin-motion component of registration error in image-guided interventions. The average skin motion was 0.65±0.39 mm for Subject 1 and 1.32±0.78 mm for Subject 2, a significant difference. Subjects 3 and 4 demonstrated a similar amount of skin motion. The subject with the largest body habitus demonstrated significantly less skin motion, an observation that is difficult to explain. The skin fiducial on the xiphoid process exhibited significantly less skin motion than the other fiducials, suggesting that certain anatomic locations could influence motion of the fiducial, and subsequently, the introduced error.
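The motion statistic reported above (mean ± SD of fiducial displacement across repositioning trials) can be computed with a small sketch; the array shapes and names are our own, not the authors' protocol.

```python
import numpy as np

def skin_motion(reference, trials):
    """Mean and SD of skin-fiducial displacement across repositioning trials.
    reference: (n_fid, 3) fiducial positions at imaging time;
    trials: (n_trials, n_fid, 3) positions after each repositioning."""
    d = np.linalg.norm(trials - reference[None], axis=-1)  # (n_trials, n_fid)
    return d.mean(), d.std()
```

Splitting `trials` by posture (arms up vs. arms down) or by subject before calling this gives exactly the kind of per-condition comparisons reported in the abstract.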
Incorporation of a laser range scanner into an image-guided surgical system
David Marshall Cash, Tuhin K. Sinha, William C. Chapman, et al.
Laser range scanners provide rapid and accurate non-contact methods for acquiring 3D surface data, offering many advantages over other techniques currently available during surgery. The range scanner was incorporated into our image-guided surgery system to augment registration and deformation compensation. A rigid body, embedded with IR diodes, was attached to the scanner for tracking in physical space with an optical localization system. The relationship between the scanner's coordinate system and the tracked rigid body was determined using a calibration phantom. Tracking of the scanner using the calibration phantom resulted in an error of 1.4±0.8 mm. Once tracked, data acquired intraoperatively from the range scanner are registered with preoperative tomographic volumes using the Iterative Closest Point algorithm. Sensitivity studies were performed to ensure that this algorithm effectively reached a global minimum. In cases where tissue deformation is significant, rigid registrations can lead to inaccuracy during surgical navigation. Methods of non-rigid compensation may be necessary, and an initial study using a linearly elastic finite element model is presented. Differences between intraoperative and preoperative surfaces after rigid registration are used to formulate boundary conditions, and the resulting displacement field deforms the preoperative image volume. To test this protocol, a phantom was built, consisting of fiducial points and a silicone liver model. Range scan and CT data were captured both before and after deforming the organ. The pre-deformed images, after registration and modeling, were compared to the post-deformation images; there is a noticeable improvement when the finite element model is implemented. To improve accuracy, more elaborate surface registration and deformation compensation strategies will be investigated.
The range scanner is an innovative, uncumbersome, and relatively inexpensive method of collecting intraoperative data. It has been integrated into our image-guided surgical system and software with virtually no overhead.
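The rigid alignment step named above — the Iterative Closest Point algorithm — can be sketched as below. This is textbook ICP with an SVD-based rigid fit, not the authors' implementation, and it omits the sensitivity analysis the abstract describes.

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=30):
    """Basic Iterative Closest Point: align point cloud src to surface points dst."""
    cur = src.copy()
    for _ in range(iters):
        d = np.linalg.norm(cur[:, None] - dst[None, :], axis=-1)
        match = dst[d.argmin(axis=1)]   # closest-point correspondences
        R, t = best_rigid(cur, match)
        cur = cur @ R.T + t
    return cur
```

A real range-scan-to-CT registration would use a spatial data structure (e.g. a k-d tree) for the closest-point search instead of the brute-force distance matrix shown here.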
Robotically assisted intraoperative ultrasound with application to ablative therapy of liver cancer
Emad M. Boctor, Russell H. Taylor, Gabor Fichtinger, et al.
Management of primary and metastatic tumors of the liver remains a significant challenge to the health care community worldwide. There has been an increasing interest in minimally invasive ablative approaches that typically require precise placement of the tissue ablator within the volumetric center of the tumor, in order to achieve adequate destruction. Standard clinical technique involves manual free hand ultrasonography (US) in conjunction with free hand positioning of the tissue ablator. Several investigational systems exist that simultaneously track a transcutaneous ultrasound (TCUS) probe and an ablator and provide visual overlay of the two on a computer screen, and some of those systems also register the TCUS images with pre-operative CT and/or MRI. Unfortunately, existing TCUS systems suffer from many limitations. TCUS fails to identify nearly half of all treatable liver lesions, whereas intraoperative or laparoscopic US provides excellent tissue differentiation. Furthermore, freehand manipulation of the US probe critically lacks the level of control, accuracy, and stability required for guiding liver ablation. Volumetric reconstruction from sparse and irregular 2D image data is suboptimal. Variable pressure from the sonographer's hand also causes anatomic deformation. Finally, maintaining optimal scanning position with respect to the target lesion is critical, but virtually impossible to achieve with freehand guidance. In response to these limitations, we propose the use of a fully encoded dexterous robotic arm to manipulate the US probe during surgery.
Tracking
icon_mobile_dropdown
Dynamic three-dimensional optical tracking of an ablative laser beam
Surgical resection remains the treatment of choice for brain tumors with infiltrating margins but is limited by visual discrimination between normal and neoplastic marginal tissues during surgery. Imaging modalities such as CT, MRI, PET, and optical biopsy techniques can accurately localize tumor margins. We believe coupling the fine resolution of current imaging techniques with the precise cutting of mid-infrared lasers through image-guided neurosurgery can greatly enhance tumor margin resection. This paper describes a feasibility study designed to optically track in three-dimensional space the articulated arm delivery of a non-contact ablative laser beam. Infrared-emitting diodes were attached to the handheld probe of an articulated arm to enable optical tracking of the laser beam focus in the operating room. Crosstalk between the infrared laser beam and the tracking diodes was measured. The geometry of the adapted laser probe was characterized for tracking a makeshift passive tip and laser beam focus. The target localization accuracies for both probe configurations were assessed. Stray laser light did not affect optical tracking accuracy. The mean target registration errors while optically tracking the laser probe with a passive tip and tracking the laser beam focus were 9.24 ± 5.14 and 3.16 ± 1.04 mm, respectively. Analysis of target localization errors indicated that precise optical tracking of a laser beam focus in three-dimensional space is feasible. However, since the projected beam focus is spatially defined relative to the tracking diodes, tracking accuracy is highly sensitive to laser beam delivery geometry and beam trajectory/alignment out of the articulated arm.
Navigation system for flexible endoscopes
Endoscopic Ultrasound (EUS) features flexible endoscopes equipped with a radial or linear array scanhead allowing high resolution examination of organs adjacent to the upper gastrointestinal tract. An optical system based on fibre-glass or a CCD-chip allows additional orientation. However, 3-dimensional orientation and correct identification of the various anatomical structures may be difficult. It therefore seems desirable to merge real-time US images with high resolution CT or MR images acquired prior to EUS to simplify navigation during the intervention. The additional information provided by CT or MR images might facilitate diagnosis of tumors and, ultimately, guided puncture of suspicious lesions. We built a grid with 15 plastic spheres and measured their positions relative to five fiducial markers placed on the top of the grid. For this measurement we used an optical tracking system (OTS) (Polaris, NDI, Can). Two sensors of an electromagnetic tracking system (EMTS) (Aurora, NDI, Can) were mounted on a flexible endoscope (Pentax GG 38 UX, USA) to enable a free hand ultrasound calibration. To determine the position of the plastic spheres in the emitter coordinate system of the EMTS we applied a point-to-point registration (Horn) using the coordinates of the fiducial markers in both coordinate systems (OTS and EMTS). For the transformation between EMTS to the CT space the Horn algorithm was adopted again using the fiducial markers. Visualization was enabled by the use of the AVW-4.0 library (Biomedical Imaging Resource, Mayo Clinic, Rochester/MN, USA). To evaluate the suitability of our new navigation system we measured the Fiducial Registration Error (FRE) of the diverse registrations and the Target Registration Error (TRE) for the complete transformation from the US space to the CT space. The FRE for the ultrasound calibration amounted to 4.3 mm ± 4.2 mm, resulting from 10 calibration procedures.
For the transformation from the OTS reference system to the EMTS emitter space we found an average FRE of 0.8 mm ± 0.2 mm. The FRE for the CT registration was 1.0 mm ± 0.3 mm. The TRE was found to be 3.8 mm ± 1.3 mm when we targeted the same spheres that were used for the calibration procedure. A movement of the phantom results in higher TREs because of the orientation sensitivity of the sensor. In that case, the TRE in the area where the biopsy is to take place was found to be 7.9 mm ± 3.2 mm. Our system provides the interventionist with additional information about the position and orientation of the flexible instrument in use. Additionally, it improves biopsy targeting accuracy. The use of the miniaturized EMTS enables for the first time the navigation of flexible instruments in this way. For the successful application of navigation systems in interventional radiology, an accuracy in the range of 5 mm is desirable. The localization error of a point in CT space currently exceeds this requirement by about 3 mm. One possibility to overcome this difference is to mount the two sensors in such a way that the interference of their electromagnetic fields is minimized. A considerable constraint is the small characteristic volume (360 mm × 600 mm × 600 mm), which for most applications requires an additional optical system.
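The point-to-point registration attributed to Horn above is the closed-form absolute-orientation solution via unit quaternions. A compact sketch (variable names ours) is:

```python
import numpy as np

def horn_register(p, q):
    """Horn's closed-form absolute orientation: find R, t with q ≈ R p + t.
    p, q: (n, 3) matched fiducial coordinates in the two spaces."""
    pc, qc = p - p.mean(0), q - q.mean(0)
    M = pc.T @ qc                                 # cross-covariance matrix
    Sxx, Sxy, Sxz = M[0]
    Syx, Syy, Syz = M[1]
    Szx, Szy, Szz = M[2]
    N = np.array([                                # Horn's symmetric 4x4 matrix
        [Sxx + Syy + Szz, Syz - Szy,       Szx - Sxz,       Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz, Sxy + Syx,       Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       Syy - Sxx - Szz, Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,       Syz + Szy,       Szz - Sxx - Syy]])
    vals, vecs = np.linalg.eigh(N)
    w, x, y, z = vecs[:, vals.argmax()]           # quaternion of best rotation
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])
    t = q.mean(0) - R @ p.mean(0)
    return R, t
```

The eigenvector of the largest eigenvalue of N is the rotation quaternion; the translation then follows from the centroids. The same routine serves both the OTS-to-EMTS and the EMTS-to-CT steps described above.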
Nonlinear averaging method to optimize the accuracy of optical tracking
In image-guided surgery, it is essential to be able to track the positions of instruments in physical space and relate these positions to preoperative or intraoperative images. The most common way of tracking the instruments is using optical tracking; a camera containing two charge-coupled devices calculates the 3D position of IR light emitting diodes, and uses this information to deduce the position and orientation of the instrument. Because the IREDs cannot be localized perfectly, the calculated pose of the instrument, and hence the overlay of the instrument on the image, undergoes continual small changes even when the instrument is stationary. This leads to the user interface of the image guidance system displaying rapid small movements of the probe; we call this phenomenon 'jitter'. Severe jitter can make the image guidance system difficult and bothersome to use and may reduce system accuracy. In this paper, we examine a novel method of overcoming the jitter problem. This method performs a nonlinear average of historical tracking information collected over a given time period. We show the disadvantages of a simpler linear averaging technique; we also use a phantom of known geometry to examine the overall effect that averaging has on system accuracy.
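The abstract does not give the exact form of its nonlinear average; one plausible sketch is a history average with a dead-band that resets on deliberate motion, which suppresses jitter on a stationary probe while avoiding the lag a plain linear (moving-average) filter introduces during real movement. The class below is our own illustration, not the authors' scheme.

```python
import numpy as np

class JitterFilter:
    """Nonlinear pose smoothing sketch: while new samples stay within
    `deadband` of the running average, report the average of the history;
    a larger jump is treated as real motion and resets the history, so
    deliberate moves are passed through without averaging lag."""
    def __init__(self, deadband=0.5, maxlen=30):
        self.deadband, self.maxlen, self.hist = deadband, maxlen, []

    def update(self, sample):
        sample = np.asarray(sample, dtype=float)
        if self.hist and np.linalg.norm(sample - np.mean(self.hist, axis=0)) > self.deadband:
            self.hist = []                      # real motion: drop stale history
        self.hist.append(sample)
        self.hist = self.hist[-self.maxlen:]    # bounded time window
        return np.mean(self.hist, axis=0)
```

A pure linear average of the last N poses would smooth the jitter equally well, but every deliberate probe motion would be reported late by up to N frames — the disadvantage the paper examines.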
Automatic detection, with confidence, of implanted radiographic seeds at megavoltage energies using an amorphous Silicon imager
Jonathan R. Sykes, Philip Whitehurst, Christopher J. Moore
The premise of image-guided radiotherapy is the use of imaging to target the delivery of radiotherapy with high precision. Despite the high resolution of amorphous silicon flat panel imagers, the detection of small implanted radiographic gold markers (length: 5 mm, diameter: 0.8 mm), visualised on portal images with low SNR and the inherent low contrast of mega-voltage photons, remains a significant, safety-critical challenge. Convolution/correlation and sum of the squares of the difference (SSD) detection algorithms make use of marker templates to detect radiographic markers. However, direct convolution is not specific enough and SSD techniques fail in low SNR conditions. This report defines a robust SSD measure operating on a model template-to-clinical convolution image and a semi-empirical template self-convolution image, which is used to assign an objective measure of confidence to individual markers and unambiguously determine the separation of true and false detection distributions. The algorithm was tested on 9 clinical pelvic images produced by placing a template with 14 randomly arranged gold markers on patients during portal imaging. Using 95% confidence limits in a localised regional search for each of the 14 seeds, the number of correct detections averaged 13, while the average number of false detections was less than 1.
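The baseline SSD template detection this report builds on can be sketched as below; the robust confidence assignment via the template self-convolution image is the paper's contribution and is not reproduced here.

```python
import numpy as np

def ssd_map(image, template):
    """Sum of squared differences between a template and every
    template-sized window of the image (valid positions only)."""
    th, tw = template.shape
    H, W = image.shape
    out = np.empty((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = ((image[i:i + th, j:j + tw] - template) ** 2).sum()
    return out

def detect(image, template, thresh):
    """Candidate marker positions: windows whose SSD falls below `thresh`."""
    m = ssd_map(image, template)
    ys, xs = np.where(m <= thresh)
    return list(zip(ys.tolist(), xs.tolist())), m
```

In the low-SNR megavoltage setting described above, a fixed threshold like this fails; the paper's measure instead compares the SSD against distributions of true and false detections to attach a confidence to each candidate.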
A technique for validating image-guided gene therapy
Delivery of gene therapy by injection remains governed by a limited diffusion distance. We propose the use of image guidance to increase the accuracy of delivery, allowing for multiple delivery locations within the tumor. An outcome-based approach to validation was developed. We have developed a series of optically tracked devices, including an optically tracked syringe used for gene therapy delivery. Experiments were designed to quantify the accuracy in recording known points (fiducial localization error) and delivering a substance to a target within a phantom. The second experiment required the design of a rigid structure with mounted fiducials capable of securely holding an apple. This apparatus was CT scanned, and targets in the apple were recorded and inserted in the images. The tracked syringe was guided to the target and a small amount of barium was injected. The apparatus was then re-imaged and the distal points of the injections were determined. The mounted fiducials allow the two image sets to be registered and the distance between the targets and injection points to be calculated. This experiment was also performed on a rat carcass. The apple possesses no intrinsic landmarks that could help guide the syringe to a known location; thus the validation process remains blind to the user.
Modeling II
icon_mobile_dropdown
Human-visual-system-based fusion of multimodality 3D neuroimagery using brain-shift-compensating finite-element-based deformable models
Jacques G. Verly, Lara M. Vigneron, Nicolas Petitjean, et al.
Our goal is to fuse multimodality imagery to enhance image-guided neurosurgery. Images that need to be fused must be registered. Registration becomes a challenge when the imaged object deforms between the times the images to be fused are taken. This is the case when 'brain-shift' occurs. We begin by describing our strategy for nonrigid registration via finite-element methods. Then, we independently discuss an image fusion strategy based on a model of the human visual system. We illustrate the operation of many components of the registration system and the operation of the fusion system.
Intraoperative registration of the liver for image-guided surgery using laser range scanning and deformable models
The development of image-guided surgical systems (IGS) has had a significant impact on clinical neurosurgery and the desire to extend these principles to other surgical endeavors is the next step in IGS evolution. An impediment to its widespread adoption is the realization that the organ of interest often deforms due to common surgical loading conditions. As a result, alignment degradation between patient and the MR/CT image volume can occur which can compromise guidance fidelity. Recently, computational approaches to correct alignment have been proposed within neurosurgery. In this work, these approaches are extended for use within image-guided liver surgery and demonstrate this framework's adaptability. Results from the registration of the preoperative segmented liver surface and the intraoperative liver, as acquired by a laser range scanner, demonstrate accurate visual alignment in regions that deform minimally while in other regions misalignment due to deformations on the order of 1 cm are apparent. A model-updating strategy is employed which uses the closest point operator to compensate for deformations within the patient-specific image volume. The framework presented is an approach whereby laser range scanning coupled to a computational model of soft tissue deformation provide the necessary information to extend IGS principles to intra-abdominal explorative surgery applications.
3D parametric model of lesion geometry for evaluation of MR-guided radiofrequency ablation therapy
Radiofrequency current energy can be used to ablate pathologic tissue. Through magnetic resonance imaging (MRI), real-time guidance and control of the procedure is feasible. For many tissues, resulting lesions have a characteristic appearance with two boundaries enclosing an inner hypo-intense region and an outer hyper-intense margin, in both contrast enhanced T1 and T2 weighted MR images. We created a model having two quadric surfaces and twelve parameters to describe both lesion surfaces. Parameter estimation was performed using iterative optimization such that the sum of the squared shortest distances from segmented points to the model surface was minimized. The method was applied to in vivo image volumes of lesions in a rabbit thigh model. For all in vivo lesions, the mean signed distance from the model surface to segmented boundaries, accounting for the interior or exterior location of points, was approximately zero with standard deviations less than a voxel width (0.7 mm). For all in vivo lesions, the median absolute distance from the model surface to data was <= 0.6 mm for both surfaces. We conclude our model provides a good approximation of actual lesion geometry and should prove useful for three-dimensional lesion visualization, volume estimation, automated segmentation, and volume registration.
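As a simplified stand-in for the twelve-parameter two-quadric model, fitting a single axis-aligned quadric to segmented boundary points by linear least squares illustrates the parameter-estimation idea; the paper's iterative minimization of point-to-surface distances over two general quadrics is richer than this sketch.

```python
import numpy as np

def fit_quadric(pts):
    """Least-squares axis-aligned quadric A x^2 + B y^2 + C z^2 + D x + E y + F z = 1
    fitted to boundary points pts of shape (n, 3)."""
    x, y, z = pts.T
    M = np.column_stack([x * x, y * y, z * z, x, y, z])
    coef, *_ = np.linalg.lstsq(M, np.ones(len(pts)), rcond=None)
    return coef

def ellipsoid_params(coef):
    """Recover center and semi-axes from the fitted coefficients
    by completing the square in each coordinate."""
    A, B, C, D, E, F = coef
    center = np.array([-D / (2 * A), -E / (2 * B), -F / (2 * C)])
    k = 1 + A * center[0] ** 2 + B * center[1] ** 2 + C * center[2] ** 2
    return center, np.sqrt(np.array([k / A, k / B, k / C]))
```

Minimizing this algebraic residual is linear and fast but weights points unevenly; the geometric (shortest-distance) objective the authors use requires iterative optimization, as the abstract states.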
Stereoscopic representation of the breast from two mammographic views with external markers
Maria Kallergi, Anand Manohar
A new breast imaging technique has been developed and tested for the stereoscopic representation of the breast. The method uses markers at specific locations on the breast surface and standard mammographic projections, and was tested with an anthropomorphic phantom containing five mass-like objects at locations determined by a CT scan. The phantom was imaged with a GE Senographe 2000D digital system with and without the markers. The algorithm's modules included: 1) Breast area segmentation; 2) Pectoral muscle segmentation; 3) Registration and alignment of the mammographic projections based on selected reference points; 4) Breast volume estimation based on the volume conservation principle during compression and shape definition using surface points; 5) 3D lesion(s) localization and representation. An interactive, ILD-based, graphical interface was also developed for the stereoscopic display of the breast. The reconstruction algorithm assumed that the breast shrinks and stretches uniformly when compression is applied and removed. The relative movement of the markers after compression allowed more accurate estimation of the shrinking and stretching of the surface, offering a relatively simple and practical way to improve volume estimation and surface reconstruction. Such stereoscopic representation of the breast and associated findings may improve radiological interpretation and physical examinations for breast cancer diagnosis.
Image-guided breast biopsy using 3D ultrasound and stereotactic mammography
Kathleen J. M. Surry, Greg R. Mills, Kirk Bevan, et al.
A 3D ultrasound (US)-guided biopsy system was developed to supplement stereotactic mammography (SM) with near real-time 3D and real-time 2D US imaging. We have combined features from SM and US guided biopsy, including breast stabilisation, a confined needle trajectory and dual modality imaging. We have evaluated our procedure using breast phantoms, in terms of its accuracy with US-guided biopsy. Phantoms made of animal tissue with embedded phantom 'lesions' allowed us to test the biopsy accuracy of our procedure. We have also registered the SM image space to US image space, and both spaces to the mechanical geometry of the needle trajectory. Evaluation experiments have shown that our US-guided biopsy procedure was capable of placing the needle tip with 0.85 mm accuracy at a target identified in the 3D image. We also identified that we could successfully biopsy artificial lesions that were 3.2 mm in diameter, with a 96% success rate. As an adjunct to stereotactic mammography, we propose that this system could provide more complete information for target identification and real-time monitoring of needle insertion, as well as providing a means for rapid confirmation of biopsy success with 3D ultrasound.
Augment/Virtual Reality
icon_mobile_dropdown
Augmented reality system for CT-guided interventions: system description and initial phantom trials
Frank Sauer, Uwe Joseph Schoepf, Ali Khamene, et al.
We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appear firmly anchored in the scene, without any noticeable swimming or jitter or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.
Computer-aided liver surgery planning: an augmented reality approach
Surgical resection of liver tumors requires a detailed three-dimensional understanding of a complex arrangement of vasculature, liver segments and tumors inside the liver. In most cases, surgeons need to develop this understanding by looking at sequences of axial images from modalities like X-ray computed tomography. A system for liver surgery planning is reported that enables physicians to visualize and refine segmented input liver data sets, as well as to simulate and evaluate different resection plans. The system supports surgeons in finding the optimal treatment strategy for each patient and eases the data preparation process. The use of augmented reality contributes to a user-friendly design and simplifies complex interaction with 3D objects. The main function blocks developed so far are: basic augmented reality environment, user interface, rendering, surface reconstruction from segmented volume data sets, surface manipulation and quantitative measurement toolkit. The flexible design allows functionality to be added via plug-ins. First practical evaluation steps have shown good acceptance. Evaluation of the system is ongoing and future feedback from surgeons will be collected and used for design refinements.
Intraoperative augmented reality for minimally invasive liver interventions
Michael Scheuering, Andrea Schenk, Armin Schneider, et al.
Minimally invasive liver interventions demand a great deal of experience due to the limited access to the field of operation. In particular, the correct placement of the trocar and navigation within the patient's body are hampered. In this work, we present an intraoperative augmented reality system (IARS) that directly projects preoperatively planned information and structures extracted from CT data onto the real laparoscopic video images. Our system consists of a preoperative planning tool for liver surgery and an intraoperative real-time visualization component. The planning software takes into account the individual anatomy of the intrahepatic vessels and determines the vascular territories. Methods for fast segmentation of the liver parenchyma, the intrahepatic vessels and liver lesions are provided. In addition, very efficient algorithms for skeletonization and vascular analysis, allowing the approximation of patient-individual liver vascular territories, are included. The intraoperative visualization is based on a standard graphics adapter for hardware-accelerated, high-performance direct volume rendering. The preoperative CT data is rigidly registered to the patient position by the use of fiducials attached to the patient's body and anatomical landmarks, in combination with an electro-magnetic navigation system. Our system was evaluated in vivo during a minimally invasive intervention simulation in a swine under anesthesia.
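Rigid registration of preoperative CT to intraoperative fiducial positions, as described above, is commonly solved as a least-squares point-correspondence problem. The sketch below uses the standard SVD-based (Kabsch) closed form; this is an illustrative assumption, since the abstract does not name the solver it actually employs:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src fiducials onto dst.

    src, dst: (N, 3) arrays of corresponding fiducial positions.
    Returns rotation R (3x3) and translation t (3,) with dst ~ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Example: recover a known 30-degree rotation about z plus a translation.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
fiducials = np.array([[0, 0, 0], [100, 0, 0], [0, 100, 0], [0, 0, 100.0]])
measured = fiducials @ R_true.T + t_true
R, t = rigid_register(fiducials, measured)
# Fiducial registration error (FRE): mean residual after alignment.
fre = np.linalg.norm(fiducials @ R.T + t - measured, axis=1).mean()
```

With noiseless correspondences the recovered transform is exact to machine precision; in practice the residual FRE is the usual quality check before trusting the overlay.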
New continuous level-of-detail algorithm and its application in virtual endoscopy
With the increasing size of medical image datasets, the 3D models obtained by reconstruction often contain millions of triangles, which makes real-time rendering very difficult. Progressive Meshes (PM) were developed to address this problem of view-dependent level-of-detail control, but their speed cannot meet the requirements of virtual endoscopy. In this study, we developed a new view-dependent continuous level-of-detail (CLOD) algorithm for triangle meshes with subdivision connectivity. First, the mesh is simplified hierarchically to obtain the simplest mesh (called the base domain); then each level of the simplified hierarchy is parameterized and mapped to the base domain; finally, view-dependent subdivision is used to resample the mesh into a multi-resolution model. We construct an adaptive-octree index that records changes in the view parameters, so that results from adjacent frames can be reused and dynamic changes in the selected levels of detail are reduced. We tested our algorithm on several different datasets. The experiments showed that our method is efficient and easy to implement, and that the model can be rendered in real time, meeting the requirements of virtual endoscopy.
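View-dependent level-of-detail selection of the kind described above is usually driven by screen-space error: a region is subdivided until a base-mesh edge projects to no more than a pixel budget. The following minimal sketch illustrates the idea; the projection model, thresholds and parameter names are assumptions for illustration, not the paper's actual criterion:

```python
import math

def lod_level(edge_world, distance, fov_deg, viewport_px, err_px, max_level):
    """Pick a subdivision level for a region of a subdivision-connectivity
    mesh. Each level halves the edge length, so the level is the number of
    halvings needed for the projected base edge to fit the error budget."""
    # Pixels per world unit at this distance (symmetric perspective camera).
    px_per_world = viewport_px / (2.0 * distance
                                  * math.tan(math.radians(fov_deg) / 2.0))
    projected = edge_world * px_per_world    # projected edge length in pixels
    if projected <= err_px:
        return 0
    return min(max_level, math.ceil(math.log2(projected / err_px)))

# A nearby region needs a finer level than a distant one.
near = lod_level(edge_world=10.0, distance=5.0, fov_deg=60.0,
                 viewport_px=1024, err_px=2.0, max_level=6)
far = lod_level(edge_world=10.0, distance=500.0, fov_deg=60.0,
                viewport_px=1024, err_px=2.0, max_level=6)
```

Caching these per-region decisions in an octree, as the abstract describes, lets adjacent frames reuse levels whose view parameters have not changed enough to alter the result.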
Implicit functions: applications in medicine and biology
Jean-Marie Bouteiller, Benjamin Wu, Michel Baudry
3D reconstructions constitute a valuable tool in medicine and biology. They allow researchers to further analyze, compare and register anatomical data. In medical cases, they allow a better understanding of the condition to treat and may increase the chances of an accurate diagnosis, surgery and treatment. In these fields, the initial dataset mostly consists of parallel sections. The accuracy of this dataset usually suffers from technological limitations of the acquisition system. These limitations can result in low accuracy within the plane of sections, but can also limit the number of sections. The resulting undersampling constitutes a major problem for the subsequent 3D reconstruction. Many reconstruction methods exist. This paper provides a qualitative and quantitative comparison of several of these methods. We present the perspectives introduced by the use of implicit functions, in terms of accuracy but also in their application to higher-dimensional problems. The main dataset consists of neuroanatomical data, but other examples are also provided. Emphasis is placed on undersampled initial data and the reconstruction of structures with complex geometry.
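One classical implicit-function approach to undersampled parallel sections is shape-based interpolation: each binary section is converted to a signed distance map (an implicit representation of its contour), and missing sections are synthesized by interpolating the distance values. The sketch below is illustrative only; the paper compares several reconstruction methods and does not prescribe this particular one:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(mask):
    """Implicit contour representation: positive inside, negative outside."""
    return distance_transform_edt(mask) - distance_transform_edt(~mask)

def interpolate_slice(mask_a, mask_b, alpha):
    """Shape-based interpolation between two binary sections.

    alpha in [0, 1]: 0 reproduces mask_a's shape, 1 reproduces mask_b's;
    intermediate values blend the implicit functions and re-threshold."""
    d = (1 - alpha) * signed_distance(mask_a) + alpha * signed_distance(mask_b)
    return d > 0

# Two circular cross sections of different radii on adjacent slices:
# the interpolated section should lie between them in size.
yy, xx = np.mgrid[:64, :64]
small = (xx - 32) ** 2 + (yy - 32) ** 2 <= 8 ** 2
large = (xx - 32) ** 2 + (yy - 32) ** 2 <= 16 ** 2
mid = interpolate_slice(small, large, 0.5)
```

Unlike grey-level interpolation, this scheme morphs the shape itself, which is why it behaves well on sparsely sectioned structures.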
Medical Displays
icon_mobile_dropdown
Accurate measurement of monochrome luminance palettes for the calibration of medical LCD monitors
Grayscale medical monitors are commonly calibrated by transforming the image display values sent to a graphic controller using a lookup table (LUT). The calibration LUT is deduced from the uncalibrated luminance response (uLR) of the display system. The uLR of liquid crystal display (LCD) systems is poorly behaved with significant discontinuities occurring in the relative luminance changes. Accurate grayscale calibration of LCD devices thus requires a measurement of the luminance for the full palette of possible output values. A method is reported to acquire the uLR of LCD displays, generate a LUT to achieve precise calibration, and assess the accuracy of the calibration results. A palette of 766 luminance values can be measured in 12 minutes. The accuracy of the method permits the evaluation of relative luminance changes, dL/L, to be made with a precision of .0002 to .0007 for luminance values between 1000 and 1 cd/m2. For seven LCD monitors, 766 values for the uLR were measured and calibration tables deduced. The calibrated luminance response (cLR) for 256 gray values was then compared to the DICOM standard. The root mean squared error of the observed JNDs per luminance interval values ranged from .37 to .59 which is less than the AAPM recommended value of 1.0. A full calibration of this type should be done at installation. However, the stability of LCD systems suggests that periodic recalibration will not be necessary.
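The final step described above, deducing a calibration LUT from the measured palette, amounts to choosing, for each of the 256 presentation values, the driving level whose measured luminance best matches the target display function. A hedged sketch follows; the toy response curve and the log-spaced target are stand-ins for a real measured uLR and for luminances sampled from the DICOM Grayscale Standard Display Function:

```python
import numpy as np

def build_calibration_lut(measured, target):
    """Map each presentation value to the display driving level whose
    measured luminance is closest to the target luminance.

    measured: luminances (cd/m2) for every level of the full palette
              (e.g. 766 values), assumed monotonically non-decreasing.
    target:   desired luminances for the 256 presentation values.
    """
    measured = np.asarray(measured, dtype=float)
    target = np.asarray(target, dtype=float)
    return np.abs(measured[None, :] - target[:, None]).argmin(axis=1)

# Toy palette: 766 driving levels with a smooth nonlinear response
# spanning 1 to 1000 cd/m2 (stand-in for a measured uLR).
ddl = np.arange(766)
measured = 1000.0 ** (ddl / 765.0)
# Stand-in target: 256 luminances equally spaced in log luminance
# (a real calibration would sample the DICOM GSDF between Lmin and Lmax).
target = np.logspace(0, 3, 256)
lut = build_calibration_lut(measured, target)
```

A monotonic LUT is essential here; discontinuities in the uLR are exactly why the full palette must be measured rather than interpolated from a few points.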
Characterization of liquid-crystal displays for medical images: II
Hartwig R. Blume, Peter M. Steven, Anne Marie K. Ho, et al.
The paper presents methodologies for characterizing liquid crystal displays (LCDs) and the image quality of two new high-performance monochrome LCDs, a 2- and a 5-million-pixel display. The systems' image quality is described by on-axis characteristic curves, luminance range and contrast, luminance and contrast as a function of viewing angle, diffuse and specular reflection coefficients, color coordinates, luminance uniformity across the display screen, temporal response time and temporal modulation transfer function (MTF), spatial MTF, spatial noise power spectra and signal-to-noise ratios. The LCDs are equipped with an internal photosensor that maintains a desired maximum luminance and calibration to a given display function. The systems offer aperture and temporal modulation to place luminance levels with more than 12-bit precision on a desired display function and achieve very uniform contrast distribution over the luminance range. The LCDs have image quality that is superior in many respects to high-performance and high-resolution cathode-ray-tube (CRT) displays, except for the temporal MTF and the spatial noise. Spatial noise appears to be comparable to CRT display systems with P4 or P104 phosphor screens.
Effect of viewing angle on visual detection in liquid crystal displays
Display devices for medical diagnostic workstations should have diffuse emission with apparent luminance independent of viewing angle. Such displays are called Lambertian; they obey Lambert's law. Actual display devices are never truly Lambertian: the luminance of a pixel depends on the viewing angle. In active-matrix liquid crystal displays (AMLCD), the departure from the Lambertian profile depends on the gray level and on complex pixel designs employing multiple domains, in-plane switching, or vertically-aligned technology. Our previous measurements established that the largest deviation from the desired Lambertian distribution occurs in the low luminance range for the diagonal viewing direction. Our purpose in this work is to determine the effect that non-uniform changes of the angular emission have on the detection of low-contrast signals in noisy backgrounds. We used a sequential two-alternative forced choice (2AFC) approach with test images displayed at the center of the screen. The observer location was fixed at different viewing angles: on-axis and off-axis. The results are expressed in terms of percent correct for each observer and each experimental condition (viewing angle and luminance). Our results show that, for the test images used in this experiment with human observers, the changes in detectability between on-axis and off-axis viewing are smaller than the observer variability. Model observers are consistent with these results but also indicate that different background and signal levels can lead to meaningful performance differences between on-axis and off-axis viewing.
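A 2AFC experiment with a model observer reduces to comparing template responses on paired images and counting correct choices. The simulation below is a minimal sketch: the matched-filter observer, the Gaussian blob signal, and the white-noise model are illustrative assumptions, not the stimuli or observer models used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_2afc(signal, n_trials, noise_sigma):
    """Sequential 2AFC with a matched-filter model observer.

    Each trial shows two noisy images, one containing the signal; the
    observer picks the image with the larger template response.
    Returns the fraction of correct choices (percent correct / 100)."""
    correct = 0
    for _ in range(n_trials):
        noise_only = rng.normal(0.0, noise_sigma, signal.shape)
        signal_plus = signal + rng.normal(0.0, noise_sigma, signal.shape)
        # Template = known signal (the "non-prewhitening" ideal strategy
        # for white noise).
        if (signal * signal_plus).sum() > (signal * noise_only).sum():
            correct += 1
    return correct / n_trials

# Low-contrast Gaussian blob on a 32x32 field, at two noise levels.
yy, xx = np.mgrid[:32, :32]
blob = 0.5 * np.exp(-((xx - 16) ** 2 + (yy - 16) ** 2) / (2 * 3.0 ** 2))
pc_easy = run_2afc(blob, 2000, noise_sigma=0.2)
pc_hard = run_2afc(blob, 2000, noise_sigma=2.0)
```

Lowering the effective contrast-to-noise ratio (as an off-axis luminance shift would) drives the percent correct from near 1.0 toward the 0.5 guessing floor, which is the quantity compared across viewing angles.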
Clinical verification of TG18 methodology for display quality evaluation
The American Association of Physicists in Medicine Task Group 18 (TG18) has recently developed guidelines for objective performance evaluation of medical displays. This paper reports on the first multi-institutional trial focusing on the implementation and clinical verification of the TG18 methodology for performance testing of medical image display devices in use at different clinical centers. A minimum of two newly-installed PACS display devices were tested at each institution. The devices represented a broad spectrum of makes and models of 1-5 megapixel CRT and LCD display devices. They were all either new or in clinical use for primary diagnosis with acceptable performance at the time of testing. The TG18 test patterns were loaded on all the systems. Visual and quantitative tests were performed according to the guidelines for assessing specific display quality characteristics including geometrical distortion, reflection, luminance response, luminance uniformity, resolution, noise, veiling glare, color uniformity, and display artifacts. The results were collected in a common database. For each test, the results and their variability were compared to the recommended acceptance criteria. The findings indicated that TG18 tests and guidelines can easily be implemented in clinical settings. Most recommended criteria were deemed appropriate, while minor modifications were suggested.
Simulation and Planning
icon_mobile_dropdown
Semi-automated evaluation of high-resolution MRI for preoperative cochlear implant screening
Mambidzeni Madzivire, Jon J. Camp, John Lane M.D., et al.
The success of cochlear implants is contingent on a functioning auditory nerve. An accurate non-invasive method of screening cochlear implant candidates for quantitative measurement of auditory nerve viability would allow physicians to better determine the likelihood for success of the procedure. Previous studies have indicated a relationship between auditory nerve diameters and their functionality. In order to investigate this finding, we made morphological measurements of the auditory and facial nerves and correlated these measurements with audiologic test results. In addition, we developed a technique to segment a portion of the nerves with minimal user interaction. The study included 11 cochlear implant candidates. Non-invasive high-resolution bilateral MR images were acquired from 3T and 1.5T scanners using either CISS or Fast Spin Echo sequences. The images were processed with an anisotropic diffusion filter to enhance the edges of the nerves. Segmentation involved morphological processing of the original filtered image to produce a related binary image, which was then subtracted from the original image at a suitable threshold to isolate the auditory and facial nerves from other structures in the internal auditory canal. The volumes of the auditory nerve and the facial nerve were computed from the five best continuously segmented sagittal slices, and the corresponding ratios were then determined. Preliminary analysis of the segmentation process suggests that this method is most effective on images acquired using the CISS sequence. Correlation of the measurements of the subjects to the findings of collaborating audiologists was carried out. Preliminary results suggest there is a threshold of ratios below which a value is indicative of a degenerated nerve, and consequently a higher risk of an unsuccessful cochlear implant. A semi-automated segmentation technique was developed which allows effective segmentation of multiple slices of MRI data. Since the segmentation and measurement processes require little user interaction, the results are highly reproducible. In addition, volume measurements increase accuracy. This technique requires less than ten minutes for completion of one case by an experienced operator. This is a promising technique that should allow accurate, reproducible and rapid segmentation of the auditory and facial nerves for volume measurements in assessment of nerve viability. The information provided by these measurements can assist physicians in determining, before the procedure, the likely efficacy of a cochlear implant.
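Morphological processing followed by subtraction and thresholding, as in the segmentation step above, is the classic white top-hat transform: a grey-scale opening removes structures narrower than the structuring element, and subtracting it leaves only those thin bright structures. The sketch below is a generic illustration with a synthetic image, not the authors' pipeline (which also includes anisotropic diffusion filtering):

```python
import numpy as np
from scipy import ndimage

def top_hat_segment(image, size, threshold):
    """White top-hat segmentation: subtract a grey-scale opening so that
    bright structures narrower than `size` pixels (e.g. thin nerves)
    survive thresholding while broad bright regions are suppressed."""
    background = ndimage.grey_opening(image, size=(size, size))
    return (image - background) > threshold

# Toy image: a thin 2-pixel-wide bright band (the "nerve") next to a
# broad bright region (other canal contents) on a dark background.
img = np.zeros((40, 40))
img[:, 5:7] = 100.0        # thin structure: survives the top-hat
img[:, 20:35] = 100.0      # broad structure: removed as "background"
mask = top_hat_segment(img, size=5, threshold=50.0)
```

The structuring-element size encodes the maximum width of the structures to keep, which is why the method needs only a threshold and a seed-free, nearly interaction-free workflow.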
LORENZ: a system for planning long-bone fracture reduction
Wolfgang Birkfellner, Wolfgang Burgstaller, Joachim Wirth, et al.
Long bone fractures belong to the most common injuries encountered in clinical routine trauma surgery. Preoperative assessment and decision making is usually based on standard 2D radiographs of the injured limb. Taking into account that a 3D imaging modality such as computed tomography (CT) is not used for diagnosis in clinical routine, we have designed LORENZ, a fracture reduction planning tool based on such standard radiographs. Given the considerable success of so-called image-free navigation systems for total knee replacement in orthopaedic surgery, we assume that a similar tool for long bone fracture reposition should have considerable impact on computer-aided trauma surgery in a standard clinical routine setup. The case of long bone fracture reduction is, however, somewhat more complicated, since not only scale-independent angles indicating biomechanical measures such as varus and valgus are involved. Reduction path planning requires that the individual anatomy and the classification of the fracture be taken into account. In this paper, we present the basic ideas of this planning tool, its current state, and the methodology chosen. LORENZ takes one or more conventional radiographs of the broken limb as input data. In addition, one or more x-rays of the opposite healthy bone are taken and mirrored if necessary. The most adequate CT model is selected from a database; currently, this is achieved by using a scale-space approach on the digitized x-ray images and comparing standard perspective renderings to these x-rays. After finding a CT volume with a similar bone, a triangulated surface model is generated, and the surgeon can break the bone and arrange the fragments in 3D according to the x-ray images of the broken bone. Common osteosynthesis plates and implants can be loaded from CAD datasets and are visualized as well. In addition, LORENZ renders virtual x-ray views of the fracture reduction process.
The hybrid surface/voxel rendering engine of LORENZ also features full collision detection of fragments and implants using the RAPID collision detection library. The reduction path is saved, and a TCP/IP interface to a robot for executing the reduction was added. LORENZ is platform independent and was programmed using Qt, AVW and OpenGL. We present a prototype for computer-aided fracture reduction planning based on standard radiographs. First tests on clinical CT/X-ray image pairs showed good performance; current efforts focus on improving the speed of model retrieval by using orthonormal image moment decomposition, and on clinical evaluation for both training and surgical planning purposes. Furthermore, user-interface aspects are currently under evaluation and will be discussed.
Digital dissection system for medical school anatomy training
Kurt E. Augustine, Wojciech Pawlina, Stephen W. Carmichael, et al.
As technology advances, new and innovative ways of viewing and visualizing the human body are developed. Medicine has benefited greatly from imaging modalities that provide ways for us to visualize anatomy that cannot be seen without invasive procedures. As long as medical procedures include invasive operations, students of anatomy will benefit from the cadaveric dissection experience. Teaching proper technique for dissection of human cadavers is a challenging task for anatomy educators. Traditional methods, which have not changed significantly for centuries, include the use of textbooks and pictures to show students what a particular dissection specimen should look like. The ability to properly carry out such highly visual and interactive procedures is significantly constrained by these methods. The student receives a single view and has no idea how the procedure was carried out. The Department of Anatomy at Mayo Medical School recently built a new, state-of-the-art teaching laboratory, including data ports and power sources above each dissection table. This feature allows students to access the Mayo intranet from a computer mounted on each table. The vision of the Department of Anatomy is to replace all paper-based resources in the laboratory (dissection manuals, anatomic atlases, etc.) with a more dynamic medium that will direct students in dissection and in learning human anatomy. Part of that vision includes the use of interactive 3-D visualization technology. The Biomedical Imaging Resource (BIR) at Mayo Clinic has developed, in collaboration with the Department of Anatomy, a system for the control and capture of high resolution digital photographic sequences which can be used to create 3-D interactive visualizations of specimen dissections. The primary components of the system include a Kodak DC290 digital camera, a motorized controller rig from Kaidan, a PC, and custom software to synchronize and control the components. 
For each dissection procedure, the images are captured automatically, and then processed to generate a Quicktime VR sequence, which permits users to view an object from multiple angles by rotating it on the screen. This provides 3-D visualizations of anatomy for students without the need for special '3-D glasses' that would be impractical to use in a laboratory setting. In addition, a digital video camera may be mounted on the rig for capturing video recordings of selected dissection procedures being carried out by expert anatomists for playback by the students. Anatomists from the Department of Anatomy at Mayo have captured several sets of dissection sequences and processed them into Quicktime VR sequences. The students are able to look at these specimens from multiple angles using this VR technology. In addition, the student may zoom in to obtain high-resolution close-up views of the specimen. They may interactively view the specimen at varying stages of dissection, providing a way to quickly and intuitively navigate through the layers of tissue. Electronic media has begun to impact all areas of education, but a 3-D interactive visualization of specimen dissections in the laboratory environment is a unique and powerful means of teaching anatomy. When fully implemented, anatomy education will be enhanced significantly by comparison to traditional methods.
Virtual tomography: a new approach to efficient human-computer interaction for medical imaging
Michael Teistler, Oliver J. Bott, Jochen Dormeier, et al.
By utilizing virtual reality (VR) technologies, the computer system virtusMED implements the concept of virtual tomography for exploring medical volumetric image data. Photographic data from a virtual patient as well as CT or MRI data from real patients are visualized within a virtual scene. The view of this scene is determined either by a conventional computer mouse, a head-mounted display or a freely movable flat panel. A virtual examination probe is used to generate oblique tomographic images which are computed from the given volume data. In addition, virtual models can be integrated into the scene, such as anatomical models of bones and inner organs. virtusMED has proven to be a valuable tool for learning human anatomy and for understanding the principles of medical imaging such as sonography. Furthermore, its utilization to improve CT- and MRI-based diagnosis is very promising. Compared to VR systems of the past, the standard PC-based system virtusMED is a cost-efficient and easily maintained solution providing a highly intuitive, time-saving user interface for medical imaging.
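Generating an oblique tomographic image from volume data, as the virtual probe does, is a plane-resampling problem: sweep a grid of points over the probe plane and interpolate the volume at those points. A minimal sketch using trilinear interpolation (the plane parameterization and toy volume are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def oblique_slice(volume, origin, u, v, shape):
    """Resample an oblique plane from a volume.

    origin: corner of the slice in voxel coordinates (z, y, x).
    u, v:   in-plane direction vectors (voxel units per output pixel).
    shape:  (rows, cols) of the output image.
    """
    origin = np.asarray(origin, dtype=float)
    rows, cols = np.mgrid[:shape[0], :shape[1]]
    pts = (origin[:, None, None]
           + rows[None] * np.asarray(u, dtype=float)[:, None, None]
           + cols[None] * np.asarray(v, dtype=float)[:, None, None])
    return map_coordinates(volume, pts, order=1)   # trilinear interpolation

# Toy volume whose value equals its z index, so an axial slice is flat
# and a 45-degree oblique slice ramps along one output axis.
vol = np.broadcast_to(np.arange(32.0)[:, None, None], (32, 32, 32)).copy()
sl = oblique_slice(vol, origin=(10.0, 0.0, 0.0),
                   u=(1 / np.sqrt(2), 1 / np.sqrt(2), 0.0),   # tilted row axis
                   v=(0.0, 0.0, 1.0),                          # column axis
                   shape=(16, 32))
```

Because the probe pose only changes `origin`, `u` and `v`, the same resampling call serves every probe position, which is what makes the interaction real-time on a standard PC.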
Realistic prediction of individual facial emotion expressions for craniofacial surgery simulations
Evgeny Gladilin, Stefan Zachow, Peter Deuflhard, et al.
In addition to the static soft tissue prediction, the estimation of individual facial emotion expressions is an important criterion for the evaluation of craniofacial surgery planning. In this paper, we present an approach for the estimation of individual facial emotion expressions on the basis of geometrical models of human anatomy derived from tomographic data and finite element modeling of facial tissue biomechanics.
Volumetric treatment planning and image guidance for radiofrequency ablation of hepatic tumors
Kevin Robert Cleary, Daigo Tanaka, David Stewart, et al.
This paper describes a computer program for volumetric treatment planning and image guidance during radiofrequency (RF) ablation of hepatic tumors. The procedure is performed by inserting an RF probe into the tumor under image guidance and generating heat to 'cook' a spherical region. If the tumor is too large to be ablated in a single burn, then multiple overlapping spherical burns are needed to encompass the entire target area. The computer program is designed to assist the physician in planning the sphere placement, as well as provide guidance in placing the probe using a magnetic tracking device. A pre-operative CT scan is routinely obtained before the procedure. On a slice by slice basis, the tumor, along with a 1 cm margin, is traced by the physician using the computer mouse. Once all of the images are traced, the program provides a three-dimensional rendering of the tumor. The minimum number of spheres necessary to cover the target lesion and the 1 cm margin are then computed by the program and displayed on the screen.
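Covering a traced target region (tumor plus margin) with the fewest fixed-radius burns is a set-cover-style problem. The sketch below uses a simple greedy heuristic on a voxelized mask; this is a stand-in for illustration, as the abstract does not state which placement algorithm the program uses:

```python
import numpy as np

def plan_burns(target_mask, radius_vox):
    """Greedy cover of a voxelized target with spheres of fixed radius.

    Repeatedly centers a burn at the target voxel whose sphere would
    remove the most remaining target, until everything is covered.
    Candidate centers are restricted to target voxels for simplicity."""
    centers = []
    remaining = target_mask.copy()
    zz, yy, xx = np.indices(target_mask.shape)
    while remaining.any():
        best, best_count = None, -1
        for c in np.argwhere(remaining):
            ball = ((zz - c[0]) ** 2 + (yy - c[1]) ** 2
                    + (xx - c[2]) ** 2) <= radius_vox ** 2
            n = int((ball & remaining).sum())
            if n > best_count:
                best, best_count = (ball, c), n
        ball, c = best
        centers.append(tuple(c))
        remaining &= ~ball      # voxels covered by this burn are done
    return centers

# An elongated 12-voxel lesion that a radius-3 burn cannot cover alone.
lesion = np.zeros((16, 16, 16), dtype=bool)
lesion[8, 8, 2:14] = True
centers = plan_burns(lesion, radius_vox=3)
```

Greedy cover is not guaranteed minimal, but it is a standard, fast baseline for the overlapping-burn planning the abstract describes.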
Poster Session
icon_mobile_dropdown
Prostate segmentation in ultrasound images with deformable shape priors
Automated prostate segmentation in ultrasound images is a challenging task due to speckle noise, missing edge segments, and complex prostate peripheral anatomy. In this paper, a Bayesian prostate segmentation algorithm is presented. It combines both prior shape and image information for robust segmentation. In this study, the prostate shape was efficiently modeled using a deformable superellipse. A flexible graphical user interface has been developed to facilitate the validation of our algorithm in a clinical setting. The algorithm was applied to 66 ultrasound images collected from 8 patients. The resulting mean error between the computer-generated boundaries and the manually-outlined boundaries was 1.39 ± 0.60 mm, which is significantly less than the variability between human experts.
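A superellipse shape prior is attractive because the whole boundary is captured by a handful of parameters. The sketch below samples a superellipse boundary from those parameters; it illustrates only the shape model, not the Bayesian fitting procedure, and the parameter names are assumptions:

```python
import numpy as np

def superellipse(a, b, eps, n=200, center=(0.0, 0.0), angle=0.0):
    """Sample n points on a superellipse boundary.

    Implicit form: |x/a|^(2/eps) + |y/b|^(2/eps) = 1.
    eps = 1 gives an ordinary ellipse; eps < 1 gives squarer shapes,
    eps > 1 pinched ones. A deformable prior would fit
    (a, b, eps, center, angle) to the image evidence.
    """
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # Signed-power parameterization of the superellipse.
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** eps
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** eps
    c, s = np.cos(angle), np.sin(angle)
    return center[0] + c * x - s * y, center[1] + s * x + c * y

# A slightly "square" prostate-like contour, 40 x 30 pixels across.
xs, ys = superellipse(20.0, 15.0, 0.8)
```

Because every sampled point satisfies the implicit equation exactly, the fitted parameters double as a compact description of the segmented gland.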
Cardiac modeling using the parameterized super-quadric and the contractility
Representation methods for cardiac motility were developed in this study. Parameters characterizing cardiac function were estimated for modeling with a novel scheme. A parameterized superquadric model for visualizing the motion of the left ventricle was implemented with OpenGL and Visual C++. Myocardial wall thickening was displayed with a super-ellipsoidal model, in which the measured thickening count changes over the time frames, and motility was added as an additional parameter of the parameterized superquadric model. We conducted an experiment analyzing the motility of the left-ventricular myocardium. The criterion was tested in a validation study of 7 normal subjects and 26 patients with prior myocardial infarction. To analyze motility, we used the mean and variance of the total motion during the cardiac cycle. For normal subjects the average motility was 0.46 with a variance of 0.02; for patients, the average was 0.59 with a variance of 0.08. The variance separated normal from abnormal subjects more clearly than the average: the average motility of abnormal subjects was 128% of that of normal subjects, and the variance 328%. In the patient study, the quantity of motion decreased rapidly in stressed states. For the visualization of contractility, fifteen segment variables were displayed, and the locations of all points can be rotated with the mouse interface. Most of the relevant factors for cardiac motility and cardiac features were visualized. We expect that this model can distinguish normal from abnormal subjects, and that an exact analysis of myocardial motion utilizing this model can be evaluated.
Analysis of model-updated MR images to correct for brain deformation due to tissue retraction
Bradley K. Lamprich, Michael I. Miga
Surgical events such as retraction, resection, and gravitational sag often cause significant tissue movement that compromises the accuracy of neuronavigation systems that use a preoperative image display. Computational modeling has gained interest as a method for correcting registration errors that result from brain deformation by simulating surgical events and creating updated images. The success of simulating surgical events relies upon the application of surgical forces to a model of brain deformation physics. This paper analyzes the model simulation of retraction using a finite element model of the brain. To test the model, we conducted an ex vivo experiment on a porcine model using a retraction system in an MR scanner. The high-resolution images of retraction obtained from the sets of MR images were used to create the 3D volumetric model and serve as a basis of comparison to the model-updated images and calculations. The model is found to recapture 66% of average tissue motion and reduce the maximum registration error by over 80%. The model-updated images are displayed along with the actual deformation images and show a strong potential for computational modeling as a means to compensate for brain shift and minimize registration errors.
Semi-automatic procedure to extract Couinaud liver segments from multislice CT data
Liver resection and transplantation surgeries require careful planning and accurate knowledge of the vascular and gross anatomy of the liver. This study aims to create a semi-automatic method for segmenting the liver, along with its entire venous vessel tree from multi-detector computed tomograms. Using fast marching and region-growth techniques along with morphological operations, we have developed a software package which can isolate the liver and the hepatic venous network from a user-selected seed point. The user is then presented with volumetric analysis of the liver and a 3-Dimensional surface rendering. Software tools allow the user to then analyze the lobes of the liver based upon venous anatomy, as defined by Couinaud. The software package also has utilities for data management, key image specification, commenting, and reporting. Seven patients were scanned with contrast on the Mx8000 CT scanner (Philips Medical Systems), the data was analyzed using our method and compared with results found using a manual method. The results show that the semi-automated method utilizes less time than manual methods, with results that are consistent and similar. Also, display of the venous network along with the entire liver in three dimensions is a unique feature of this software.
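Seeded segmentation of the kind used above can be illustrated with its simplest member, intensity-interval region growing from a user-selected seed. This sketch is a minimal stand-in: the actual package combines fast marching, region growth and morphological operations, none of which are reproduced here:

```python
import numpy as np
from collections import deque

def region_grow(volume, seed, low, high):
    """Grow a 6-connected region from a seed voxel, accepting voxels
    whose intensity lies in [low, high]. Returns a boolean mask."""
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask
    queue = deque([seed])
    mask[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                           (0, -1, 0), (0, 0, 1), (0, 0, -1)):
            n = (z + dz, y + dy, x + dx)
            if (all(0 <= n[i] < volume.shape[i] for i in range(3))
                    and not mask[n] and low <= volume[n] <= high):
                mask[n] = True
                queue.append(n)
    return mask

# Toy CT: a bright "liver" block and a separate bright block that the
# region growth must not leak into (they are not connected).
vol = np.zeros((20, 20, 20))
vol[2:8, 2:8, 2:8] = 100.0
vol[12:18, 12:18, 12:18] = 100.0
liver = region_grow(vol, seed=(4, 4, 4), low=50.0, high=150.0)
```

Connectivity is what makes a single seed sufficient: only voxels reachable from the seed through the intensity interval are labeled, so nearby organs of similar intensity are excluded.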
Combined approach of shell and shear-warp rendering for efficient volume visualization
In medical imaging, shell rendering (SR) and shear-warp rendering (SWR) are two ultra-fast and effective methods for volume visualization. We have previously shown that SWR is typically on average 1.38 times faster than SR, but requires 2 to 8 times more memory than SR. In this paper, we propose an extension of the compact shell data structure utilized in SR that allows shear-warp factorization of the viewing matrix, in order to obtain speed gains for SR without paying the high storage price of SWR. The new approach is called shear-warp shell rendering (SWSR). The paper describes the methods, points out their major differences in computational aspects, and presents a comparative analysis of them in terms of speed, storage, and image quality. The experiments involve hard and fuzzy boundaries of 10 different objects of various sizes, shapes, and topologies, rendered on a 1GHz Pentium-III PC with 512MB RAM, utilizing surface and volume rendering strategies. The results indicate that SWSR offers the best compromise between speed and storage among these methods. We also show that SWSR improves rendition quality over SR, and provides renditions similar to those produced by SWR.
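The shear-warp factorization at the heart of both SWR and the proposed SWSR rewrites the viewing transform as a per-slice shear (which makes all viewing rays parallel to a principal volume axis) followed by a cheap 2D warp of the composited image. For an orthographic view the shear coefficients follow directly from the view direction, as this small sketch shows (the vector layout is an assumption for illustration):

```python
import numpy as np

def shear_warp_factor(view_dir):
    """Factor an orthographic view direction into a volume shear.

    Returns the principal viewing axis and the per-slice shear (sx, sy)
    for the two remaining axes. After shearing slice k by (sx*k, sy*k),
    viewing rays run parallel to the principal axis, so slices can be
    composited with a simple 2D traversal; the residual warp is folded
    into a final 2D image resample.
    """
    v = np.asarray(view_dir, dtype=float)
    axis = int(np.abs(v).argmax())            # principal viewing axis
    other = [i for i in range(3) if i != axis]
    sx = -v[other[0]] / v[axis]               # shear per slice, first axis
    sy = -v[other[1]] / v[axis]               # shear per slice, second axis
    return axis, sx, sy

axis, sx, sy = shear_warp_factor((0.2, -0.3, 1.0))
# Applying the shear to the view direction maps it onto the principal axis:
sheared = np.array([0.2 + sx * 1.0, -0.3 + sy * 1.0, 1.0])
```

The memory question the paper addresses is orthogonal to this factorization: classic SWR stores three run-length-encoded copies of the volume (one per principal axis), which is exactly the storage price SWSR avoids by shearing the compact shell structure instead.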
Tool for automatic real-time regional cardiac function analysis using HARP
Khaled Z. Abd-Elmoniem, Smita Sampath, Nael F. Osman, et al.
The FastHARP magnetic resonance pulse sequence can acquire tagged cardiac images at a rate of 45 ms per frame, enabling 7-20 harmonic phase (HARP) images per heartbeat per tag orientation. By switching the tag orientation every heartbeat, data from just two heartbeats can be used to compute in-plane quantities describing myocardial deformation, such as circumferential and radial strain. Standard HARP software, however, requires about one second to compute each strain image, which is not fast enough to keep up with the FastHARP pulse sequence. In this work, we have developed real-time algorithms for HARP processing of tagged MR images. The code was implemented along with a visualization tool that runs in conjunction with the FastHARP pulse sequence. HARP strain computations and display can now be carried out in real time after a one-heartbeat delay. The software is also fast enough to track and plot the time profile of strain at one or more points in the myocardium in real time. Our software has now been integrated into a research testbed for magnetic resonance cardiac stress testing, contributing to the emerging suite of clinical cardiac MRI protocols.
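The quantity at the core of HARP processing is cheap to compute, which is what makes real-time rates plausible: the gradient of the harmonic phase gives the local tag frequency, and comparing it to the applied tag frequency yields an apparent 1-D strain. The sketch below illustrates that relation on synthetic data; it is a simplified illustration, not the authors' real-time implementation:

```python
import numpy as np

def harp_apparent_strain(phase, tag_freq, axis):
    """Apparent 1-D strain from a wrapped harmonic phase image.

    The local spatial frequency of the tags is the gradient of the
    harmonic phase; stretching lowers it and compression raises it, so
    strain = tag_freq / |local_freq| - 1. The finite difference is taken
    through the complex exponential to avoid 2*pi wrap jumps."""
    z = np.exp(1j * phase)
    d = np.angle(np.roll(z, -1, axis=axis) * np.conj(z))
    return tag_freq / np.abs(d) - 1.0

# Synthetic tags at one cycle per 8 pixels, uniformly stretched by 10%:
# the local frequency drops by a factor 1.1, so the strain should be 0.10.
w0 = 2 * np.pi / 8
x = np.arange(128)
phase = np.angle(np.exp(1j * w0 * x / 1.1))          # wrapped phase ramp
strain = harp_apparent_strain(np.tile(phase, (16, 1)), w0, axis=1)
```

Since the operation is a pointwise complex multiply, an angle, and a divide per pixel, a full strain image fits comfortably in a per-frame budget far below the one second of the standard offline software.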
Three-dimensional head anthropometric analysis
Reyes Enciso, Alex M. Shaw, Ulrich Neumann, et al.
Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors because of perspective and projection distortions and the lack of metric, three-dimensional information. The literature describes a variety of methods to generate three-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods has inherent limitations, and no system is in common clinical use. In this paper we focus on the development of indirect three-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We also develop computer graphics tools for indirect anthropometric measurements on a three-dimensional head model (or polygonal mesh), including linear distances currently used in anthropometry. The measurements are tested against a validated three-dimensional digitizer (MicroScribe 3DX).
Opacity modulation for interactive volume rendering of medical images
Interactivity is a main requirement for 3D visualization of medical images in a variety of clinical applications. Good matching between segmentation and rendering techniques allows the design of easy-to-use interactive systems that assist physicians in dynamically creating and manipulating 'diagnostically relevant' images from volumetric data sets. In this work we consider the above problem within an original interactive visualization paradigm. With this paradigm we want to highlight the twofold clinical requirement of a) detecting and visualizing structures of diagnostic interest (SoDIs) and b) adding other structures to the 3D scene to create a meaningful visual context. Since the opacity modulation of the different structures is a crucial point, we propose an opacity management scheme that reflects the ideas of the paradigm and operates by means of a twofold indexed look-up table (2iLUT). The 2iLUT combines attribute-based and object-based opacity management, and is designed and tested here to combine the interaction-time benefits of an indexed opacity setting with effective handling of the above classification and visualization requirements.
Volume estimation of cerebral aneurysms from biplane DSA: a comparison with measurements on 3D rotational angiography data
Javier Olivan Bescos, Marian Slob, Menno Sluzewski, et al.
A cerebral aneurysm is a persistent localized dilatation of the wall of a cerebral vessel. One of the techniques applied to treat cerebral aneurysms is Guglielmi detachable coil (GDC) embolization. The goal of this technique is to embolize the aneurysm with a mesh of platinum coils to reduce the risk of aneurysm rupture. However, blood pressure can deform the platinum wire; in this case, re-embolization of the aneurysm is necessary. The aim of this project is to develop a computer program to estimate the volume of cerebral aneurysms from archived laser hard copies of biplane digital subtraction angiography (DSA) images. Our goal is to determine the influence of the packing percentage, i.e., the ratio between the volume of the coil mesh and the volume of the aneurysm, on the stability of the coil mesh over time. The method we apply to estimate the volume of the cerebral aneurysms is based on the generation of a 3-D geometrical model of the aneurysm from two biplane DSA images. This 3-D model can be seen as a stack of 2-D ellipses, and the volume of the aneurysm is obtained by numerical integration over this stack. The program was validated using balloons filled with contrast agent. The availability of 3-D data for some of the aneurysms enabled us to compare the results of this method with techniques based on 3-D data.
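The integration over the ellipse stack can be sketched as follows. This is a minimal illustration of the stated method (slice areas summed along the stack axis), not the authors' program; the function and variable names are illustrative.

```python
import math

def aneurysm_volume(slices, dz):
    """Numerically integrate a stack of elliptical cross-sections:
    each slice contributes (pi * a * b) * dz, where a and b are the
    semi-axes (in mm) estimated from the two biplane projections."""
    return sum(math.pi * a * b for a, b in slices) * dz

# Sanity check against a sphere of radius 5 mm, sampled as circular
# slices with a = b = sqrt(r^2 - z^2) at slice midpoints.
r, dz = 5.0, 0.01
n_slices = int(2 * r / dz)
zs = [-r + (i + 0.5) * dz for i in range(n_slices)]
slices = [(math.sqrt(r * r - z * z),) * 2 for z in zs]
vol = aneurysm_volume(slices, dz)
# Expected: close to the analytic sphere volume (4/3) * pi * r^3
```

With a 0.01 mm slice spacing, the midpoint sum agrees with the analytic sphere volume to well under a cubic millimeter.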
Calibration of an optical see-through head-mounted display with variable zoom and focus for applications in computer-assisted interventions
During the last few years, head-mounted displays (HMDs) have become more important in computer-assisted surgery (CAS). Rapid head movements of the surgeon require changing the focal plane and the zoom value without losing the calibration. Starting from previous work on an optical see-through head-mounted display, we adapted our HMD to measure the focus and zoom settings. This made it possible to extend the calibration to different zoom and focus values. The case of the HMD was opened to gain access to the zoom lenses, which was necessary to measure the different zoom values. Focusing in our HMD is realized by changing the angle between the two tubes; we therefore marked two points on the tubes to measure the focal adjustment. We made a series of planar calibrations with seven different fixed zoom and focus values using Tsai's camera calibration algorithm. We then used the Polaris optical tracking system (Northern Digital, Ontario, Canada) to measure the transformation from the planar calibration grid to a tracker probe rigidly mounted on the HMD. The calibration parameters transformed to this tracker probe are independent of the actual position of the calibration grid and are the parameters we want to approximate. Least-squares approximating polynomial surfaces were then derived for the seven calibration parameters. The coefficients of the polynomial surfaces were used as starting values for a nonlinear optimization procedure minimizing an overall error. Minimizing the object-space error (the minimal distance between the real-world point and the line through the center of projection and the image point) in the last step of this procedure, we obtained a mean object-space error of 0.85 ± 0.5 mm. Calibration of the HMD is not lost during minor changes in zoom and focus. This is likely the first optical see-through HMD developed for CAS with variable zoom and focus, facilities typically found in operating microscopes.
Employing an automated calibration together with more zoom and focus steps, and more accurate measurement of the positions of the zoom lenses and the focal plane, should reduce the error significantly, enabling the accuracy needed for CAS.
Novel MTF measurement method for medical image viewers using a bar pattern image
A novel MTF (modulation transfer function) measurement method using a bar-pattern image was developed for medical image viewers such as DICOM viewers. A bar-pattern image produced by a personal computer was displayed on a cathode-ray-tube (CRT) display and imaged with a high-resolution single-lens reflex digital camera equipped with a close-up lens. The discrete blurred square-waveform data acquired from the imaged bar patterns were interpolated using a waveform reproduction technique based on Fourier analysis to obtain interpolated wave curves. All measured pixel values in this process were converted into luminance data. The MTF was calculated from the amplitude of the extracted fundamental frequency component of the square waveform, from which aliasing error was excluded. Actual measurements were performed on two models of medical image viewer equipped with monochrome displays. Horizontal and vertical MTFs at the central position of the display area were measured up to the Nyquist frequency. The resulting MTFs clearly indicated the difference in resolution between the two viewers, in agreement with visual evaluation. The standard deviations of the MTF values over 5 measurements at the Nyquist frequency were 0.004 and 0.01 for the horizontal and vertical directions, respectively. Use of a commercial single-lens reflex digital camera enabled easy and accurate focusing and simple data handling. In conclusion, our method may be useful in the medical field owing to its good reproducibility and ease of operation.
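The core computation — taking the MTF from the fundamental frequency component of the imaged square wave, which sidesteps aliasing in the higher harmonics — can be sketched as below. This is a simplified illustration under assumed inputs (one period of luminance samples), not the authors' measurement pipeline.

```python
import cmath, math

def fundamental_amplitude(period):
    """Amplitude of the first harmonic of one full period of samples,
    computed via a direct discrete Fourier sum."""
    n = len(period)
    c1 = sum(s * cmath.exp(-2j * math.pi * k / n)
             for k, s in enumerate(period)) / n
    return 2.0 * abs(c1)

def bar_pattern_mtf(measured_period, contrast):
    """MTF at the bar-pattern frequency: the measured fundamental
    amplitude divided by that of an ideal square wave of the same
    contrast.  An ideal square wave of amplitude A has fundamental
    amplitude 4A/pi; only the first harmonic is used, so aliasing-prone
    higher harmonics are excluded."""
    return fundamental_amplitude(measured_period) / (4.0 / math.pi * contrast)

# An ideal, unblurred square wave should yield an MTF of about 1;
# a display-blurred waveform would give a value below 1.
period = [1.0] * 32 + [-1.0] * 32
mtf = bar_pattern_mtf(period, 1.0)
```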
Intensity-based registration and combined visualization of multimodal brain images for noninvasive epilepsy surgery planning
Visualizing the brain anatomy, seizure focus location, and grid and strip electrodes in 3-dimensional space provides improved planning information for focus localization and margin determination pre- and intra-operatively. However, given the relatively poor spatial resolution and structural detail of PET images, it can be difficult to determine the precise anatomic localization of the site of increased activation during seizure. In this paper, we present an intensity-based registration and combined visualization of CT, MR, and PET brain images that provides both critical functional information and structural detail.
Construction of the three-dimensional brain tumor model for operative treatment simulation by 3D Active Sphere
Shunrou Fujiwara, Akio Doi, Kouichi Matsuda, et al.
In this paper, we propose a method for constructing a geometric model of a brain tumor using the three-dimensional (3-D) Active Sphere. The 3-D Active Sphere, a kind of energy-minimizing method, can extract a three-dimensional geometric model directly from volume data composed of MRI slice images. The method enables more accurate and direct extraction of a three-dimensional geometric model than previous energy-minimizing methods, because both boundary and interior information are used to deform the model.
Intra-operative guidance with real-time information of open MRI and manipulators using coordinate-integration module
Michio Oikawa, Masami Yamasaki, Haruo Takeda, et al.
Assuming surgery under open magnetic resonance imaging (MRI) equipment with manipulators, we developed a coordinate-integration module and real-time functions that display the manipulator's position on the MRI volume data and obtain MRI cross-section images at the manipulator's position. The small field of view of an endoscope is a problem in most minimally invasive surgeries with manipulators; we therefore propose endoscopic surgery with manipulators under open MRI equipment. The coordinate-conversion parameters were calculated in the coordinate-integration module by calibration with an optical tracking system and markers. The delay in displaying the manipulator position on the volume data was within approximately 0.5 seconds, though it depended on the amount of volume data. We could also obtain MRI cross-section images at the manipulator's position using information from the coordinate-integration module. With these functions, we can cope with changes in organ shape during surgery with guidance based on individual patient information. Furthermore, we can use the manipulator as an MRI probe to define the cross-section position, much like an ultrasonic probe.
Volume rendering the neural network in an insect brain in confocal microscopic volume images
Fu-Chi Alex Ku, Yu-Tai Ching
Confocal microscopy is an important tool in neuroscience research. Using proper staining techniques, the neural network can be visualized in confocal microscopic images. It would be a great help if neuroscientists could directly visualize the 3D neural network. Volume rendering the neuron fibers is not easy, since other objects such as neuropils are also stained in the process and the neuron fibers are thin compared to the background. Preprocessing the image to enhance the neuron fibers before volume rendering helps to build a better 3D image of the neural network. In this study, we used the Fourier transform, the wavelet transform, and matched-filter techniques to enhance the neural fibers before volume rendering was applied. Experimental results show that these preprocessing steps help to generate clearer 3D images of the neural network.
Automatic flight path generation in a virtual colonoscopy system
Virtual colonoscopy is a computerized procedure to examine colonic polyps from a CT data set. To automatically fly a virtual camera through a long and complex-shaped colon, we propose an efficient method to simultaneously generate view-positions and view-directions. After obtaining a 3-D binary colon model, we find an initial path that represents rough camera directions and positions along it. Then, using this initial path, we generate control planes to find a set of discrete view-positions, and view planes to obtain the corresponding view-directions. Finally, for continuous and smooth navigation, the obtained view-positions and view-directions are interpolated using the B-spline method. By imposing a constraint on the control planes, penetration and collision can be avoided in the interpolated result. The effectiveness of the proposed algorithm is examined via computer simulations using several phantoms that simulate the characteristics of the human colon, namely high curvature and complex structure. Simulation results show that the algorithm provides view-positions and view-directions that cover more of the 3-D surface area during navigation. Promising results are also obtained for human colon data, with a processing time of less than 1 minute on a standard 2 GHz PC.
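The final smoothing step — interpolating the discrete view-positions with a B-spline — can be sketched with a uniform cubic B-spline. This is a generic illustration of that step, not the authors' implementation; the basis weights are the standard uniform cubic B-spline blending functions.

```python
def cubic_bspline_point(p0, p1, p2, p3, t):
    """Evaluate a uniform cubic B-spline segment at parameter t in [0, 1]
    from four consecutive control points (each an (x, y, z) tuple)."""
    b0 = (1 - t) ** 3 / 6.0
    b1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    b2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    b3 = t ** 3 / 6.0
    return tuple(b0 * a + b1 * b + b2 * c + b3 * d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def smooth_path(view_positions, samples_per_segment=10):
    """Interpolate discrete view-positions into a smooth, continuous
    camera path by sliding a window of four control points."""
    path = []
    pts = view_positions
    for i in range(len(pts) - 3):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            path.append(cubic_bspline_point(pts[i], pts[i + 1],
                                            pts[i + 2], pts[i + 3], t))
    return path

# Collinear, evenly spaced control points yield a path on the same line,
# since the blending weights sum to 1 (affine invariance).
ctrl = [(float(i), 0.0, 0.0) for i in range(6)]
path = smooth_path(ctrl)
```

Because the B-spline only approximates its control points, the constraint on the control planes mentioned above is what keeps the smoothed path inside the colon lumen.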
A new frequency processing algorithm based on multi-resolution analysis
Daisuke Kaji, Chieko Sato, Akiko Kano, et al.
The purpose of this study was to develop a new frequency processing algorithm based on multiresolution analysis, a common method of signal analysis, and thus to improve the diagnostic image quality of digital x-ray imaging systems such as computed radiography (CR). The newly developed algorithm, termed 'hybrid processing', has three unique features. First, the original image is decomposed into plural frequency bands, and each frequency component is weighted and added back to the original image. Second, at the decomposition stage, rather than a conventional simple averaging filter, a binomial filter is used to create unsharp images. Third, an enhancement characteristic is established for each unsharp image based on a smooth compensation function predetermined by the density and contrast of the unsharp image. Hybrid processing was applied to approximately 500 clinical CR images, including images of the chest, abdomen, and extremities. Image decomposition, with weighting and re-addition of its frequency components, produced optimal renditions from low through high frequencies. Further, the binomial filter provided a smooth frequency response with a reasonably short processing time. Combined with the enhancement characteristic established for each unsharp image, it effectively reduced unfavorable noise enhancement and artifacts such as overshoot and undershoot.
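The decompose-weight-and-add-back idea can be illustrated in one dimension. This is a toy sketch under assumed weights, not the authors' processing: unsharp copies are made with repeated passes of the 3-tap binomial kernel [1, 2, 1]/4, band images are formed by differencing successive unsharp copies, and the weighted bands are added back to the original.

```python
def binomial_smooth(signal, passes=1):
    """Smooth a 1-D signal with the 3-tap binomial kernel [1, 2, 1]/4.
    Repeated passes approximate wider binomial (near-Gaussian) kernels,
    giving the smooth frequency response noted above."""
    s = list(signal)
    for _ in range(passes):
        padded = [s[0]] + s + [s[-1]]  # replicate edges
        s = [(padded[i] + 2 * padded[i + 1] + padded[i + 2]) / 4.0
             for i in range(len(s))]
    return s

def hybrid_enhance(signal, weights=(0.5, 0.25)):
    """Toy band decomposition: unsharp bands (differences between
    progressively smoother copies) are weighted and added back to the
    original.  The weights are illustrative values only."""
    out = list(signal)
    prev = list(signal)
    for level, w in enumerate(weights, start=1):
        smooth = binomial_smooth(signal, passes=2 ** level)
        band = [p - q for p, q in zip(prev, smooth)]  # one frequency band
        out = [o + w * b for o, b in zip(out, band)]
        prev = smooth
    return out

# An edge is enhanced (the value just inside the edge overshoots 1),
# while regions far from the edge are left unchanged.
sig = [0.0] * 8 + [1.0] * 8
enh = hybrid_enhance(sig)
```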
Stationary grid pattern removal using 2D technique for moiré-free radiographic image display
Ryoji Sasada, Masahiko Yamada, Shoji Hara, et al.
Striped patterns are superimposed on radiographic images exposed with a stationary grid. When those images are displayed on a monitor, the scaling process causes low-frequency moiré patterns to overlap the object shadow. To prevent these moiré patterns, the grid patterns must be removed before the scaling process. One-dimensional filtering can remove the grid pattern, but it also removes some diagnostic information. We developed two different grid-pattern removal processes using a 2-dimensional technique. The 2-dimensional technique localizes the information 2-dimensionally in the frequency domain, so that the localized information includes the grid information; the 2-dimensional methods can therefore remove the grid pattern with minimal loss of diagnostic information. The quality of images processed by the two 2-dimensional methods and by the conventional 1-dimensional filtering method was evaluated. No grid patterns were observed in the images processed by any of the three methods. However, compared with the 1-dimensionally filtered image, the images processed by the 2-dimensional methods were much sharper and retained more detail.
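The principle — the grid energy is localized at known positions in the 2-D frequency domain, so it can be removed there with little collateral loss — can be shown with a toy notch filter. This is a minimal sketch, not either of the authors' two methods; a naive DFT is used for self-containedness.

```python
import cmath, math

def dft2(img):
    """Naive 2-D DFT; O(N^4), but fine for this small demonstration."""
    n, m = len(img), len(img[0])
    return [[sum(img[y][x] * cmath.exp(-2j * math.pi * (u * y / n + v * x / m))
                 for y in range(n) for x in range(m))
             for v in range(m)] for u in range(n)]

def idft2(freq):
    """Inverse 2-D DFT, returning the real part."""
    n, m = len(freq), len(freq[0])
    return [[(sum(freq[u][v] * cmath.exp(2j * math.pi * (u * y / n + v * x / m))
                  for u in range(n) for v in range(m)) / (n * m)).real
             for x in range(m)] for y in range(n)]

def remove_vertical_grid(img, cycles):
    """Zero the frequency bins where a vertical grid of known horizontal
    frequency (`cycles` per image width) is localized, then transform
    back -- a toy 2-D notch filter illustrating the idea."""
    freq = dft2(img)
    m = len(freq[0])
    for row in freq:
        row[cycles] = 0j
        row[m - cycles] = 0j  # conjugate-symmetric bin
    return idft2(freq)

# A uniform 16x16 "object" with a superimposed vertical grid (4 cycles);
# the notch removes the grid and restores the uniform background.
n = 16
img = [[10.0 + math.cos(2 * math.pi * 4 * x / n) for x in range(n)]
       for _ in range(n)]
clean = remove_vertical_grid(img, 4)
```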
Diagnostic clinical benefits of digital spot and digital 3D mammography following analysis of screening findings
Mari Lehtimaki, Martti Pamilo, Leena Raulisto, et al.
The purpose of this study is to assess the impact of 3-dimensional digital mammography and digital spot imaging following analysis of abnormal findings on screening mammograms. Over a period of eight months, digital 3-D mammography (TACT+, Tuned Aperture Computed Tomography), digital spot imaging (DSI), screen-film mammography (SFM), and diagnostic film mammography (DFM) examinations were performed on 60 symptomatic cases. All patients were recalled because the presence of breast cancer could not be excluded on the screening films. Abnormal findings on the screening films were non-specific tumor-like parenchymal densities, parenchymal asymmetries, or distortions with or without microcalcifications, or microcalcifications alone. Mammography work-up (film imaging) included spot compression and microfocus magnification views. The 3-D softcopy reading in all cases was done with the Delta 32 TACT mammography workstation, while the film images were read on a mammography-specific light box. During softcopy reading, only windowing tools were allowed. The results of this study indicate that the clinical diagnostic image quality of digital 3-D and digital spot images is better than that of film images, even in comparison with diagnostic work-up films. Potential advantages are determining whether a mammographic finding is caused by a real abnormal lesion or by superimposition of normal parenchymal structures, detecting changes in breast tissue that would otherwise be missed, verifying the correct target for biopsies, and reducing the number of biopsies performed.
Multimodality vascular imaging phantom for calibration purpose
Guy Cloutier, Gilles Soulez, Pierre Teppaz, et al.
The objective of this project was to design a vascular phantom compatible with X-ray, ultrasound, and MRI. Fiducial markers were implanted at precisely known locations in the phantom to facilitate identification and orientation of plane views from the 3D reconstructed images; they also allowed optimization of image fusion and calibration. A vascular conduit connected to tubing at the extremities of the phantom ran through the agar-based gel filling it. A latex vessel wall was included to avoid diffusion of contrast agents. Using a lost-material casting technique based on a low-melting-point metal, complex realistic geometries of normal and pathological vessels were modeled. The fiducial markers were detectable in all modalities without distortion. No leak of gadolinium through the vascular wall was observed on MRI over 5 h of scanning. The potential use of the phantom for calibration, rescaling, and fusion of 3D images obtained from the different modalities, as well as its use for the evaluation of intra- and inter-modality comparative studies of imaging systems, was recently demonstrated by our group (results published in SPIE-2003). Endovascular prostheses were also implanted into the lumen of the phantom to evaluate the extent of metallic imaging artifacts (results submitted elsewhere). In conclusion, the phantom allows accurate calibration of radiological imaging devices and quantitative comparisons of the geometric accuracy of each radiological imaging method tested.
Multidimensional registration of x-ray image and CT image
Hui Zhang, Pascal Haigron, Huazhong Shu, et al.
We present a methodology for the alignment of X-ray and CT images, based on the chamfer 3-4 distance transform and a simulated annealing optimization algorithm. The proposed approach first segments the object's structure from the X-ray image. Using a projection model and the optimization method, we deduce the correct projection matrix. The method has also been integrated into intra-operative medical procedures, dealing with data sets acquired from a 3D image workstation and active navigation.
Independent component analysis assisted unsupervised multispectral classification
The goal of unsupervised multispectral classification is to precisely identify objects in a scene by incorporating the complementary information available in spatially registered multispectral images. If the channels are less noisy and as statistically independent as possible, the performance of the unsupervised classifier will be better. The discriminatory power of the classifier also increases if the individual channels have good contrast. However, enhancing the contrast of the channels individually does not necessarily produce good results. Hence there is a need to preprocess the channels so that they have high contrast and are as statistically independent as possible. Independent Component Analysis (ICA) is a signal processing technique that expresses a set of random variables as linear combinations of statistically independent component variables. The estimation of ICA typically involves formulating a cost function measuring nongaussianity (or gaussianity), which is subsequently maximized (or minimized). The resulting images are maximally statistically independent and have high contrast. Unsupervised classification on these images captures more information than on the original images. In preliminary studies, we were able to classify detailed neuroanatomical structures, such as the putamen and choroid plexus, from the independent component channels. These structures could not be delineated from the original images using the same classifier.
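The "maximize nongaussianity" estimation principle can be shown in a toy two-channel case. This is a sketch, not the ICA algorithm used in the paper: it assumes an orthogonal (pure rotation) mixing of two zero-mean, unit-scale independent sources, so the usual whitening step can be skipped and the de-mixing reduces to a single rotation angle found by grid search over the excess-kurtosis cost.

```python
import math, random

def kurtosis(xs):
    """Excess kurtosis, a simple nongaussianity measure: zero for
    Gaussian data, strongly negative for uniform data."""
    n = len(xs)
    m2 = sum(x * x for x in xs) / n
    m4 = sum(x ** 4 for x in xs) / n
    return m4 / (m2 * m2) - 3.0

def ica_rotation(y1, y2, steps=90):
    """Grid-search the de-mixing rotation that maximizes the total
    |kurtosis| of the two outputs; rotations repeat every 90 degrees,
    so only [0, pi/2) is searched."""
    best_score, best_angle = -1.0, 0.0
    for k in range(steps):
        a = math.pi * k / (2 * steps)
        c, s = math.cos(a), math.sin(a)
        u1 = [c * p + s * q for p, q in zip(y1, y2)]
        u2 = [-s * p + c * q for p, q in zip(y1, y2)]
        score = abs(kurtosis(u1)) + abs(kurtosis(u2))
        if score > best_score:
            best_score, best_angle = score, a
    return best_angle

random.seed(0)
# Two independent uniform sources, mixed by a 30-degree rotation.
s1 = [random.uniform(-1, 1) for _ in range(5000)]
s2 = [random.uniform(-1, 1) for _ in range(5000)]
theta = math.pi / 6
y1 = [math.cos(theta) * a - math.sin(theta) * b for a, b in zip(s1, s2)]
y2 = [math.sin(theta) * a + math.cos(theta) * b for a, b in zip(s1, s2)]
angle = ica_rotation(y1, y2)  # should be near theta
```

Practical ICA estimators (e.g. FastICA) optimize such a cost with fixed-point iteration rather than grid search, but the separation criterion is the same.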
Real-time MTF evaluation of displays in the clinical arena
Four methods for near real-time measurement of the modulation transfer function (MTF) of electronic displays are presented. The methods are based on measuring the display's response to edge, periodic bar-pattern, line, and white-noise stimuli. Although all the methods yield practically the same result, they require different data acquisition times and different degrees of human intervention in analyzing the acquired data. The paper compares the four methods in terms of the time required to implement each, and cites implementation issues that need to be addressed to achieve real-time data analysis and presentation.
Improving visualization of digital mammograms on a CRT display system
This paper discusses display-specific processing for improving the visualization of digital mammograms on CRT softcopy display systems. We designed and implemented an approach that processes mammograms before display to compensate for the optical transfer function of the CRT and for its noise. A subsequent receiver operating characteristic study demonstrated that the display-specific processing increased the efficiency of softcopy diagnosis in the clinical environment.
Comparative visualization of digital mammograms on clinical 2K monitor workstations and hardcopy: a contrast detail analysis
Pavle Torbica, Wolfgang Buchberger, M. Bernathova, et al.
The purpose of this study was to compare the radiologist's performance in detecting small low-contrast objects with hardcopy and softcopy reading of digital mammograms. Twelve images of a contrast-detail (CD) phantom, without and with 25.4 mm, 50.8 mm, and 76.2 mm of additional polymethylmethacrylate (PMMA) attenuation, were acquired with a caesium iodide/amorphous silicon flat-panel detector under standard exposure conditions. The phantom images were read by three independent observers in a four-alternative forced-choice experiment. Hardcopy reading was done on a mammography viewbox under standardized reading conditions; for softcopy reading, a dedicated workstation with two 2K monitors was used. CD curves and image quality figure (IQF) values were calculated from the correct detection rates of randomly located gold disks in the phantom, and were compared for both reading conditions and for the different PMMA layers. For all types of exposures, softcopy reading resulted in significantly better contrast-detail characteristics and IQF values than hardcopy reading of laser printouts (p < 0.01). The authors conclude that the threshold contrast characteristics of digital mammograms displayed on high-resolution monitors are sufficient to make softcopy reading of digital mammograms feasible.
Method for placing deep-brain stimulators
Chris Nickele, Ebru Cetinkaya, J. Michael Fitzpatrick, et al.
A new system for implanting deep-brain stimulators is evaluated. The system relies on the custom construction of a rigid, one-piece mounting platform for each patient. During surgery the platform is attached rigidly to posts that are implanted into the patient's skull and extend outward through the scalp. The platform then acts as a miniature stereotactic frame that guides a catheter as it is advanced through a burr hole to the target. The target is selected on a pre-operative CT image, acquired after the posts have been implanted and outfitted with fiducial markers. The positions of the markers and the target are used to design the platform. After initial implantation, the electrode's position is adjusted intraoperatively on the basis of the physical effects of stimulation, but the accuracy of the initial placement is determined entirely by the registration of the image to the physical anatomy through the shape of the platform and its placement on the posts. In this work, we test that accuracy by comparing the position of the electrode in post-operative patient images with the positions in the pre-operative images as determined by a rigid registration based on the fiducial markers.
Image smoothing with Savitzky-Golay filters
Noise in medical images is common. It occurs during image formation, recording, transmission, and subsequent image processing. Image smoothing attempts to locally preprocess these images, primarily to suppress noise by making use of the redundancy in the image data. 1D Savitzky-Golay filtering provides smoothing without loss of resolution by assuming that distant points have significant redundancy; this redundancy is exploited to reduce the noise level. Using this assumed redundancy, the underlying function is locally fitted by a polynomial whose coefficients are data-independent and hence can be calculated in advance. Geometric representations of data as patches and surfaces have been used in volumetric modeling and reconstruction, and similar representations can also be used in image smoothing. This paper presents 2D and 3D extensions of 1D Savitzky-Golay filters. The idea is to fit a 2D/3D polynomial to a 2D/3D subregion of the image. As in the 1D case, the coefficients of the polynomial are computed a priori as a linear filter. The filter coefficients preserve higher moments, and they always have a central positive lobe with smaller outlying corrections of both positive and negative magnitudes. To show the efficacy of this smoothing, it is used in-line with volume rendering while computing the sampling points and the gradient.
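The 1D case that the extensions build on can be shown with the classic 5-point quadratic Savitzky-Golay filter, whose precomputed, data-independent coefficients (-3, 12, 17, 12, -3)/35 exhibit exactly the shape described above: a central positive lobe with small negative outliers. This is a standard textbook sketch, not the paper's 2D/3D code.

```python
def savgol_smooth(signal, coeffs=(-3, 12, 17, 12, -3), norm=35):
    """Smooth a 1-D signal with the 5-point quadratic/cubic
    Savitzky-Golay filter.  The coefficients come from a least-squares
    polynomial fit computed once in advance, so smoothing is just a
    linear convolution; edges are handled by replication here."""
    half = len(coeffs) // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [sum(c * padded[i + j] for j, c in enumerate(coeffs)) / norm
            for i in range(len(signal))]

# Moment preservation in action: a quadratic signal passes through the
# filter unchanged (no loss of resolution up to the fit order).
quad = [0.5 * t * t - 2.0 * t + 3.0 for t in range(10)]
sm = savgol_smooth(quad)
```

The 2D/3D extensions in the paper follow the same pattern: fit a polynomial over a square or cubic neighborhood, precompute the resulting linear filter, and convolve.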
Automatic crest line extraction from anatomical surfaces
Crest lines are shape features of high significance on anatomical surfaces; they implicitly follow the gyri and the fundi of sulci on the cerebral cortex. In this work, an automatic crest line extraction algorithm is presented, and two useful applications are shown: rigid registration and surface simplification.
Displaying brain atlases using a portable Java application: the Anatomist
Haihong Zhuang, Jacopo Annese, Daniel J. Valentino, et al.
Brain atlases are intended to provide a framework for the integration of structural, functional and clinical information. Template-based atlases (2-D and 3-D) consist of representative outlines obtained from the delineation of structural borders in anatomical images. The templates can be overlaid by other data such as statistical maps of activation or results from quantitative neuroanatomical studies. Much more information can be conveyed by simultaneously displaying correlated data sets than by cross-referencing individual images. We developed an application, the Anatomist, which was used to interactively display complex brain atlas images and provide access to correlated information that described the images. The Anatomist was implemented in Java using a portable imaging framework (the jViewbox) that provided the decoders for common medical image file formats and the image display and manipulation tools needed to implement an intuitive and interactive graphical user interface for manipulating brain atlas data. We also implemented functions to composite multiple structural and functional data sets and to present the data in axial, coronal and sagittal orientations. Each data set was either viewed individually or the transparency of each layer was adjusted so as to view multiple data sets simultaneously. The Anatomist is a user-friendly tool that facilitates the presentation of complex 3-D brain image atlases.
Use of CAD output to guide the intelligent display of digital mammograms
Aili K. Bloomquist, Martin Joel Yaffe, Gordon E. Mawdsley, et al.
For digital mammography to be efficient, methods are needed to choose an initial default image presentation that maximizes the amount of relevant information perceived by the radiologist and minimizes the amount of time spent adjusting the image display parameters. The purpose of this work is to explore the possibility of using the output of computer aided detection (CAD) software to guide image enhancement and presentation. A set of 16 digital mammograms with lesions of known pathology was used to develop and evaluate an enhancement and display protocol to improve the initial softcopy presentation of digital mammograms. Lesions were identified by CAD and the DICOM structured report produced by the CAD program was used to determine what enhancement algorithm should be applied in the identified regions of the image. An improved version of contrast limited adaptive histogram equalization (CLAHE) is used to enhance calcifications. For masses, the image is first smoothed using a non-linear diffusion technique; subsequently, local contrast is enhanced with a method based on morphological operators. A non-linear lookup table is automatically created to optimize the contrast in the regions of interest (detected lesions) without losing the context of the periphery of the breast. The effectiveness of the enhancement will be compared with the default presentation of the images using a forced choice preference study.
Rotation invariance principles in 2D/3D registration
Wolfgang Birkfellner, Joachim Wirth, Wolfgang Burgstaller, et al.
2D/3D patient-to-computed-tomography (CT) registration is a method to determine a transformation mapping two coordinate systems by comparing a projection image rendered from CT to a real projection image. Applications include exact patient positioning in radiation therapy, calibration of surgical robots, and pose estimation in computer-aided surgery. One of the problems associated with 2D/3D registration is the fact that finding a registration involves solving a minimization problem in six degrees of freedom of motion. This results in considerable time expense, since at least one volume rendering has to be computed for each iteration step. We show that by choosing an appropriate world coordinate system and applying a 2D/2D registration method in each iteration step, the number of iterations can be greatly reduced, from n^6 to n^5, where n is the number of discrete variations around a given coordinate. Depending on the configuration of the optimization algorithm, this reduces the total number of iterations necessary to at least 1/3 of its original value. The method was implemented and extensively tested on simulated x-ray images of a pelvis. We conclude that this hardware-independent optimization of 2D/3D registration is a step towards increasing the acceptance of this promising method for a wide range of clinical applications.
Dose optimization tool
Ornit Amir, David Braunstein, Ami Altman
A dose optimization tool for CT scanners is presented that uses patient raw data to calculate noise. The tool takes a single patient image and modifies it to simulate various lower doses. Dose optimization is carried out without extra measurements by interactively visualizing the dose-induced changes in this image. The tool can be used either offline, on existing image(s), or during the patient's clinical study as a prerequisite for dose optimization for that specific patient. The low-dose simulation algorithm reconstructs two images from a single measurement and uses them to create the various lower-dose images. This algorithm enables fast simulation of various low-dose (mAs) images from a real patient image.
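The quantum-noise scaling that any such low-dose simulation must reproduce can be sketched as follows. This is an assumed stand-in for illustration only — the abstract's algorithm works on raw data and reconstructs two images from one measurement, which is not shown here; we only demonstrate that CT noise grows as 1/sqrt(mAs), so matching a lower-dose scan requires injecting noise of standard deviation sigma_full * sqrt(mAs_full / mAs_target - 1).

```python
import math, random

def simulate_lower_dose(image, mas_full, mas_target, sigma_full,
                        rng=random):
    """Simulate a lower-dose image by adding zero-mean Gaussian noise
    whose standard deviation brings the total noise up to the level of
    a scan at mas_target (variances of independent noise add)."""
    sigma_add = sigma_full * math.sqrt(mas_full / mas_target - 1.0)
    return [p + rng.gauss(0.0, sigma_add) for p in image], sigma_add

random.seed(1)
flat = [100.0] * 20000  # idealized noise-free uniform region
low, sigma_add = simulate_lower_dose(flat, mas_full=200.0,
                                     mas_target=50.0, sigma_full=10.0)
# Quartering the mAs doubles the total noise; on this noise-free input
# the added component alone is 10 * sqrt(3) ~ 17.3.
```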