Proceedings Volume 5744

Medical Imaging 2005: Visualization, Image-Guided Procedures, and Display


View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 12 April 2005
Contents: 12 Sessions, 98 Papers, 0 Presentations
Conference: Medical Imaging 2005
Volume Number: 5744

Table of Contents


All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Segmentation and Rendering
  • Poster Session
  • Diaphragm and Abdomen
  • Registration
  • Modeling
  • Localization
  • Display
  • Cardiac Imaging
  • Neuro I
  • Ultrasound
  • Visualization
  • Neuro II
Segmentation and Rendering
Techniques on semiautomatic segmentation using the Adobe Photoshop
Jin Seo Park, Min Suk Chung, Sung Bae Hwang
The purpose of this research is to enable anybody to semiautomatically segment the anatomical structures in MRIs, CTs, and other medical images on a personal computer. The segmented images are used for making three-dimensional images, which are helpful in medical education and research. To achieve this purpose, the following trials were performed. The entire body of a volunteer was MR scanned to make 557 MRIs, which were transferred to a personal computer. On Adobe Photoshop, contours of 19 anatomical structures in the MRIs were semiautomatically drawn using the MAGNETIC LASSO TOOL and then manually corrected using either the LASSO TOOL or the DIRECT SELECTION TOOL to make 557 segmented images. Likewise, 11 anatomical structures in the 8,500 anatomical images were segmented, as were 12 brain and 10 heart anatomical structures in anatomical images. Proper segmentation was verified by making and examining the coronal, sagittal, and three-dimensional images derived from the segmented images. During semiautomatic segmentation on Adobe Photoshop, a suitable algorithm could be chosen, the extent of automation could be regulated, a convenient user interface was available, and software bugs rarely occurred. The techniques of semiautomatic segmentation using Adobe Photoshop are therefore expected to be widely used for segmentation of anatomical structures in various medical images.
Automated segmentation of the lungs from high resolution CT images for quantitative study of chronic obstructive pulmonary diseases
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer-assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold, and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a spherical element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer-assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs more completely than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
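The thresholding-and-closing pipeline described above can be sketched as follows. The threshold values, closing radius, and the function name `segment_lungs` are illustrative placeholders; the paper optimizes the lung threshold per dataset and uses a 23x23x5 spherical element for the trachea step.

```python
import numpy as np
from scipy import ndimage

def segment_lungs(volume_hu, lung_threshold=-400, air_threshold=-950,
                  closing_radius=2):
    """Threshold-based lung segmentation with morphological smoothing.

    Hypothetical parameter values, for illustration only.
    """
    lungs = volume_hu < lung_threshold        # optimized threshold for lung tissue
    trachea_air = volume_hu < air_threshold   # fixed threshold near pure air
    # smooth the lung mask with a morphological close (spherical-ish element)
    struct = ndimage.generate_binary_structure(3, 1)
    struct = ndimage.iterate_structure(struct, closing_radius)
    lungs_closed = ndimage.binary_closing(lungs, structure=struct)
    # remove the trachea/airways found by the fixed air threshold
    return lungs_closed & ~trachea_air
```

The key design point from the abstract is that the trachea is removed with a *fixed* air threshold rather than the dataset-optimized lung threshold, which the authors found excludes airways more completely.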
Smooth volume rendering of labeled medical data on consumer graphics hardware
F. Vega Higuera, P. Hastreiter, R. Naraghi, et al.
One of the most important applications of direct volume rendering is the visualization of labeled medical data. Explicit segmentation of embedded subvolumes allows a clear separation of neighboring substructures in the same range of intensity values, which can then be used for implicit segmentation of fine structures using transfer functions. Nevertheless, the hard label boundaries of explicitly segmented structures lead to voxelization artifacts. Pixel-resolution linear filtering cannot solve this problem effectively. In order to render soft label boundaries for explicitly segmented objects, we have successfully applied a smoothing algorithm based on gradients of the volumetric label data as a preprocessing step. A 3D-texture-based rendering approach was implemented, where volume labels are interpolated independently of each other using the graphics hardware. Thereby, correct trilinear interpolation of four subvolumes is obtained. Per-label post-interpolative transfer functions together with inter-label interpolation are performed in the pixel shader stage in a single rendering pass, hence obtaining high-quality rendering of labeled data on GPUs. The presented technique showed its high practical value for the 3D visualization of tiny vessel and nerve structures in MR data in cases of neurovascular compression syndromes.
T-shell rendering and manipulation
Previously shell rendering was shown to be an ultra fast method of rendering digital surfaces represented as sets of voxels. The purpose of this work is to describe a new extension to the shell rendering method that creates a new representation of surfaces called t-shells using triangulated surface elements. The great speed of the shell rendering technique is made available to rendering triangulated surfaces through the use of data structures that describe the shell and through algorithms that traverse this information to produce the two-dimensional projection of the three-dimensional data. In traditional shell rendering, each shell element (a voxel) is represented by a triple comprised of the offset of the voxel from the start of the row, the neighborhood code of the voxel, and the surface normal. We modify this data structure by replacing the neighborhood code with a code that indicates the Marching Cubes configuration of polygons (1 of 256 possible triangulated configurations) within that area (rather than the rasterization of a single, uniform shell element as was originally done). We present the general t-shell algorithm as well as the results of the two implementations as applied to input data which consist of different objects from various parts of the body and various modalities with a variety of surface sizes and shapes. We present some sample renditions as well as the results of timing experiments which demonstrate that the t-shell rendering is not only much faster than hardware-based rendering but is also capable of handling scenes that are much larger than those the hardware is capable of rendering while producing renditions of comparable quality.
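The modified shell-element triple described above might be represented as below; field names are illustrative, not taken from the paper, but the content matches the description (row offset, a Marching Cubes case index replacing the neighborhood code, and the surface normal).

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TShellElement:
    """One t-shell element: a hypothetical rendering of the triple
    described in the abstract."""
    row_offset: int   # offset of the voxel from the start of its row
    mc_config: int    # Marching Cubes case index, one of 256 configurations
    normal: tuple     # surface normal (nx, ny, nz)

    def __post_init__(self):
        # the Marching Cubes configuration replaces the neighborhood code,
        # so it must index one of the 256 triangulated cases
        if not 0 <= self.mc_config <= 255:
            raise ValueError("Marching Cubes configuration must be 0..255")
```

The renderer then traverses a sorted array of such elements and rasterizes the triangles of each element's Marching Cubes case instead of a uniform voxel footprint.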
Poster Session
3D segmentation and visualization of lung volume using CT
Haibo Zhang, Xuejun Sun, Huichuan Duan
Three-dimensional (3D)-based detection and diagnosis plays an important role in significantly improving the detection and diagnosis of lung cancers through computed tomography (CT). This paper presents a 3D approach for segmenting and visualizing lung volume using CT images. An edge-preserving filter (3D sigma filter) is first applied to the CT slices to enhance the signal-to-noise ratio, and wavelet transform (WT)-based interpolation incorporated with volume rendering is utilized to construct the 3D volume data. Then an adaptive 3D region-growing algorithm, combined with an automatic seed-locating algorithm based on fuzzy logic, is designed to segment the lung mask, and a 3D morphological closing algorithm is performed on the mask to fill in cavities. Finally, a 3D visualization tool is designed to view the volume data and its projections or intersections at any angle. This approach was tested on single-detector CT images and the experimental results demonstrate that it is effective and robust. This study lays the groundwork for 3D-based computerized detection and diagnosis of lung cancer with CT imaging. In addition, the approach can be integrated into a PACS system, serving as a visualization tool for radiologists' reading and interpretation.
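The region-growing step could be sketched as follows. This minimal version uses fixed intensity bounds and a user-supplied seed, whereas the paper's algorithm adapts the bounds and locates the seed automatically via fuzzy logic; the function name and parameters are assumptions.

```python
import numpy as np
from collections import deque

def region_grow_3d(volume, seed, low, high):
    """6-connected 3D region growing from a seed voxel.

    A minimal sketch of the growing step only, with fixed bounds.
    """
    mask = np.zeros(volume.shape, dtype=bool)
    if not (low <= volume[seed] <= high):
        return mask                      # seed outside the intensity range
    queue = deque([seed])
    mask[seed] = True
    offsets = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:       # visit the 6 face neighbours
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < volume.shape[0] and 0 <= ny < volume.shape[1]
                    and 0 <= nx < volume.shape[2] and not mask[nz, ny, nx]
                    and low <= volume[nz, ny, nx] <= high):
                mask[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return mask
```

The morphological closing mentioned in the abstract would then be applied to the returned mask to fill in cavities left by vessels and airways.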
Diaphragm and Abdomen
3D geometrical segmentation and reconstruction of anatomical structures
G. Bueno, C. Flores, A. Martinez, et al.
In this paper an improved method for 3D surface reconstruction from 2D images, together with the 2D/3D segmentation method, is presented. The method aims to model and parameterise anatomical structures previously segmented from medical images. The fully automatic method may be divided into segmentation and reconstruction, carried out in four steps: 1) geodesic-based segmentation of the anatomical structures under consideration; 2) contour representation and simplification based on the Douglas-Peucker algorithm; 3) global 3D incremental Delaunay triangulation; and 4) a final sculpting process that removes the non-object tetrahedra. The sculpting process is based on the spatial information and contour parameterisation obtained in the segmentation stage. Our 3D reconstruction method is global and free of parameters, which is not the case for other 3D reconstruction techniques. Moreover, it allows vertex and contour point information to be included at any time. Results for CT and MR images are presented, and a comparison between the 3D reconstructions and the geodesic 3D segmentations is given.
Organ motion due to respiration: the state of the art and applications in interventional radiology and radiation oncology
Kevin R. Cleary, Maureen Mulcahy, Rohan Piyasena, et al.
Tracking organ motion due to respiration is important for precision treatments in interventional radiology and radiation oncology, among other areas. In interventional radiology, the ability to track and compensate for organ motion could lead to more precise biopsies for applications such as lung cancer screening. In radiation oncology, image-guided treatment of tumors is becoming technically possible, and the management of organ motion then becomes a major issue. This paper will review the state-of-the-art in respiratory motion and present two related clinical applications. Respiratory motion is an important topic for future work in image-guided surgery and medical robotics. Issues include how organs move due to respiration, how much they move, how the motion can be compensated for, and what clinical applications can benefit from respiratory motion compensation. Technology that can be applied for this purpose is now becoming available, and as that technology evolves, the subject will become an increasingly interesting and clinically valuable topic of research.
Validation of 3D motion tracking of pulmonary lesions using CT fluoroscopy images for robotically assisted lung biopsy
Sheng Xu, Gabor Fichtinger, Russell H. Taylor, et al.
As recently proposed in our previous work, the two-dimensional CT fluoroscopy image series can be used to track the three-dimensional motion of a pulmonary lesion. The assumption is that the lung tissue is locally rigid, so that the real-time CT fluoroscopy image can be combined with a preoperative CT volume to infer the position of the lesion when the lesion is not in the CT fluoroscopy imaging plane. In this paper, we validate the basic properties of our tracking algorithm using a synthetic four-dimensional lung dataset. The motion tracking result is compared to the ground truth of the four-dimensional dataset. The optimal parameter configurations of the algorithm are discussed. The robustness and accuracy of the tracking algorithm are presented. The error analysis shows that the local rigidity error is the principal component of the tracking error. The error increases as the lesion moves away from the image region being registered. Using the synthetic four-dimensional lung data, the average tracking error over a complete respiratory cycle is 0.8 mm for target lesions inside the lung. As a result, the motion tracking algorithm can potentially alleviate the effect of respiratory motion in CT fluoroscopy-guided lung biopsy.
Cartwheel projections of segmented pulmonary vasculature for the detection of pulmonary embolism
Atilla Peter Kiraly, David P. Naidich, Carol L. Novak
Pulmonary embolism (PE) detection via contrast-enhanced computed tomography (CT) images is an increasingly important topic of research. Accurate identification of PE is of critical importance in determining the need for further treatment. However, current multi-slice CT scanners provide datasets typically containing 600 or more images per patient, making it desirable to have a visualization method to help radiologists focus directly on potential candidates that might otherwise have been overlooked. This is especially important when assessing the ability of CT to identify smaller, sub-segmental emboli. We propose a cartwheel projection approach to PE visualization that computes slab projections of the original data aided by vessel segmentation. Previous research on slab visualization for PE has utilized the entire volumetric dataset, requiring thin slabs and necessitating the use of maximum intensity projection (MIP). Our use of segmentation within the projection computation allows the use of thicker slabs than previous methods, as well as the ability to employ visualization variations that are only possible with segmentation. Following automatic segmentation of the pulmonary vessels, slabs may be rotated around the X-, Y- or Z-axis. These slabs are rendered by preferentially using voxels within the lung vessels. This effectively eliminates distracting information not relevant to diagnosis, lessening the chance of overlooking a subtle embolus and minimizing the time spent evaluating false positives. The ability to employ thicker slabs means fewer images need to be evaluated, yielding a more efficient workflow.
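The segmentation-restricted slab rendering can be illustrated with a simple masked maximum intensity projection along one axis. This is a sketch only: the paper also supports rotation of the slab about the X-, Y-, or Z-axis and other projection variants, and the parameter names here are assumptions.

```python
import numpy as np

def masked_slab_mip(volume, vessel_mask, z0, thickness, background=-1000.0):
    """Maximum intensity projection of one axial slab, restricted to
    segmented vessel voxels."""
    slab = volume[z0:z0 + thickness].astype(float)
    mask = vessel_mask[z0:z0 + thickness]
    # voxels outside the vessel segmentation cannot win the maximum,
    # so bright non-vascular structures (bone, pleura) are suppressed
    slab = np.where(mask, slab, background)
    return slab.max(axis=0)
```

Because only vessel voxels compete in the maximum, the slab can be much thicker than in a conventional MIP without bright irrelevant tissue obscuring a low-attenuation embolus.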
Absolute alignment of breathing states using image similarity derivatives
The fusion of information in medical imaging relies on accurate registration of image content that often comes from different sources. One of the strongest influences on the movement of organs is the patient's respiration. It is known that respiration status can be measured by comparing projection images of the chest. Since the diaphragm compresses the soft tissue above it, the level of similarity to a reference projection image acquired at full inhalation or exhalation gives an indication of the patient's respiration status. If the images to be registered are generated under different conditions, the similarity to a common reference image is calculated on different scales and therefore cannot be compared directly. The proposed solution uses two reference images, acquired at full inhalation and full exhalation. By comparing the images with both references and combining the similarity results, changes in respiration depth between acquisitions can be detected. With normal breathing, the similarity to one of the reference images increases while the similarity to the other decreases over time, or vice versa. If the patient's respiration exceeds the respiration span of the reference images, the similarity to both reference images decreases. By using not only the similarity values but also their derivatives over time, changes in respiration depth can therefore be detected, and the image fusion algorithm can act accordingly, e.g. by removing images that exceed the valid respiration span.
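The two-reference scheme can be sketched as follows, using normalized cross-correlation as a stand-in similarity measure (the abstract does not prescribe one) and hypothetical function names.

```python
import numpy as np

def respiration_state(frame, ref_inhale, ref_exhale):
    """Similarity of a projection image to the inhale and exhale references."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))
    return ncc(frame, ref_inhale), ncc(frame, ref_exhale)

def exceeds_span(sim_history):
    """Flag transitions where similarity to BOTH references is falling,
    i.e. the breathing depth leaves the span covered by the references."""
    s = np.asarray(sim_history, dtype=float)   # shape (n_frames, 2)
    d = np.diff(s, axis=0)                     # temporal derivatives
    return np.all(d < 0, axis=1)               # both similarities decreasing
```

Within the reference span, the two similarity derivatives have opposite signs during normal breathing; a frame where both fall is outside the valid span and can be excluded from fusion.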
Circular needle and needle-holder localization for computer-aided suturing in laparoscopic surgery
Florent Nageotte, Christophe Doignon, Michel de Mathelin, et al.
Usual localization and registration techniques cannot be used for suturing in laparoscopic surgery. The small size of the needle and its interactions with the tissues do not allow the use of conventional sensors. Moreover, because the position of the needle may change after it has been introduced into the abdomen, usual vision-based techniques cannot be applied. In this paper, we present methods to obtain the necessary information using a color endoscopic camera. To simplify the detection and reconstruction problems, the needle-holder is modelled as a cylinder and equipped with passive markers, and the needle is colored. Image processing techniques yield an elliptical representation of the image of the needle. From this ellipse and the apparent contours of the cylinder, the 3D poses can be obtained. These poses and the needle handling parameters are computed by minimizing the projection error in the images using a numerical iterative technique: virtual visual servoing.
Registration
Cortex registration: a geometric approach
We have devised and implemented a novel method for calculating the high-dimensional field relating the brain surfaces (cortex) of an arbitrary pair of subjects. This non-rigid registration method uses crest lines as anatomical landmarks and various geometrical techniques, such as geodesic Voronoi diagrams, barycentric flattening and barycentric coordinates, to partition the cortex into meaningful regions and achieve a spatially accurate matching. Finally, the multilevel B-splines method is used to compute a C2-continuous warping field.
A mutual-information-based registration algorithm for ultrasound-guided computer-assisted orthopaedic surgery
Thomas Kuiran Chen, Purang Abolmaesumi
This paper presents a novel approach and its preliminary laboratory results for the employment of ultrasound (US) imaging in intraoperative guidance of computer-assisted orthopaedic surgeries (CAOS). The goal is to register live intraoperative US images with preoperative surgical planning data using a minimal number of images. Preoperatively, a set of 2D US images is acquired with the corresponding positional information of the US probe provided by an optical tracking system. Using calibration parameters, the position of every pixel in the acquired images is transformed into the world coordinate frame to construct a 3D volumetric representation of the targeted anatomy for surgical planning. Intraoperatively, the surgeon takes live US images from the patient with the position of the US probe tracked in real time. A mutual-information-based registration algorithm is then used to find the closest match to the live image in the preoperative US image database. Because the position of the preoperative image inside the US volume is known, we are able to register the preoperative US volume to the live image, and thus to the patient. Experiments have shown that the registration algorithm has sub-millimeter accuracy in localizing the best match between the intraoperative and preoperative images, demonstrating great potential for orthopaedic surgery applications. This method has some significant advantages over the previously reported US-guided CAOS techniques: it requires no segmentation, and employs only a few intraoperative images to accurately and robustly localize the patient. Preliminary laboratory results on both a Sawbones model of a radius bone and human subjects are presented.
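The similarity measure at the core of such a registration can be illustrated with a plain joint-histogram estimator of mutual information; the binning scheme and function name are assumptions, not details from the paper.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images, estimated from their
    joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                  # joint probability
    px = pxy.sum(axis=1, keepdims=True)        # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)        # marginal of img_b
    nz = pxy > 0                               # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

In the described system, the live US image would be compared against each candidate slice in the preoperative database, and the slice maximizing this measure taken as the match.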
Image-guided multi-modality registration and visualization for breast cancer detection
James Qingyang Zhang, John M. Sullivan Jr., Hongliang Yu, et al.
It is crucial that breast cancer be detected in its earlier and more curable stages of development. New imaging modalities are emerging, such as electrical impedance spectroscopy (EIS), microwave imaging and spectroscopy (MIS), magnetic resonance elastography (MRE), and near-infrared (NIR) imaging. These alternative imaging modalities strive to alleviate limitations of traditional screening and diagnostic tools on dense breast tissue and detection of small abnormalities. The purpose of this study is to combine the results from alternative imaging modalities with T1 and T2-weighted MR Imaging. Two categories of data are presented, pixel data (MRIs) and geometry model with scalar values (MRE and MIS). Three dimensional mesh models (surface/volume meshes) are generated using the automatic mesh generator for biological models developed in the laboratory. A graphic user interface (GUI) for medical image processing powered by Visualization Toolkit (VTK) was developed which supports interactive and automatic image registration, image volume manipulation and geometry rendering. Registration of image/image and image/geometry is a fundamental requirement for multi-spectral data visualization within the same workspace. Various physical properties can be visualized to reveal the correlations between alternative imaging modalities and subsequently for breast tissue classification. A registration strategy was implemented using T1 and T2-weighted MR data as the standard subject. It combined automated image registration (AIR) with interactive registration routines. The final synthetic datasets are rendered in 3D views. This framework was created for multi-modality breast imaging data registration and visualization. The aligned image/geometry data facilitate breast tissue classification.
Fiducial registration for tracking systems that employ coordinate reference frames
An algorithm is presented that is designed for image-guidance systems that employ coordinate reference frames. It is common in image-guided surgery (IGS) to use a tracking system to determine the position of fiducial markers in physical space. Typically a “Coordinate Reference Frame” (CRF) is also employed, which is rigidly attached to the object being tracked. The positions of markers attached to the object are then measured in physical space relative to the CRF, and hence it is acceptable to allow the object to move during tracking. It is known that errors are introduced while localizing markers in image space and also while localizing markers in physical space using a probe. The use of a CRF causes additional error, which is anisotropic in nature and varies with the position of the marker being tracked relative to the CRF. This additional error has heretofore not been accounted for in the process of registering image space to physical space. We present in this paper a new rigid-body, point-based registration algorithm that accounts for the fiducial localization errors that arise in tracking systems that employ a coordinate reference frame. Simulations are presented that show that for such systems the new algorithm has the capability to perform better than the standard registration algorithm. The effect is enhanced for small CRFs and for marker configurations that are widely spaced relative to their mean distance from the CRF.
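For context, the standard closed-form point-based rigid registration that the new algorithm improves upon can be sketched as follows; the paper's anisotropically weighted variant for CRF-induced errors is not reproduced here.

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form least-squares rigid registration of corresponding
    point sets (SVD-based, the classic unweighted algorithm).

    Returns R, t such that dst ≈ R @ src + t for each point pair.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # correct for a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

This baseline treats every fiducial localization error as isotropic and identically distributed; the paper's contribution is precisely to relax that assumption for markers tracked relative to a CRF.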
Distortion correction, calibration, and registration: toward an integrated MR and x-ray interventional suite
Luis F. Gutierrez, Guy Shechter, Robert J. Lederman, et al.
We present our co-registration results of two complementary imaging modalities, MRI and X-ray angiography (XA), using dual-modality fiducial markers. Validation experiments were conducted using a vascular phantom with eight fiducial markers around its periphery. Gradient-distortion-corrected 3D MRI was used to image the phantom and determine the 3D locations of the markers. XA imaging was performed at various C-arm orientations. These images were corrected for geometric distortion, and projection parameters were optimized using a calibration phantom. Closed-form 3D-to-3D rigid-body registration was performed between the MR markers and a 3D reconstruction of the markers from multiple XA images. 3D-to-2D registration was performed using a single XA image by projecting the MR markers onto the XA image and iteratively minimizing the 2D errors between the projected markers and their observed locations in the image. The RMS registration error was 0.77 mm for the 3D-to-3D registration, and 1.53 pixels for the 3D-to-2D registration. We also showed that registration can be performed at a large image-intensifier field size (IS), where many markers are visible, and that the image can then be zoomed in while maintaining the registration. This requires calibration of imperfections in the zoom operation of the image intensifier. When we applied the registration used for an IS of 330 mm to an image acquired with an IS of 130 mm, the error was 42.16 pixels before zoom correction and 3.37 pixels after. This method offers the possibility of new therapies where the soft-tissue contrast of MRI and the high-resolution imaging of XA are both needed.
Modeling
Image guided constitutive modeling of the silicone brain phantom
Alexander Puzrin, Oskar Skrinjar, Cem Ozan, et al.
The goal of this work is to develop reliable constitutive models of the mechanical behavior of in-vivo human brain tissue for applications in neurosurgery. We propose to define the mechanical properties of the brain tissue in vivo by taking global MR or CT images of the brain's response to ventriculostomy - the relief of elevated intracranial pressure. 3D image analysis translates these images into displacement fields, which, via inverse analysis, allow constitutive models of the brain tissue to be developed. We term this approach Image Guided Constitutive Modeling (IGCM). The presented paper demonstrates the performance of the IGCM in a controlled environment: on silicone brain phantoms closely simulating the in-vivo brain geometry, mechanical properties and boundary conditions. A phantom of the left hemisphere of a human brain was cast using silicone gel. An inflatable rubber membrane was placed inside the phantom to model the lateral ventricle. The experiments were carried out in a specially designed setup in a CT scanner with submillimeter isotropic voxels. Non-communicating hydrocephalus and ventriculostomy were simulated by successively inflating and deflating the internal rubber membrane. The obtained images were analyzed to derive displacement fields, meshed, and incorporated into ABAQUS. The subsequent inverse finite element analysis (based on the Levenberg-Marquardt algorithm) allowed for optimization of the parameters of the Mooney-Rivlin non-linear elastic model for the phantom material. The calculated mechanical properties were consistent with those obtained from the element tests, providing justification for the future application of the IGCM to in-vivo brain tissue.
Directional volume growing for the extraction of white matter tracts from diffusion tensor data
D. Merhof, P. Hastreiter, C. Nimsky, et al.
Diffusion tensor imaging measures diffusion of water in tissue. Within structured tissue, such as neural fiber tracts of the human brain, anisotropic diffusion is observed since the cell membranes of the long cylindric nerves restrict diffusion. Diffusion tensor imaging thus provides information about neural fiber tracts within the human brain, which is of major interest for neurosurgery. However, the visualization is a challenging task due to noise and the limited resolution of the data. A common visualization strategy for white matter is fiber tracking, which utilizes techniques known from flow visualization. The resulting streamlines provide a good impression of the spatial relation of fibers and anatomy. Therefore, they are a valuable supplement for neurosurgical planning. As a drawback, fibers may diverge from the exact path due to numerical inaccuracies during streamline propagation, even if higher-order integration is used. To overcome this problem, a novel strategy for directional volume growing is presented which enables the extraction of separate tract systems and thus makes it possible to compare and assess the quality of fiber tracking algorithms. Furthermore, the presented approach is suited to obtaining a more precise representation of the volume encompassing white matter tracts. Thereby, the entire volume potentially containing fibers is provided, in contrast to fiber tracking, which only shows a more restricted representation of the actual volume of interest. This is of major importance in brain tumor cases where white matter tracts are in the close vicinity of brain tumors. Overall, the presented strategy helps to make surgical planning safer and more reliable.
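The streamline propagation whose accumulated error motivates the volume-growing check can be sketched with a simple Euler tracker. This uses nearest-neighbour tensor lookup and no anisotropy stopping criterion, both simplifications relative to real trackers; the function name and parameters are assumptions.

```python
import numpy as np

def track_fiber(tensors, seed, step=0.5, max_steps=200):
    """Euler streamline propagation along the principal diffusion direction.

    tensors: array of shape (Z, Y, X, 3, 3) of diffusion tensors.
    """
    pos = np.asarray(seed, float)
    path = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))     # nearest-neighbour lookup
        if any(i < 0 or i >= s for i, s in zip(idx, tensors.shape[:3])):
            break                                  # left the volume
        w, v = np.linalg.eigh(tensors[idx])
        direction = v[:, -1]                       # principal eigenvector
        if prev_dir is not None and direction @ prev_dir < 0:
            direction = -direction                 # keep a consistent heading
        pos = pos + step * direction
        prev_dir = direction
        path.append(pos.copy())
    return np.array(path)
```

Each Euler step introduces a small deviation from the true integral curve; over many steps these accumulate, which is why an independent, integration-free check such as directional volume growing is useful for assessing tracking quality.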
4D motion models over the respiratory cycle for use in lung cancer radiotherapy planning
Respiratory motion causes problems of tumour localisation in radiotherapy treatment planning for lung cancer patients. We have developed a novel method of building patient-specific motion models, which model the movement and non-rigid deformation of a lung tumour and surrounding lung tissue over the respiratory cycle. Free-breathing (FB) CT scans are acquired in cine mode, using 3 couch positions to acquire contiguous 'slabs' of 16 slices covering the region of interest. For each slab, 20 FB volumes are acquired over approximately 20 s. A reference volume, acquired at Breath Hold (BH) and covering the whole lung, is non-rigidly registered to each of the FB volumes. The FB volumes are assigned a position in the respiratory cycle (PRC) calculated from the displacement of the chest wall. A motion model is then constructed for each slab by fitting functions that temporally interpolate the registration results over the respiratory cycle. This can produce a prediction of the lung and tumour within the slab at any arbitrary PRC. The predictions for each of the slabs are then combined to produce a volume covering the whole region of interest. Results indicate that the motion modelling method shows considerable promise, offering significant improvement over current clinical practice, and potential advantages over alternative 4D CT imaging techniques. Using this framework, we examined and evaluated several different functions for performing the temporal interpolation. We believe the results of these comparisons will aid future model building for this and other applications.
Parametric modeling for quantitative analysis of pulmonary structure to function relationships
Clifton R. Haider, Brian J. Bartholmai M.D., David R. Holmes III, et al.
While lung anatomy is well understood, pulmonary structure-to-function relationships such as the complex elastic deformation of the lung during respiration are less well documented. Current methods for studying lung anatomy include conventional chest radiography, high-resolution computed tomography (CT scan) and magnetic resonance imaging with polarized gases (MRI scan). Pulmonary physiology can be studied using spirometry or V/Q nuclear medicine tests (V/Q scan). V/Q scanning and MRI scans may demonstrate global and regional function. However, each of these individual imaging methods lacks the ability to provide high-resolution anatomic detail, associated pulmonary mechanics and functional variability of the entire respiratory cycle. Specifically, spirometry provides only a one-dimensional gross estimate of pulmonary function, and V/Q scans have poor spatial resolution, reducing their potential for regional assessment of structure-to-function relationships. We have developed a method which utilizes standard clinical CT scanning to provide data for computation of dynamic anatomic parametric models of the lung during respiration which correlate high-resolution anatomy to underlying physiology. The lungs are segmented from both inspiration and expiration three-dimensional (3D) data sets and transformed into a geometric description of the surface of the lung. Parametric mapping of lung surface deformation then provides a visual and quantitative description of the mechanical properties of the lung. Any alteration in lung mechanics is manifest by alterations in normal deformation of the lung wall. The method produces a high-resolution anatomic and functional composite picture from sparse temporal-spatial methods which quantitatively illustrates detailed anatomic structure to pulmonary function relationships impossible for translational methods to provide.
Configuration-space technique for calculating stent-fitness measures for the planning of neuro-endovascular interventions
Thenkurussi Kesavadas, Rajendra Agrawal, Kenneth R. Hoffmann
This paper demonstrates a new technique to compute stent-fitness measures for a vascular anatomy using geometric information. This technique will aid the interventionalist in treatment planning for neuro-endovascular interventions. Patient-specific vessel-surface reconstruction is performed from point/contour data without user intervention. The technique developed is based on configuration-space algorithms, which are widely used in robot motion planning. A fitness measure is computed for stents with various parameters against patient-specific vessel data. Finally, a simulation is performed to check for collisions. This feature will provide an additional tool to the interventionalist for the planning of neuro-endovascular interventions, with the dimensions of the stent based on the proximal and distal necks of the aneurysm for a patient-specific vascular anatomy.
Implant shape optimization using reverse FEA
Evgeny Gladilin, A. Ivanov, V. Roginsky
This work presents a novel approach for the physically based optimization of individual implants in cranio-maxillofacial surgery. The proposed method is based on solving an inverse boundary value problem of cranio-maxillofacial surgery planning, i.e. finding an optimal implant shape for a desired correction of soft tissues. The paper describes the methodology for generating individual geometrical models of the human head, the reverse finite element approach for solving biomechanical boundary value problems, and two clinical studies dealing with the computer-aided design of individual craniofacial implants.
Localization
Ultrasound-guided ablation system for laparoscopic liver surgery
Philip Bao M.D., Tuhin K. Sinha, Chun-Cheng R. Chen, et al.
This work describes the design and implementation of a system for liver tumor ablation guided by ultrasound. Features of the system include spatially registered ultrasound visualization, ultrasound volume reconstruction, and interactive targeting. Early results with phantom experiments indicate a targeting accuracy of 5-10 mm. The system serves as a foundation for further clinical studies and applications of image-guided therapy to liver procedures.
New form factors for sensors and field generators of a magnetic tracking system
Stefan R. Kirsch, Christian Schilling, Georg Brunner
Magnetic tracking systems can be considered an enabling technology for many image guided medical interventions, since they are not limited by line-of-sight requirements. This can allow a much deeper immersion of the technology into a particular medical navigation application. We demonstrate in this paper new prototype sensor and field generator form factors. Miniaturized sensors as small as 0.5 x 5 mm can allow integration of magnetic tracking systems into such instruments as biopsy needles, endoscopes, catheters, and guide wires. Sensors with hollow cores can surround instruments without taking up the space needed for other functions of the instrument. Such an approach shows that sensor miniaturization is not the only way to overcome space limitations in medical instruments. Flat field generators can simplify the setup of the tracking system and better optimize the location of the working volume relative to the field generator. For example, flat field generators could be built into surgical beds or into head rests. For the prototype systems considered in this paper, we discuss performance attributes such as their trueness, repeatability, and confidence limits in comparison to the standard field generator and sensors of the Aurora system of Northern Digital.
Tissue localization using endoscopic laser projection for image-guided surgery
David B. Kynor, Eric M. Friets, Darin A. Knaus, et al.
Image-guided surgery has led to more accurate lesion targeting and improved outcomes in neurosurgery. However, adaptation of the technology to other forms of surgery has been slow, largely due to difficulties in determining the position of anatomic landmarks within the surgical field. The ability to localize anatomic landmarks and provide real-time tracking of tissue motion without placing additional demands on the surgeon will facilitate image-guided surgery in a variety of clinical disciplines. Even approximate localization of anatomic landmarks would benefit many forms of surgery. For example, liver surgeons could visualize intraoperative locations on preoperative CT or MR scans to assist them in navigating through the complex hepatic vascular network. This paper describes the initial stages of development of an endoscopic localization system for use during minimally invasive, image-guided abdominal surgery. The system projects a scanned laser beam through a conventional endoscope. The projected laser spot is then observed using a second endoscope oriented obliquely to the projecting endoscope. Knowledge of the optical geometry of the endoscopes, along with their relative positions in space, allows determination of the three-dimensional coordinates of the illuminated point. The ultimate accuracy of the system is dependent on the geometric relationship between the endoscopes, the ability to accurately measure the position of each endoscope, and careful calibration of the optics used to project the laser beam. We report a system design intended to support automated operation, methods and initial results of measurement of target points, and preliminary data characterizing the performance of the system.
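At its core, the localization described above is a two-ray triangulation problem. A generic sketch (not the authors' calibrated implementation) computes the midpoint of the closest approach between the two viewing rays, assuming each endoscope contributes a tracked ray origin and direction:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach between rays p1 + t*d1 and p2 + s*d2."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b          # zero only for parallel rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))
```

In practice the two measured rays rarely intersect exactly, so the midpoint of their shortest connecting segment is the usual estimate of the illuminated point.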
Standardized evaluation method for electromagnetic tracking systems
The major aim of this work was to define a protocol for the evaluation of electromagnetic tracking systems (EMTS). Using this protocol we compared two commercial EMTS: the Ascension microBIRD (B) and the NDI Aurora (A). To enable reproducibility and comparability of the assessments, a machined base plate was designed in which a 50 mm grid of holes is precision drilled for position measurements. A circle of 32 equispaced holes in the center enables the assessment of rotation. A small mount which fits into pairs of grid holes on the base plate is used to mount the sensor in a defined and rigid way. Relative positional/orientational errors are found by subtracting the known distances/rotations between the machined locations from the differences of the mean observed positions/rotations. To measure the influence of metallic objects we inserted rods (made of SST 303, SST 416, aluminum, and bronze) into the sensitive volume between sensor and emitter. Additionally, the dynamic behavior was tested using an optical sensor mounted on a spacer at a distance of 150 mm from the EMTS sensors. We found a relative positional error of 0.96 mm +/- 0.68 mm (range -0.06 mm to 2.23 mm) for A and 1.14 mm +/- 0.78 mm (range -3.72 mm to 1.57 mm) for B for a given distance of 50 mm. The positional jitter amounted to 0.14 mm (A) / 0.20 mm (B). The relative rotation error was found to be 1.81 degrees (A) / 0.63 degrees (B). For the dynamic behavior we calculated an error of 1.63 mm (A) / 1.93 mm (B). The most relevant distortion caused by metallic objects results from SST 416. The maximum error, 4.2 mm (A) / 41.9 mm (B), occurs when the rod is close to the sensor (20 mm).
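The relative positional error defined by this protocol (measured inter-hole distance minus the machined spacing) can be sketched as follows; the data layout is hypothetical, with one array of repeated sensor readings per grid hole along a line:

```python
import numpy as np

def relative_position_errors(observed, known_distance=50.0):
    """observed: list of (n_i, 3) arrays of repeated readings, one per grid
    hole along a line with `known_distance` mm between adjacent holes.
    Returns signed errors: measured inter-hole distance minus known distance."""
    means = np.array([np.mean(o, axis=0) for o in observed])
    measured = np.linalg.norm(np.diff(means, axis=0), axis=1)
    return measured - known_distance
```

Averaging repeated readings per hole before differencing separates the systematic (relative) error from the jitter, which would instead be computed from the spread of each hole's readings.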
CTBot: A stereotactic-guided robotic assistant for percutaneous procedures of the abdomen
Benjamin Maurin, Christophe Doignon, Jacques Gangloff, et al.
This article presents positioning results of a stereotactic robotic assistant for percutaneous needle insertions in the abdomen. The robotic system, called the CT-Bot, is succinctly described. This mechanically safe device is compatible with medical requirements and offers a novel approach to robotic needle insertion with computed tomography guidance. Our system performs self-registration using only visual information from a fiducial marker. The theoretical developments explain how the pose reconstruction is done using only four fiducial points and how the automatic registration algorithm is achieved. The results concern the automatic positioning of the tip of a needle with respect to a reference point selected in a CT image. The accuracy of the positioning results shows the potential of this system for clinical use.
Display
Increasing contrast resolution and decreasing spatial noise for liquid-crystal displays using digital dithering
Jiahua Fan, Hans Roehrig, Malur K. Sundareshan, et al.
Active-Matrix Liquid Crystal Displays (AM-LCD) are gradually replacing Cathode Ray Tubes (CRT) in radiology reading rooms. Results of some initial studies seem to confirm the high hopes placed in LCDs, but they are still far from ideal. Like CRTs, LCDs generally possess a limited contrast resolution. On the other hand, they exhibit higher spatial noise than CRTs. These shortcomings can interfere with clinical diagnosis and reduce efficiency, especially when subtle abnormalities are present in clinical images. The purpose of this paper is to explore ways to improve softcopy display of medical images through appropriate image processing techniques that compensate for LCDs' limited contrast resolution and spatial noise. Two digital dithering operations (error diffusion) are applied to treat contrast resolution and spatial noise separately. For contrast resolution compensation, the processing is done in the perceptually linear domain, whereas for spatial noise compensation, the corresponding processing is done in the display output luminance domain. Initial results indicate that the compensation algorithms discussed in this paper indeed help to increase the performance of LCDs.
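Error diffusion, the dithering operation named above, can be illustrated with the classic Floyd-Steinberg kernel. This single-pass sketch quantizing an image to a few gray levels is purely illustrative; the paper's two compensation passes operate in the perceptually linear and display luminance domains rather than directly on pixel codes:

```python
import numpy as np

def error_diffuse(img, levels=4):
    """Floyd-Steinberg error diffusion: quantize to `levels` gray levels,
    pushing each pixel's quantization error onto unprocessed neighbours."""
    out = img.astype(float).copy()
    step = 255.0 / (levels - 1)
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = np.clip(round(old / step), 0, levels - 1) * step
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the quantization error is carried forward rather than discarded, the local mean of the dithered output tracks the original image even though each pixel takes only a few discrete levels, which is how dithering trades spatial resolution for effective contrast resolution.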
Photographic measurement of the effects of viewing angle on the luminance and contrast of liquid crystal displays
An actively cooled charge-coupled device detector in combination with a 4 mm focal length lens (camera) was used to evaluate the luminance and perceived contrast properties of a liquid crystal display (LCD). The circular field of view (FOV) of the camera occupied an angular range (θ) of ±42.5° from normal in all directions. Uniform field images corresponding to 17 equally spaced grayscale values in the 8-bit digital driving level (DDL) range of the display system were acquired. The 12-bit grayscale digital images produced by the camera were converted to luminance (cd/m2) units via the measured DDL vs. luminance response of the camera. The Barten model of the grayscale response of the human visual system was used to compute the perceived contrast of the display within the angular FOV of the camera and throughout the 8-bit DDL range of the display. 1D profiles were extracted from the 2D measurements and compared to measurements acquired from a similar display using a Fourier-optics-based luminance meter and published methods. The results of the two methods generally agreed to within 5%. Greater discrepancy was observed in the lowest portion of the DDL range. The photographic methods used were straightforward and resulted in accurate display assessment measurements over a FOV that is relevant for the clinical use of LCDs.
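The conversion of camera grayscale values to luminance via the measured response amounts to interpolating a calibration curve. A minimal sketch (the calibration points below are invented for illustration, not the paper's measurements):

```python
import numpy as np

def ddl_to_luminance(pixels, ddl_cal, lum_cal):
    """Convert camera grayscale values to luminance (cd/m^2) by linear
    interpolation of a measured DDL-vs-luminance calibration curve."""
    return np.interp(pixels, ddl_cal, lum_cal)

# Hypothetical 12-bit calibration: three measured (DDL, luminance) pairs.
ddl_cal = np.array([0.0, 2048.0, 4095.0])
lum_cal = np.array([0.5, 100.0, 400.0])
```

A denser set of measured points (or a fitted response model) would reduce interpolation error, particularly in the dark end of the range where the display response is most non-linear.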
Visual detection with non-Lambertian displays: model and human observer results
Aldo Badano, Brandon D. Gallas, Dipesh H. Fifadara
Many investigators have now recognized that deviations from the on-axis grayscale presentation function in non-Lambertian displays affect the way images are presented to the human observer. However, the quantification of that effect in terms of detection performance has not yet been reported. In the past, we have described physical measurements of the off-axis changes in display luminance and contrast, and the incorporation of such measurements into a simple mathematical transformation acting on image data that mimics the effect of off-axis viewing. In this paper, we report on the performance of model and human observers with respect to on- and off-axis viewing. The model observers used are the ideal linear observer with off-axis template knowledge and a human-like observer that incorporates quantization due to limited bit-depth and contrast sensitivity of the human visual system. Our results for diagonal viewing at 30 and 45° from the display normal in a 5-million-pixel, monochrome, in-plane-switching, dual-domain AMLCD suggest severe degradation in detection performance. A human-like model which considers the contrast sensitivity of the visual system - not the ideal linear observer - can be used to approximately map off-axis grayscale changes into detectability maps for non-Lambertian displays. This investigation contributes to the setting of viewing angle requirements for medical imaging monitors based on robust observer performance data.
Cardiac Imaging
Cardiac modeling using active appearance models and morphological operators
Bernhard Pfeifer, Friedrich Hanser, Michael Seger, et al.
We present an approach for fast reconstruction of the cardiac myocardium and blood masses of a patient's heart from morphological image data, acquired by either MRI or CT, in order to estimate numerically the spread of electrical excitation in the patient's atria and ventricles. The approach can be divided into two main steps. During the first step the ventricular and atrial blood masses are extracted employing Active Appearance Models (AAM). The left and right ventricular blood masses are segmented automatically after providing the positions of the apex cordis and the base of the heart. Because of the complex geometry of the atria, the segmentation process of the atrial blood masses requires more information than that of the ventricular blood masses. For this reason, we divided the left and right atrium into three divisions of appearance. This proved sufficient for the 2D AAM model to extract the target blood masses. The base of the heart, the left upper and left lower pulmonary vein from their first up to their last appearance in the image stack, and the right upper and lower pulmonary vein have to be marked. After separating the volume data into these divisions, the 2D AAM search procedure extracts the blood masses, which are the main input for the second and last step in the myocardium extraction pipeline. This step uses morphologically based operations in order to extract the ventricular and atrial myocardium, either directly by detecting the myocardium in the volume block or by reconstructing the myocardium using mean model information in case the algorithm fails to detect the myocardium.
Ultrasonic diaphragm tracking for cardiac interventional navigation on 3D motion compensated static roadmaps
Holger Timinger, Sascha Kruger, Klaus Dietmayer, et al.
In this paper, a novel approach to cardiac interventional navigation on 3D motion-compensated static roadmaps is presented. Current coronary interventions, e.g. percutaneous transluminal coronary angioplasties, are performed using 2D X-ray fluoroscopy. This comes with well-known drawbacks such as radiation exposure, use of contrast agent, and limited visualization, e.g. overlap and foreshortening, due to projection imaging. In the presented approach, the interventional device, i.e. the catheter, is tracked using a magnetic tracking system (MTS). The catheter's position is then mapped into a static 3D image of the volume of interest (VOI) by means of an affine registration. In order to compensate for respiratory motion of the catheter with respect to the static image, a parameterized affine motion model is used which is driven by a respiratory sensor signal. This signal is derived from ultrasonic diaphragm tracking. The motion compensation for the heartbeat is done using ECG gating. The methods are validated using a heart and diaphragm phantom. The mean displacement of the catheter due to the simulated organ motion decreases from approximately 9 mm to 1.3 mm. This result indicates that the proposed method is able to reconstruct the catheter position within the VOI accurately and that it can help to overcome drawbacks of current interventional procedures.
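A parameterized affine motion model of this kind can be sketched under the simplifying assumption (ours, not necessarily the authors') that the respiratory displacement scales linearly with a normalized sensor signal between end-expiration and end-inspiration:

```python
import numpy as np

def motion_compensate(p, r, A_insp, t_insp):
    """Map a tracked catheter position p (3-vector) into the static roadmap.
    r in [0, 1] is the normalized respiratory signal (0 = end-expiration);
    (A_insp, t_insp) is the affine transform at full inspiration."""
    # Interpolate the affine model between identity (r=0) and inspiration (r=1),
    # then invert the modelled respiratory displacement.
    A = np.eye(3) + r * (A_insp - np.eye(3))
    t = r * t_insp
    return np.linalg.solve(A, p - t)
```

With a pure 9 mm translational displacement at full inspiration, for example, a catheter reading taken at r = 1 is shifted back by 9 mm before being drawn on the static roadmap, which mirrors the order-of-magnitude residual reduction reported above.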
Registration of high-resolution 3D atrial images with electroanatomical cardiac mapping: evaluation of registration methodology
Yiyong Sun, Fred S. Azar, Chenyang Xu, et al.
Registration of atrial high-resolution CT and MR images with a cardiac mapping system can provide real-time electrical activation information, catheter tracking, and recording of lesion position. The cardiac mapping and navigation system comprises a miniature passive magnetic field sensor, an external ultralow magnetic field emitter (location pad), and a processing unit (CARTO, Biosense Webster). We developed a progressive methodology for both interactively and automatically registering high-resolution 3D atrial images (MR or CT) with the corresponding electrophysiological (EP) points of 3D electro-anatomical (EA) maps. This methodology consists of four types of registration algorithms ranging from landmark-based to surface-based registration. We evaluated the methodology through phantom and patient studies. In the phantom study, we obtain a CT scan of a transparent heart phantom, and then use the CARTO system to visually pick a number of points inside the transparent phantom. After segmenting the atrium into a 3D surface, we register it to the measured EA map. The results are compared to the manual EA point measurements. In the 13-patient study, the four types of registrations are evaluated: visual alignment, landmark registration (three EA points are used), surface-based registration (all EA points are used), and local surface-based registration (a subset of the EA points is used, and one specific point is given a higher weight for a better “local registration”). Surface-based registration proves to be clearly superior to visual alignment. This new registration methodology may help in creating a novel and more visually interactive workflow for EP procedures, with more accurate EA map acquisitions. This may improve the ablation accuracy in atrial fibrillation (AFib) procedures, decrease the dependency on fluoroscopy, and also lead to less radiation delivered to the patient.
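The landmark registration step (aligning corresponding EA and image points) can be illustrated with the standard least-squares rigid alignment (Kabsch algorithm). This is a generic sketch, not the CARTO implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with R @ src_i + t ~ dst_i.
    src, dst: (N, 3) arrays of corresponding landmark points, N >= 3."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    return R, t
```

With only three landmarks the fit is exact up to digitization error; the surface-based stages described above refine this initial estimate using all EA points against the segmented atrial surface.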
Catheter based calibration for augmented reality guidance of cardiac thermo-ablation procedures
Stijn De Buck, Frederik Maes, Joris Ector, et al.
Introducing a patient-specific model in an augmented reality environment for cardiac thermo-ablation procedures requires a calibration strategy that is sufficiently accurate and easily applicable without compromising the image quality that is conventionally expected. We present a two-step calibration method and registration strategy which can satisfy these requirements. Relying only on catheter electrode correspondences and knowledge of the rotation between the fluoroscopes, the method retrieves both the parameters of the fluoroscopic devices, including non-linear distortion effects, and the electrode positions. Registration can subsequently be performed by visually matching the pre-operative model to the fluoroscopic images inside the augmented reality framework. Simulations under real-life conditions and validation on real images show an accuracy of 5 pixels, which is equivalent to <2 mm in world coordinates. Validation experiments on real images, with a calibration jig as gold standard, show a mean reconstruction accuracy of 1.33 mm and a mean image-plane error of 5.48 pixels.
A C++ framework for creating tissue specific segmentation-pipelines
Bernhard Pfeifer, Friedrich Hanser, Michael Seger, et al.
For a clinical application of the inverse problem of electrocardiography, flexible and fast generation of a patient's volume conductor model is essential. The volume conductor model includes compartments like chest, lungs, ventricles, atria and the associated blood masses. It is a challenging task to create an automatic or semi-automatic segmentation procedure for each compartment. For the extraction of the lungs, as one example, a region growing algorithm can be used; to extract the blood masses of the ventricles, Active Appearance Models may succeed; and to construct the atrial myocardium, a multiplicity of operations is necessary. These examples illustrate that there is no common method, like a least common denominator, that will succeed for all compartments. Another problem is the automation of combining different methods into a segmentation pipeline in order to extract a compartment and, accordingly, the desired model - in our case the complete volume conductor model for estimating the spread of electrical excitation in the patient's heart. On account of this, we developed a C++ framework and a special application with the goal of creating tissue-specific segmentation pipelines. The C++ framework uses different standard frameworks like DCMTK for handling medical images (http://dicom.offis.de/dcmtk.php.en), ITK (http://www.itk.org/) for some segmentation methods, and Qt (http://www.trolltech.com/) for creating user interfaces. Our Medical Segmentation Toolkit (MST) makes it possible to combine different segmentation techniques for each compartment. In addition, the framework allows the creation of user-defined compartment pipelines.
Neuro I
Virtual angiography for visualization and validation of computational fluid dynamics models of aneurysm hemodynamics
Matthew D. Ford, Gordan R. Stuhne, Hristo N. Nikolov, et al.
It has recently become possible to simulate aneurysmal blood flow dynamics in a patient-specific manner via the coupling of 3D X-ray angiography and computational fluid dynamics (CFD). Before such image-based CFD models can be used in a predictive capacity, however, it must be shown that they indeed reproduce the in vivo hemodynamic environment. Motivated by the fact that there is currently no technique for measuring complex blood velocity fields in vivo, in this paper we describe how cine X-ray angiograms may be simulated for the purpose of indirectly validating patient-specific CFD models. Mirroring the radiological procedure, a virtual angiogram is constructed by first simulating the time-varying injection of contrast agent into a previously computed patient-specific CFD model. A time-series of images is then constructed by simulating attenuation of X-rays through the simulated 3D contrast-agent flow dynamics. Virtual angiographic images and residence time maps, here derived from an image-based CFD model of a giant aneurysm, are shown to be in excellent agreement with the corresponding clinical images and maps, but only when the interaction between the quasi-steady contrast-agent injection and the pulsatile wash-out is properly accounted for. These virtual angiographic techniques therefore pave the way for validating image-based CFD models against routinely available clinical data, and also provide a means of visualizing complex, 3D blood flow dynamics in a clinically relevant manner. However, they also clearly show how the contrast-agent injection perturbs the normal blood flow dynamics, further highlighting the utility of CFD as a window into the true aneurysmal hemodynamics.
Augmented-reality-guided biopsy of a tumor near the skull base: the surgeon's experience
Georg Eggers M.D., Gunther Sudra, Sassan Ghanai, et al.
INPRES, a system for Augmented Reality, has been developed in the collaborative research center "Information Technology in Medicine - Computer- and Sensor-Aided Surgery". The system is based on see-through glasses. In extensive preclinical testing the system has proven its functionality, and tests with volunteers based on MRI imaging had been performed successfully. We report the surgeon's view of the first use of the system for AR-guided biopsy of a tumour near the skull base. Preoperative planning was performed based on CT image data. The information to be projected was the tumour volume, which was segmented from the image data. With the use of infrared cameras, the positions of patient and surgeon were tracked intraoperatively and the information on the glasses' displays was updated accordingly. The system proved its functionality under OR conditions in patient care: augmented reality information could be visualized with sufficient accuracy for the surgical task. After intraoperative calibration by the surgeon, the biopsy was acquired successfully. The advantage of see-through glasses is their flexibility. A virtual stereoscopic image can be set up wherever and whenever desired. A biopsy at a delicate location could be performed without the need for wide exposure. This means additional safety and lower operation-related morbidity for the patient. The integration of the calibration procedure of the glasses into the intraoperative workflow is of importance to the surgeon.
Computer-aided placement of deep brain stimulators: from planning to intraoperative guidance
Pierre-Francois D'Haese, Srivatsan Pallavaram, Chris Kao M.D., et al.
The long-term objective of our research is to develop a system that will automate DBS implantation procedures as much as possible. It is estimated that about 180,000 patients/year would benefit from DBS implantation, yet only 3,000 procedures are performed annually. This is because the combined expertise required to perform the procedure successfully is available only at a limited number of sites. Our goal is to transform this procedure into one that can be performed by a general neurosurgeon at a community hospital. In this work we report on our current progress toward developing a system for the computer-assisted pre-operative selection of target points and for the intra-operative adjustment of these points. The system consists of a deformable atlas of optimal target points that can be used to select the pre-operative target automatically, of an electrophysiological atlas, and of an intra-operative interface. The atlas is deformed using a rigid and then a non-rigid registration algorithm developed at our institution. Results we have obtained show that automatic prediction of target points is an achievable goal. Our results also indicate that electrophysiological information can be used to resolve structures not visible in anatomic images, thus improving both pre-operative and intra-operative guidance. Our intra-operative system has reached the stage of a working prototype that is clinically used at our institution.
Microangiographic image-guided localization of a new asymmetric stent for treatment of cerebral aneurysms
For treatment of cerebral aneurysms, the low-porosity patch-like region of a new asymmetric stent must be accurately aligned both longitudinally and rotationally to cover the aneurysm orifice. Image-guided interventions (IGI) for this task using either a high-spatial-resolution microangiographic detector (MA) or a standard x-ray image intensifier (XII) are compared. The MA is a custom-built phosphor-fiberoptic-CCD x-ray detector; the MA array is 1024 x 1024 with 43-micron pixels. We designed an experimental simulation of the IGI which involved localization using a combination of a computer-controlled rotational stage supported on a linear traverse. A catheter containing the asymmetric stent with special gold markers was positioned near the aneurysm of a vessel phantom contained in a flow loop to enable contrast injection for creation of roadmap images. We used four different configurations for the markers, consisting of dots and lines. The true stent alignment, obtained by direct visual viewing, was determined to better than one degree of rotational accuracy. The resultant IGI localization accuracy under radiographic control with the microangiographic detector was 4° compared to 12° for the XII. In general the line markers performed better than the dot markers. Experimental data show that high-resolution detectors such as the MA can vastly improve the accuracy of localization and tracking of devices such as asymmetric stents. This should enable development of more effective treatment devices and interventions. (Partial support from NIH grants NS38746, NS43294, and EB002873; UB STOR, Toshiba MSC, and Guidant Corp.)
Automated skull tracking for the CyberKnife image-guided radiosurgery system
Dongshan Fu, Gopinath Kuduvalli, Vladimir Mitrovic, et al.
We have developed an automated skull tracking method to perform near real-time patient alignment and position correction during CyberKnife image-guided intracranial radiosurgery. Digitally reconstructed radiographs (DRRs) are first generated offline from a CT study before treatment, and are used as reference images for the patient position. Two orthogonal projection X-ray images are then acquired at the time of patient alignment or treatment. Multi-phase registration is used to register the DRRs with the X-ray images. The registration in each projection is carried out independently; the results are then combined and converted to a 3-D rigid transformation. The in-plane transformation and the out-of-plane rotations are estimated using different search methods including multi-resolution matching, steepest descent minimization and one-dimensional search. Two similarity measure methods, optimized pattern intensity and sum of squared difference (SSD), are applied at different search phases to optimize both accuracy and computation speed. Experiments on an anthropomorphic skull phantom showed that the tracking accuracy (RMS error) is better than 0.3 mm for each translation and better than 0.3 degrees for each rotation, and the targeting accuracy (clinically relevant accuracy) tested with the CyberKnife system is better than 1 mm. The computation time required for the tracking algorithm is within a few seconds.
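The sum-of-squared-difference measure and a one-dimensional search phase can be sketched generically; this toy version searches only a horizontal in-plane shift, whereas the actual system estimates full 3-D rigid transformations across multiple search phases:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared intensity differences between two images."""
    return float(np.sum((a.astype(float) - b.astype(float)) ** 2))

def search_shift(drr, xray, max_shift=8):
    """1-D in-plane search: the horizontal shift of the DRR minimizing SSD."""
    return min(range(-max_shift, max_shift + 1),
               key=lambda s: ssd(np.roll(drr, s, axis=1), xray))
```

SSD is cheap to evaluate, which is why it suits the later fine-search phases; a multi-resolution strategy would first run the same search on downsampled images to narrow the range.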
Ultrasound
X-IVUS: integrated x-ray and IVUS system for the Cathlab
Percutaneous Transluminal Coronary Angioplasty is currently the preferred method for coronary artery disease treatment. Angiograms depict residual lumen, but lack information about plaque characteristics and exact geometry. During instrument positioning, intracoronary characterization at the current instrument location is desirable. By pulling back an intravascular ultrasound (IVUS) probe through a stenosis, cross-sections of the artery are acquired. These images can provide the desired characterization if they are properly registered to diagnostic angiograms or interventional fluoroscopies. The method we propose acquires fluoroscopy frames at the beginning, end, and optionally during a constant-speed pullback. The IVUS probe is localized and registered to previously acquired angiograms using a compensation algorithm for heartbeat and respiration. Then, for each heart phase, the pullback path is interpolated and the corresponding IVUS frames are positioned. During the intervention the instrument is localized and registered onto the pullback path. Thus, each IVUS frame can be registered with a position on an angiogram or to an instrument location, and during subsequent steps of the intervention the appropriate IVUS frames can be displayed as if an IVUS probe were present at the instrument position. The method was tested using a phantom featuring respiratory and contraction movement and an automatic pullback with constant speed. The IVUS acquisition was replaced by fibre optics and the phantom was imaged in angiographic and fluoroscopic modes. The study showed that for the phantom case it is indeed possible to register the IVUS cross-sections to the interventional instrument positions to an accuracy of less than 2 mm.
Quantifying brain shift during neurosurgery using spatially tracked ultrasound
Brain shift during neurosurgery currently limits the effectiveness of stereotactic guidance systems that rely on preoperative image modalities like magnetic resonance (MR). The authors propose a process for quantifying intraoperative brain shift using spatially tracked freehand intraoperative ultrasound (iUS). First, one segments a distinct feature from the preoperative MR (tumor, ventricle, cyst, or falx) and extracts a faceted surface using the marching cubes algorithm. Planar contours are then semi-automatically segmented from two sets of iUS b-planes obtained (a) prior to the dural opening and (b) after the dural opening. These two sets of contours are reconstructed in the reference frame of the MR, composing two distinct sparsely sampled surface descriptions of the same feature segmented from the MR. Using the Iterative Closest Point (ICP) algorithm with point-to-surface matching, one obtains discrete estimates of the feature deformation. Vector subtraction of the matched points can then be used as sparse deformation data input for inverse biomechanical brain tissue models. The results of these simulations are then used to modify the pre-operative MR to account for intraoperative changes. The proposed process has undergone preliminary evaluation in a phantom study and was applied to data from two clinical cases. In the phantom study, the process recovered controlled deformations with an RMS error of 1.1 mm. These results also suggest that clinical accuracy would be on the order of 1-2 mm. This finding is consistent with prior work by the Dartmouth Image-Guided Neurosurgery (IGNS) group. In the clinical cases, the deformations obtained were used to produce qualitatively reasonable updated guidance volumes.
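The matching and vector-subtraction step can be approximated by closest-vertex matching, as a simplified stand-in for the full point-to-surface ICP described above (which would match against triangle faces, not just vertices):

```python
import numpy as np

def deformation_vectors(contour_pts, surface_pts):
    """For each intraoperative contour point, find the closest preoperative
    surface vertex and return the displacement (deformation) vectors.
    contour_pts: (m, 3); surface_pts: (n, 3)."""
    # Pairwise distances via broadcasting: (m, n)
    d = np.linalg.norm(contour_pts[:, None, :] - surface_pts[None, :, :], axis=2)
    nearest = surface_pts[np.argmin(d, axis=1)]
    return contour_pts - nearest
```

The resulting sparse displacement field is exactly the kind of boundary data an inverse biomechanical model can assimilate to update the full preoperative volume.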
Ultrasound-based navigation for minimally invasive surgical atrial fibrillation treatment: workflow and application prototype
Mark Hastenteufel, Siwei Yang, Carsten Christoph, et al.
Atrial fibrillation (AF) is the most common arrhythmia and results in an increased risk of ischemic stroke. Recently, a European consortium developed a new minimally invasive device for surgical AF treatment. It consists of a micro-robot holding an end-effector called the "umbrella", which contains 22 radiofrequency-powered electrodes. Surgery with this new device can only be performed with an appropriate navigation technique. We have therefore developed an image-based navigation workflow and a prototype navigation application. First, a navigation workflow including an appropriate intraoperative imaging modality was defined; intraoperative ultrasound became the imaging modality of choice. Once the umbrella is unfolded inside the left atrium, data is acquired and segmented. Using a reliable communication protocol, mobility values are transferred from the control software to the navigation system. A deformation model predicts the behavior of the umbrella during repositioning. Prior to surgery, the desired ablation lines can be planned interactively, and the ablation lines actually created are visualized during surgery. Several in-vitro tests were performed. The navigation prototype has been successfully integrated and tested within the overall system. Image acquisitions of the umbrella showed the feasibility of the navigation procedure. Further in-vitro and in-vivo tests are currently being performed to make the new device and the described navigation procedure ready for clinical use.
Acoustic radiation force impulse imaging for real-time observation of lesion development during radiofrequency ablation procedures
When performing radiofrequency ablation (RFA) procedures, physicians currently have little or no feedback concerning the success of the treatment until follow-up assessments are made days to weeks later. To be successful, RFA must induce a thermal lesion of sufficient volume to completely destroy a target tumor or completely isolate an aberrant cardiac pathway. Although ultrasound, computed tomography (CT), and CT-based fluoroscopy have found use in guiding RFA treatments, they are deficient in giving accurate assessments of lesion size or boundaries during procedures. As induced thermal lesion size can vary considerably from patient to patient, the current lack of real-time feedback during RFA procedures is troublesome. We have developed a technique for real-time monitoring of thermal lesion size during RFA procedures utilizing acoustic radiation force impulse (ARFI) imaging. In both ex vivo and in vivo tissues, ARFI imaging provided better thermal lesion contrast and better overall appreciation for lesion size and boundaries relative to conventional sonography. The thermal safety of ARFI imaging for use at clinically realistic depths was also verified through the use of finite element method models. As ARFI imaging is implemented entirely on a diagnostic ultrasound scanner, it is a convenient, inexpensive, and promising modality for monitoring RFA procedures in vivo.
Automated seed localization for intraoperative prostate brachytherapy based on 3D line segment patterns
Mingyue Ding, Zhouping Wei, Donal B. Downey, et al.
Transrectal ultrasound (TRUS)-guided brachytherapy is a treatment option for localized prostate cancer, in which 125I or 103Pd radioactive seeds are implanted into the prostate. In this procedure, automated seed localization is important for intra-operative evaluation of dose delivery, which permits the identification of under-dosed regions and remedial seed placement, and ensures that the entire prostate receives the prescribed dose. In this paper, we describe the development of an automated seed segmentation method for use with 3D TRUS images. It is composed of five steps: 1) 3D needle segmentation; 2) volume cropping along the detected needle; 3) non-seed structure removal based on tri-bar model projection; 4) seed candidate recognition using 3D line segment detection; and 5) localization of seed positions. Experiments with agar and chicken phantom images demonstrated that our method could segment 93% of the seeds in the 3D TRUS images, with a mean distance error of 1.0 mm in the agar phantom and 1.7 mm in the chicken phantom, both with respect to manually segmented seed positions. The false positive rate was 7%, while the segmentation time on a PC with dual 1.8 GHz AMD Athlon processors was 280 seconds.
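The evaluation reported above (mean distance error and false positive rate against manually segmented seeds) can be sketched with a simple greedy nearest-neighbour matching; the matching strategy and the tolerance parameter are assumptions, not taken from the paper:

```python
import numpy as np

def evaluate_seeds(detected, manual, tol=3.0):
    """Greedy nearest-neighbour matching of detected seed centroids to
    manually identified positions (all coordinates in mm).

    Returns the mean distance error over matched seeds and the
    false-positive rate, the two figures of merit quoted in the text.
    """
    detected = [np.asarray(p, float) for p in detected]
    manual = [np.asarray(p, float) for p in manual]
    errors, false_pos = [], 0
    unmatched = list(range(len(manual)))
    for d in detected:
        if not unmatched:
            false_pos += 1                 # nothing left to match against
            continue
        dists = [np.linalg.norm(d - manual[j]) for j in unmatched]
        k = int(np.argmin(dists))
        if dists[k] <= tol:
            errors.append(dists[k])        # matched within tolerance
            unmatched.pop(k)
        else:
            false_pos += 1                 # too far from any manual seed
    mean_err = float(np.mean(errors)) if errors else float("nan")
    fp_rate = false_pos / max(len(detected), 1)
    return mean_err, fp_rate
```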
Semi-automatic staging system for rectal cancer using spatially oriented unwrapped endorectal ultrasound
Endorectal ultrasound (ERUS) is currently the gold standard for the staging of rectal cancer; however, accurate staging of the disease requires extensive training and is difficult, especially for clinicians who do not see a large number of patients per year. There is therefore a need for a semi-automatic staging system to assist clinicians in the accurate staging of rectal cancer. We believe that unwrapping the circular ERUS images captured by a spatially tracked ERUS system is a step in this direction. The steps by which a 2D image can be unwrapped are described, thereby allowing the circular layers of the rectal wall to be displayed as flat layers stacked on top of each other. We test the unwrapping process using images from a cylindrical rectal phantom and a human rectum. The process of unwrapping endorectal ultrasound images qualitatively provides good visualization of the layers of the rectal wall and rectal tumors, and supports the continued study of this novel staging system.
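The unwrapping described above is essentially a polar-to-Cartesian resampling about the probe centre, so concentric wall layers become flat horizontal bands. A minimal sketch using nearest-neighbour sampling (the centre, sampling density, and interpolation scheme are assumptions):

```python
import numpy as np

def unwrap_erus(image, center, n_radii, n_angles):
    """Unwrap a circular endorectal ultrasound image into a rectangular
    (angle, radius) map so the concentric rectal-wall layers appear as
    flat stacked bands. Nearest-neighbour sampling for simplicity."""
    cy, cx = center
    radii = np.arange(n_radii)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    ys = cy + radii[None, :] * np.sin(angles[:, None])
    xs = cx + radii[None, :] * np.cos(angles[:, None])
    yi = np.clip(np.rint(ys).astype(int), 0, image.shape[0] - 1)
    xi = np.clip(np.rint(xs).astype(int), 0, image.shape[1] - 1)
    return image[yi, xi]    # rows: angle, columns: radius
```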
Visualization
icon_mobile_dropdown
Comprehensive combined visualization of anatomy and hemodynamics
Ursula Kose, Kees P. Visser, Cathy L. Tryon, et al.
In recent years, the assessment of patient-specific hemodynamic information of the cardiovascular system has become an important issue. It is believed that this information will improve the diagnosis and treatment of cardiovascular diseases. Realistic patient geometries and flow velocities acquired from image data can nowadays be used as input for computational fluid dynamics (CFD) simulations of the blood flow through the cardiovascular system. Results obtained from these simulations have to be comprehensively visualized so that the physician can understand them and draw diagnostic and/or therapeutic conclusions. The aim of the research reported in this paper is to provide methods for the combined comprehensive visualization of the anatomical information segmented from image data with the hemodynamic information acquired by CFD simulations based on these image data. Several methods are known for the visualization of the blood flow velocity, e.g. flow streamlines, particle traces or simple cut planes through the vessel with a color-coded overlay of the flow velocity. To make these flow visualizations more understandable for the physician, we have developed methods to generate combined visualizations of the simulated blood flow velocity and the patient’s anatomy segmented from the image data. First results of these methods show that the perception of CFD simulation results of blood flow is much better when it is combined with anatomical information of surrounding structures. Physicians reacted very enthusiastically during presentations of results of our new visualization methods. Results will be demonstrated at the conference.
Enhancing direct volume visualization using perceptual properties
Direct volume rendering (DVR) is a visualisation technique allowing users to create 2-D renditions from 3-D spatial datasets. This technique can assist medical users in both diagnosis and therapy planning. Currently users of such visualisation systems have limited means of selecting visualisation parameters to enhance important regions of interest (ROI). We propose a modification to 3-D texture-based volume rendering allowing users to visually enhance important regions, while retaining contextual information. Using a series of interleaved region slices, the algorithm assigns a different transfer function to the ROI and context. Knowledge about the human visual system is used to modify the two transfer functions creating "pop-out" effects. This approach is demonstrated using the perceptual characteristics of luminance and hue. The output of this research is the new ability for users to precisely control the highlighting of regions of interest and hence improve the visualisation process.
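The two-transfer-function idea above can be sketched as a pair of lookup tables applied inside and outside the ROI, for example a bright, saturated LUT for the ROI against a dim, desaturated context, so the region "pops out". The 256-entry RGBA LUT shapes and the binary mask interface are assumptions for illustration:

```python
import numpy as np

def apply_dual_tf(volume, roi_mask, tf_roi, tf_context):
    """Map scalar voxel values through two different 256-entry RGBA
    transfer functions: one inside the region of interest and one for
    the surrounding context, giving the 'pop-out' effect."""
    v = volume.astype(np.uint8)
    out = tf_context[v]                    # context colours everywhere...
    out[roi_mask] = tf_roi[v[roi_mask]]    # ...overridden inside the ROI
    return out

# Example LUTs exploiting luminance and hue: bright yellow ROI, dim grey context.
ramp = np.linspace(0.0, 1.0, 256)
tf_roi = np.stack([ramp, ramp, np.zeros(256), ramp], axis=1)
tf_context = np.stack([0.3 * ramp, 0.3 * ramp, 0.3 * ramp, 0.3 * ramp], axis=1)
```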
Improving the visualization of 3D ultrasound data with 3D filtering
Vijay Shamdasani, Unmin Bae, Ravi Managuli, et al.
3D ultrasound imaging is quickly gaining widespread clinical acceptance as a visualization tool that allows clinicians to obtain unique views not available with traditional 2D ultrasound imaging and an accurate understanding of patient anatomy. The ability to acquire, manipulate and interact with the 3D data in real time is an important feature of 3D ultrasound imaging. Volume rendering is often used to transform the 3D volume into 2D images for visualization. Unlike computed tomography (CT) and magnetic resonance imaging (MRI), volume rendering of 3D ultrasound data creates noisy images in which surfaces cannot be readily discerned due to speckles and low signal-to-noise ratio. The degrading effect of speckles is especially severe when gradient shading is performed to add depth cues to the image. Several researchers have reported that smoothing the pre-rendered volume with a 3D convolution kernel, such as 5x5x5, can significantly improve the image quality, but at the cost of decreased resolution. In this paper, we have analyzed the reasons for the improvement in image quality with 3D filtering and determined that the improvement is due to two effects. The filtering reduces speckles in the volume data, which leads to (1) more accurate gradient computation and better shading and (2) decreased noise during compositing. We have found that applying a moderate-size smoothing kernel (e.g., 7x7x7) to the volume data before gradient computation combined with some smoothing of the volume data (e.g., with a 3x3x3 lowpass filter) before compositing yielded images with good depth perception and no appreciable loss in resolution. Providing the clinician with the flexibility to control both of these effects (i.e., shading and compositing) independently could improve the visualization of the 3D ultrasound data. 
Introducing this flexibility into the ultrasound machine requires 3D filtering to be performed twice on the volume data, once before gradient computation and again before compositing. 3D filtering of an ultrasound volume containing millions of voxels requires a large amount of computation, and doing it twice decreases the number of frames that can be visualized per second. To address this, we have developed several techniques to make computation efficient. For example, we have used the moving average method to filter a 128x128x128 volume with a 3x3x3 boxcar kernel in 17 ms on a single MAP processor running at 400 MHz. The same methods reduced the computing time on a Pentium 4 running at 3 GHz from 110 ms to 62 ms. We believe that our proposed method can improve 3D ultrasound visualization without sacrificing resolution and incurring an excessive computing time.
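The moving average method mentioned above can be sketched with running sums along each axis, which make the per-voxel cost independent of the kernel size and is what makes filtering the volume twice affordable. This is an illustrative implementation, not the authors' MAP-processor code; edge replication and the odd-kernel requirement are assumptions:

```python
import numpy as np

def boxcar3d(vol, k=3):
    """Separable 3-D boxcar (moving-average) filter via running sums.

    Cumulative sums turn each 1-D window sum into a single subtraction,
    so the cost per voxel does not grow with k. Edges are replicated;
    k must be odd."""
    out = vol.astype(np.float64)
    r = k // 2
    for axis in range(3):
        pad = [(0, 0)] * 3
        pad[axis] = (r, r)
        a = np.pad(out, pad, mode="edge")
        c = np.cumsum(a, axis=axis)
        n = a.shape[axis]
        # Window sum at i is c[i+k-1] - c[i-1] (with c[-1] = 0).
        w = c.take(range(k - 1, n), axis=axis).copy()
        sl = [slice(None)] * 3
        sl[axis] = slice(1, None)
        w[tuple(sl)] -= c.take(range(0, n - k), axis=axis)
        out = w / k
    return out
```

Applied once with a larger kernel before gradient computation and once with a small kernel before compositing, this reproduces the two independent smoothing stages discussed above.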
Efficient visualization of volume data sets with region of interest and wavelets
Sebastien Piccand, Rita Noumeir, Eric Paquette
The growing volume of medical images acquired with new imaging modalities poses big challenges to the radiologist's interpretation process. Innovative image visualization techniques can play a major role in enabling efficient and accurate information presentation and navigation, by combining computational efficiency with diagnostic resolution. Efficiency and resolution, two opposing requirements, can be accomplished by focusing on full resolution regions of interest while maintaining sufficient contextual information. In fact, structures of interest typically occupy a small percentage of the data, but their analysis requires context information like locations within a specific organ or adjacency to sensitive structures. We propose a 3D visualization technique that is based on the multi-resolution property of the wavelet transform in order to display a full resolution region of interest while displaying a coarser context to achieve efficiency in rendering during the exploratory navigation phase. A full resolution context can also be rendered when needed for a specific view. In a preprocessing stage the data is decomposed with a three-dimensional wavelet transform. The interactive visualization process then uses the wavelet representation and a user-specified region to render a full resolution region of interest and a coarser context directly from the wavelet space through wavelet splatting, thus avoiding volume reconstruction. This efficient rendering approach is combined with lighting calculations in the preprocessing stage. While greatly enhancing depth perception and the perception of object shape, lighting adds no cost to the interactive visualization process, resulting in a good compromise between computational efficiency and image quality.
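The coarse-context idea above can be illustrated with one level of the 3-D Haar wavelet transform, whose approximation band is simply the mean of each 2x2x2 cell; compositing the full-resolution ROI over an upsampled context is an assumed stand-in for the paper's wavelet splatting:

```python
import numpy as np

def haar_approx(vol):
    """One level of the 3-D Haar transform: the approximation band is
    the block mean of each 2x2x2 cell. Dimensions assumed even."""
    z, y, x = vol.shape
    return vol.reshape(z // 2, 2, y // 2, 2, x // 2, 2).mean(axis=(1, 3, 5))

def roi_with_context(vol, roi_slices):
    """Upsample the coarse context back to full size and paste the
    full-resolution region of interest over it (an illustrative
    compositing scheme, not wavelet splatting itself)."""
    coarse = haar_approx(vol)
    context = np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2)
    out = context.copy()
    out[roi_slices] = vol[roi_slices]
    return out
```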
MRI visualization of pathological forms by suppression of normal tissue signals
Yuri A. Pirogov, Nikolai V. Anisimov, Leonid V. Gubskiy, et al.
To improve the visualization and 3D reconstruction of certain pathological formations in the brain, we propose a new method of MR image processing that suppresses signals from normal tissues. Particular attention is given to suppressing the signals of fatty tissue, free water, and the partially bound water of mucous membranes. To this end, two scans are acquired, each simultaneously suppressing two normal components, and the resulting images are multiplied. Simultaneous suppression of the signals from two normal tissues is achieved with a pulse sequence that applies the inversion-recovery effect twice; the delays in the pulse sequence are chosen according to the longitudinal relaxation times of fat, free water, and partially bound water. Compared with the previously described technique of simultaneous water and fat suppression, the new method is especially useful for examining pathological formations whose lesion zone lies in the region of the nasal sinuses. Besides isolating the lesion zone, MIP reconstruction becomes simpler. The proposed technique has proven effective in studies of tumors and hemorrhages.
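The choice of delays above follows from the standard inversion-recovery signal equations; the relations below are textbook results offered as illustration, not the authors' exact sequence timing:

```python
import numpy as np

def null_ti(t1):
    """Single inversion-recovery null point: Mz(TI) = M0 (1 - 2 exp(-TI/T1))
    crosses zero at TI = T1 ln 2."""
    return t1 * np.log(2.0)

def dual_ir_residual(ti1, ti2, t1):
    """Longitudinal magnetization at readout after a double inversion with
    inter-pulse delay ti1 and final delay ti2 (full relaxation assumed
    before the first pulse). Choosing (ti1, ti2) that zero this expression
    for two different T1 values nulls both tissues simultaneously."""
    return 1.0 - 2.0 * np.exp(-ti2 / t1) + 2.0 * np.exp(-(ti1 + ti2) / t1)
```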
Neuro II
icon_mobile_dropdown
MR and CT image fusion of the cervical spine: a noninvasive alternative to CT-myelography
Yangqiu Hu, Sohail K. Mirza, Jeffrey G. Jarvik, et al.
CT-Myelography (CTM) is routinely used for planning surgery for degenerative disease of the spine, but its invasive nature, significant potential morbidity, and high costs make a noninvasive substitute desirable. We report our work on evaluating CT and MR image fusion as an alternative to CTM. Because the spine is only piecewise rigid, a multi-rigid approach to the registration of spinal CT and MR images was developed (SPIE 2004), in which the spine on CT images is first segmented into separate vertebrae, each of which is then rigidly registered with the corresponding vertebra on MR images. The results are then blended to obtain fusion images. Since they contain information from both modalities, we hypothesized that fusion images would be equivalent to CTM. To test this we selected 34 patients who had undergone MRI and CTM for degenerative disease of the cervical spine, and used the multi-rigid approach to produce fused images. A clinical vignette for each patient was created and presented along with either CT/MR fusion images or CTM images. A group of spine surgeons is asked to formulate detailed surgical plans based on each set of images, and the surgical plans are compared. A similar study assessing diagnostic agreement is being performed with neuroradiologists, who also assess the accuracy of registration. Our work to date has demonstrated the feasibility of segmentation and multi-rigid fusion in clinical cases and the acceptability of the questionnaire to physicians. Preliminary analysis of one surgeon's and one neuroradiologist's evaluation has been performed.
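Each per-vertebra step of the multi-rigid approach is an ordinary rigid registration. Given corresponding landmark points on one vertebra in CT and MR, the least-squares rigid transform has a closed form (the Kabsch algorithm); this is a standard sketch under the assumption of known correspondences, not the authors' registration method:

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    via the Kabsch algorithm. In a multi-rigid scheme one such transform
    would be estimated per segmented vertebra and the results blended."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    h = (src - cs).T @ (dst - cd)            # cross-covariance of centred points
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = cd - r @ cs
    return r, t
```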
Surface smoothing and template partitioning for cranial implant CAD
Employing patient-specific prefabricated implants can be an effective treatment for large cranial defects (i.e., > 25 cm2). We have previously demonstrated the use of Computer Aided Design (CAD) software that starts with the patient’s 3D head CT-scan. A template is accurately matched to the pre-detected skull defect margin. For unilateral cranial defects the template is derived from a left-to-right mirrored skull image. However, two problems arise: (1) slice edge artifacts generated during isosurface polygonalization are inherited by the final implant; and (2) partitioning (i.e., cookie-cutting) the implant surface from the mirrored skull image usually results in curvature discontinuities across the interface between the patient’s defect and the implant. To solve these problems, we introduce a novel space curve-to-surface partitioning algorithm following a ray-casting surface re-sampling and smoothing procedure. Specifically, the ray-cast re-sampling is followed by bilinear interpolation and low-pass filtering. The resulting surface has a highly regular grid-like topological structure of quadrilaterally arranged triangles. Then, we replace the regions to be partitioned with predefined sets of triangular elements thereby cutting the template surface to accurately fit the defect margin at high resolution and without surface curvature discontinuities. Comparisons of the CAD implants for five patients against the manually generated implant that the patient actually received show an average implant-patient gap of 0.45mm for the former and 2.96mm for the latter. Also, average maximum normalized curvature of interfacing surfaces was found to be smoother, 0.043, for the former than the latter, 0.097. This indicates that the CAD implants would provide a significantly better fit.
Hardware-accelerated glyph based visualization of major white matter tracts for analysis of brain tumors
F. Enders, S. Iserhardt-Bauer, P. Hastreiter, et al.
Visualizing diffusion tensor imaging data has recently gained increasing importance. The data is of particular interest for neurosurgeons since it allows analyzing the location and topology of major white matter tracts such as the pyramidal tract. Various approaches such as fractional anisotropy, fiber tracking and glyphs have been introduced but many of them suffer from ambiguous representations of important tract systems and the related anatomy. Furthermore, there is no information about the reliability of the presented visualization. However, this information is essential for neurosurgery. This work proposes a new approach of glyph visualization accelerated with consumer graphics hardware showing a maximum of information contained in the data. Especially, the probability of major white matter tracts can be assessed from the shape and the color of the glyphs. Integrating direct volume rendering of the underlying anatomy based on 3D texture mapping and a special hardware accelerated clipping strategy allows more comprehensive evaluation of important tract systems in the vicinity of a tumor and provides further valuable insights. Focusing on hardware acceleration wherever possible ensures high image quality and interactivity, which is essential for clinical application. Overall, the presented approach makes diagnosis and therapy planning based on diffusion tensor data more comprehensive and allows better assessment of major white matter tracts.
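Fractional anisotropy, mentioned above as one of the established approaches, is a simple scalar derived from the diffusion tensor's eigenvalues; glyph shape and colour are typically driven by quantities like this. A standard formula, shown here as context rather than as the authors' glyph construction:

```python
import numpy as np

def fractional_anisotropy(evals):
    """Fractional anisotropy from the three eigenvalues of a diffusion
    tensor: 0 for isotropic diffusion, approaching 1 when diffusion is
    dominated by a single direction."""
    l = np.asarray(evals, float)
    m = l.mean()
    num = np.sqrt(((l - m) ** 2).sum())
    den = np.sqrt((l ** 2).sum())
    return 0.0 if den == 0 else float(np.sqrt(1.5) * num / den)
```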
Using MPEG-7 to build a human brain image database for image-guided neurosurgery
Manjeet Rege, Ming Dong, Farshad Fotouhi, et al.
Multimedia annotation is domain specific and is assigned with the help of a domain expert to semantically enrich the data. These annotations are used for not only retrieval tasks but also to answer domain specific complex queries. To accomplish this, we propose to use MPEG-7 to annotate medical images and capture semantic information. In particular, we discuss the MPEG-7 based annotations for images of a human brain. Using MPEG-7, human brain images can be represented in an XML format. This MPEG-7 based XML file can be used to store the semantic medical information along with the low level features of the image. We also present the database design to store and query the patient images for image-guided neurosurgery.
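An MPEG-7 description is XML, so an annotation of the kind described above can be generated programmatically. The sketch below uses a simplified, illustrative subset of element names; the real MPEG-7 schema is namespaced and far richer, and the tags and attributes here are assumptions:

```python
import xml.etree.ElementTree as ET

def annotate_brain_image(image_uri, structure, finding):
    """Build a minimal MPEG-7-style XML description for a brain image,
    pairing a media locator with domain-expert text annotations."""
    root = ET.Element("Mpeg7")
    desc = ET.SubElement(root, "Description")
    mm = ET.SubElement(desc, "MultimediaContent")
    img = ET.SubElement(mm, "Image")
    ET.SubElement(img, "MediaLocator").text = image_uri
    ann = ET.SubElement(img, "TextAnnotation")
    ET.SubElement(ann, "StructuredAnnotation", type="anatomy").text = structure
    ET.SubElement(ann, "StructuredAnnotation", type="finding").text = finding
    return ET.tostring(root, encoding="unicode")
```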
Update of diagnostic preoperative images using low-field interventional MRI for navigation in neurosurgery: rigid-body registration
This study looks into the rigid-body registration of pre-operative anatomical high field and interventional low field magnetic resonance images (MRI). The accurate 3D registration of these modalities is required to enhance the content of interventional images with anatomical (CT, high field MRI, DTI), functional (DWI, fMRI, PWI), metabolic (PET) or angiography (CTA, MRA) pre-operative images. The specific design of the interventional MRI scanner used in the present study, a PoleStar N20, induces image artifacts, such as ellipsoidal masking and intensity inhomogeneities, which affect registration performance. On MRI data from eleven patients, who underwent resection of a brain tumor, we quantitatively evaluated the effects of artifacts in the image registration process based on a normalized mutual information (NMI) metric criterion. The results show that the quality of alignment of pre-operative anatomical and interventional images strongly depends on pre-processing carried out prior to registration. The registration results scored the highest in visual evaluation only if intensity variations and masking were considered in image registration. We conclude that the alignment of anatomical high field MRI and PoleStar interventional images is the most accurate when the PoleStar's induced image artifacts are corrected for before registration.
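The normalized mutual information criterion used above can be computed from a joint grey-level histogram. A minimal sketch using the common definition NMI = (H(A) + H(B)) / H(A, B) (the bin count and this particular normalization are assumptions; the paper does not specify its exact formulation):

```python
import numpy as np

def nmi(a, b, bins=32):
    """Normalized mutual information from a joint grey-level histogram:
    identical images give 2, independent ones approach 1."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = hist / hist.sum()
    pa, pb = p.sum(1), p.sum(0)                         # marginal distributions
    ha = -np.sum(pa[pa > 0] * np.log(pa[pa > 0]))       # H(A)
    hb = -np.sum(pb[pb > 0] * np.log(pb[pb > 0]))       # H(B)
    hab = -np.sum(p[p > 0] * np.log(p[p > 0]))          # H(A, B)
    return (ha + hb) / hab
```

Masking out the ellipsoidal borders and correcting intensity inhomogeneities before building the histogram is exactly the kind of pre-processing the study finds decisive.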
Cerebral vessel visualization by patient motion correction in three-dimensional CT angiography
To detect cerebral aneurysms, arterial stenoses, and other vascular anomalies in brain CT angiography, we propose a novel technique for cerebral vessel visualization based on patient motion correction. Our method has the following steps. First, a set of feature points within the skull base is selected using a 3D edge detection technique. Second, a locally weighted 3D distance map is constructed to lead our similarity measure to robust convergence on the maximum value. Third, the similarity measure between feature points is evaluated repeatedly by selective cross-correlation (SCC). Fourth, 3D bone-vessel masking and subtraction is performed to completely remove bone. Our method has been successfully applied to datasets from five different patients with intracranial aneurysms, obtained from a 16-slice multi-detector row CT scanner. The total processing time for each dataset was less than 20 seconds. The performance of our method was evaluated in terms of accuracy and robustness. For accuracy assessment, we show results of visual inspection in two- and three-dimensional comparisons of a conventional method and the proposed method. While the quality of the conventional method was substantially reduced by patient motion artifacts, our method preserved the quality of the original image. In particular, intracranial aneurysms were well visualized by our method. Experimental results show that our method is clinically promising in that it is little influenced by the image degradation that occurs at the bone-vessel interface. For all experimental datasets, intracranial aneurysms as well as arteries are clearly visible in the volumetric images.
Poster Session
icon_mobile_dropdown
Method of simulation and visualization of FDG metabolism based on VHP image
FDG ([18F] 2-fluoro-2-deoxy-D-glucose) is the typical tracer used in clinical PET (positron emission tomography) studies. FDG-PET is an important imaging tool for the early diagnosis and treatment of malignant tumors and functional disease. The main purpose of this work is to propose a method that represents FDG metabolism in the human body through the dynamic simulation and visualization of the 18F distribution process, based on the segmented VHP (Visible Human Project) image dataset. First, the plasma time-activity curve (PTAC) and the tissue time-activity curves (TTACs) are obtained from previous studies and the literature. According to the obtained PTAC and TTACs, a set of corresponding values is assigned to the segmented VHP image; a set of dynamic images is thus derived to show the 18F distribution in the tissues of interest for the predetermined sampling schedule. Finally, the simulated FDG distribution images are visualized in 3D and 2D formats, respectively, incorporating principal interaction functions. Compared with an original PET image, our visualization result presents higher resolution, owing to the high resolution of the VHP image data, and shows the distribution process of 18F dynamically. The results of our work can be used in education and related research, as well as a tool for PET operators to design their PET experiment programs.
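The value-assignment step above, painting each segmented tissue with its TTAC value at every sample time, can be sketched as follows. The mapping of labels to callable time-activity curves is an assumed interface for illustration:

```python
import numpy as np

def simulate_fdg_frames(label_vol, ttacs, times):
    """Paint each segmented tissue label in a VHP-style label volume with
    its tissue time-activity value at every sample time, yielding one
    simulated 18F-distribution volume per time point.

    `ttacs` maps an integer label to a function of time; unlabeled
    voxels (background) stay at zero activity."""
    frames = []
    for t in times:
        frame = np.zeros(label_vol.shape, float)
        for label, ttac in ttacs.items():
            frame[label_vol == label] = ttac(t)
        frames.append(frame)
    return np.stack(frames)
```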
Three-dimensional digital breast histopathology imaging
G. M. Clarke, C. Peressotti, G. E. Mawdsley, et al.
We have developed a digital histology imaging system that has the potential to improve the accuracy of surgical margin assessment in the treatment of breast cancer by providing finer sampling and 3D visualization. The system is capable of producing a 3D representation of histopathology from an entire lumpectomy specimen. We acquire digital photomicrographs of a stack of large (120 x 170 mm) histology slides cut serially through the entire specimen. The images are then registered and displayed in 2D and 3D. This approach dramatically improves sampling and can improve visualization of tissue structures compared to current, small-format histology. The system consists of a brightfield microscope, adapted with a freeze-frame digital video camera and a large, motorized translation stage. The image of each slide is acquired as a mosaic of adjacent tiles, each tile representing one field-of-view of the microscope, and the mosaic is assembled into a seamless composite image. The assembly is done by a program developed to build image sets at six different levels within a multiresolution pyramid. A database-linked viewing program has been created to efficiently register and display the animated stack of images, which occupies about 80 GB of disk space per lumpectomy at full resolution, on a high-resolution (3840 x 2400 pixels) colour monitor. The scanning or tiling approach to digitization is inherently susceptible to two artefacts which disrupt the composite image, and which impose more stringent requirements on system performance. Although non-uniform illumination across any one isolated tile may not be discernible, the eye readily detects this non-uniformity when the entire assembly of tiles is viewed. The pattern is caused by deficiencies in optical alignment, spectrum of the light source, or camera corrections. The imaging task requires that features as small as 3.2 µm in extent be seamlessly preserved. 
However, inadequate accuracy in positioning of the translation stage produces visible discontinuities between adjacent features. Both of these effects can distract the viewer from the perception of diagnostically important features. Here we describe the system design and discuss methods for the correction of these artefacts. In addition, we outline our approach to making the processing and display of these large images computationally feasible.
Visualization of confocal microscopic biomolecular data
Zhanping Liu, Robert J. Moorhead II
Biomolecular visualization facilitates insightful interpretation of molecular structures and complex mechanisms underlying bio-chemical processes. Effective visualization techniques are required to deal with confocal microscopic biomolecular data in which intricate structures, fine features, and obscure patterns might be overlooked without sophisticated data processing and image synthesis. This paper presents major challenges in visualizing confocal microscopic biomolecular data, followed by a survey of related work. We then introduce a case study conducted to investigate the interaction between two proteins in the budding yeast Saccharomyces cerevisiae by embedding custom modules in Amira. The multi-channel confocal microscopic volume data was first processed using an exponential operator to correct z-drop artifacts introduced during data acquisition. Channel correlation was then exploited to extract the overlap between the proteins as a new channel to represent the interaction, while a statistical method was employed to compute the intensity of interaction to locate hot spots. To take advantage of crisp surface representation of region boundaries by iso-surfaces and visually pleasing translucent delineation of dense volumes by volume rendering, we adopted hybrid rendering that incorporates these two methods to display clear-cut protein boundaries, amorphous interior materials, and the scattered interaction in the same view volume, with suppressed and highlighted parts selected by the user. The highlighted overlap helped biologists learn where the interaction happens and how it spreads, particularly when the volume was investigated in an immersive Cave Automatic Virtual Environment (CAVE) for intuitive comprehension of the data.
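The exponential z-drop correction mentioned above compensates the intensity fall-off with imaging depth. A minimal sketch, assuming the decay constant has already been estimated from the data (function name and interface are illustrative):

```python
import numpy as np

def correct_z_drop(stack, decay):
    """Compensate the exponential intensity fall-off with depth in a
    confocal stack: slice z is multiplied by exp(decay * z)."""
    z = np.arange(stack.shape[0], dtype=float)
    gain = np.exp(decay * z)[:, None, None]   # per-slice gain, broadcast over x, y
    return stack * gain
```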
Effect of grayscale resolution on the performance of lung nodule detection on a softcopy display
A four-alternative forced-choice experiment was carried out to examine the effect of 8-bit versus 10-bit grayscale resolution on the detection of subtle lung nodules on a medical grayscale liquid crystal display (LCD). Sets of four independent backgrounds from each of three regions were derived from a very low-noise X-ray acquisition of a chest phantom with an amorphous selenium radiographic detector. Simulated nodules of fixed diameter (10 mm) and varying contrast were digitally added to the centers of selected background images. Subsequently, multifrequency image processing was performed to enhance the image structures, followed by a tonescaling procedure that resulted in pixel values being specified as p-values, according to DICOM Part 14, the Grayscale Standard Display Function. To investigate the effect that grayscale resolution may have upon softcopy detectability, each set of four images in the experiment was quantized to both 8-bit and 10-bit resolution. The resulting images were displayed on a DICOM-calibrated LCD display supporting up to 10 bits of grayscale input. Twenty observers with imaging expertise performed the nodule detection task, for which the signal and location were known exactly. Results from all readers, chest regions, and backgrounds were pooled, and the difference in the fraction of correct responses between 8-bit and 10-bit resolution was tested for statistical significance. Experimental results do not demonstrate a statistically significant difference in the fraction of correct answers between these two input grayscale resolutions.
Optical mammographer with single channel detection
In this paper, we present a newly developed near-infrared optical tissue imaging system with single-channel detection, based on the principles of frequency-domain spectroscopy, which uses diffusive photons to detect breast cancer. The patient's breast is slightly compressed between two parallel glass plates, which are located between the source fiber and the detector fiber. The laser beam travels through the source fiber to the breast, and the transmitted light is detected by a photomultiplier tube and then demodulated. The AC amplitude of the signal is sampled into the computer by an A/D board. The source fiber and the detector fiber are driven by stepper motors and move synchronously in two dimensions, which enables the fibers to scan the entire breast. The scanning process is automatically controlled by the computer, and the optical mammograms are displayed on the computer screen after scanning. In comparison with our former instrument, which used multiple channels and scanned in only one dimension to shorten the scanning time, the new prototype has only one transmitter and one detector. This structure not only reduces the cost of the apparatus but also leads to a much simpler system. Unfortunately, it makes the scanning time much longer. However, a new sampling mode was developed for the system to sample the data continuously, which compensates for the disadvantage of the single-channel structure and reduces the scanning time. The results of Intralipid experiments and pre-clinical experiments prove the potential of this approach to distinguish between tumors and healthy tissue.
Evaluation and validation methods for intersubject nonrigid 3D image registration of the human brain
Ting Guo, Yves P. Starreveld, Terry M. Peters
This work presents methodologies for assessing the accuracy of non-rigid intersubject registration algorithms from both qualitative and quantitative perspectives. The first method was based on a set of 43 anatomical landmarks. MRI brain images of 12 subjects were non-rigidly registered to the standard MRI dataset. The "gold-standard" coordinates of the 43 landmarks in the target were estimated by averaging their coordinates over 6 tagging sessions. The Euclidean distance between each landmark of a subject after warping to the reference space and the homologous "gold-standard" landmark on the reference image was considered as the registration error. Another method, based on visual inspection software displaying the spatial change of colour-coded spheres before and after warping, was also developed to evaluate the performance of the non-rigid warping algorithms within the homogeneous regions of the deep brain. Our methods were exemplified by assessing and comparing the accuracy of two intersubject non-rigid registration approaches, the AtamaiWarp and ANIMAL algorithms. With the first method, the average registration error was 1.04 mm ± 0.65 mm for AtamaiWarp and 1.59 mm ± 1.47 mm for ANIMAL. With maximum registration errors of 2.78 mm and 3.90 mm respectively, AtamaiWarp and ANIMAL located 58% and 35% of the landmarks, respectively, with registration errors of less than 1 mm. A paired t-test showed that the differences in registration error between AtamaiWarp and ANIMAL were significant (P < 0.002), demonstrating that AtamaiWarp, in addition to being over 60 times faster than ANIMAL, also provides more accurate results. With the second method, both algorithms treated the interior of homogeneous regions in an appropriate manner.
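The landmark-based error metric described above reduces to Euclidean distances between warped landmarks and their gold-standard counterparts. A toy sketch with hypothetical coordinates (the real study used 43 landmarks across 12 subjects):

```python
import numpy as np

def landmark_errors(warped, gold):
    """Euclidean distance between each warped landmark and its
    gold-standard counterpart (rows are landmarks, columns x/y/z, mm)."""
    return np.linalg.norm(np.asarray(warped) - np.asarray(gold), axis=1)

# Toy example with 3 landmarks; coordinates are made up.
warped = np.array([[10.0, 0.0, 0.0], [0.0, 5.0, 0.0], [1.0, 1.0, 1.0]])
gold   = np.array([[10.0, 0.0, 1.0], [0.0, 5.0, 0.0], [1.0, 1.0, 0.0]])
errs = landmark_errors(warped, gold)
mean_err, std_err = errs.mean(), errs.std()
frac_sub_mm = np.mean(errs < 1.0)   # fraction of landmarks under 1 mm
```

The study's summary statistics (mean ± SD, maximum error, fraction below 1 mm) all follow from this per-landmark distance vector.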
Adaptive finite element technique for cutting in surgical simulation
Pre-computed finite element methods are valuable for soft tissue modeling because of their extreme speed and high accuracy, but they are not suitable for simulating surgical incisions. In this paper we present an adaptive algorithm for finite element computation based on a preprocessing approach: it inverts the global stiffness matrix in a pre-computing stage and then simulates each cutting step by iteratively updating two lists of basic components with localization techniques. This method allows fast and physically accurate simulation of incision procedures.
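The core idea of pre-computed finite element models can be sketched as follows: invert (or factor) the assembled stiffness matrix once, so each subsequent load case costs only a matrix-vector product. The matrix below is an illustrative stand-in, not a real tissue mesh, and the cutting-step updates of the actual algorithm are omitted:

```python
import numpy as np

# Small symmetric positive-definite "stiffness" matrix standing in for
# the assembled FE system (illustrative values only).
K = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
K_inv = np.linalg.inv(K)        # done once, in the pre-computing stage

def displacements(forces):
    """At run time each load case costs only a matrix-vector product."""
    return K_inv @ forces

f = np.array([1.0, 0.0, 0.0])
u = displacements(f)
residual = float(np.linalg.norm(K @ u - f))   # should be ~0
```

Topology changes from cutting invalidate parts of the precomputed inverse, which is why the paper's contribution is an incremental update scheme rather than a full re-inversion per cut.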
Quality evaluation in medical visualization: some issues and a taxonomy of methods
Among the several stages of medical imaging (acquisition, reconstruction, etc.), visualization is the last stage, and the one on which decisions are generally based. Scientific visualization tools process complex data into a visible and understandable graphical form, the goal being to provide new insight. While the evaluation of procedures is a crucial issue and a main concern in medicine, visualization techniques, predominantly in three-dimensional imaging, have paradoxically not been the subject of many evaluation studies. This is perhaps due to the fact that the visualization process involves the human visual and cognitive systems, which makes evaluation especially difficult. However, as elsewhere in medical imaging, the quality evaluation of a specific visualization remains a major challenge. While a few studies concerning specific cases have already been published, there is still a great need for the definition and systematization of evaluation methodologies. The goal of our study is to propose such a framework, which makes it possible to take into account all the parameters involved in the evaluation of a visualization technique. Concerning the problem of quality evaluation in data visualization in general, and in medical data visualization in particular, three concepts appear to be fundamental: the type and level of the components used to convey the information contained in the data to the user, the type and level at which evaluation can be performed, and the methodologies used to perform such evaluation. We propose a taxonomy involving the types of methods that can be used to perform evaluation at different levels.
Interactive pre-integrated volume rendering of medical datasets
Pre-integrated volume rendering, which produces high-quality images with less sampling, has become one of the most efficient and important techniques in the volume rendering field. In this paper, we propose an acceleration technique for pre-integrated rendering of dynamically classified volumes. Using overlapped min-max blocks, the empty-space skipping of ray casting can be applied to pre-integrated volume rendering. In addition, a new pre-integrated lookup table enables much faster rendering of high-precision data without degrading image quality. We have implemented our approaches both on consumer graphics hardware and on the CPU, and show the performance gains using several medical data sets.
Hardware-accelerated multimodality volume fusion
Helen Hong D.D.S., Juhee Bae, Heewon Kye, et al.
In this paper, we propose a novel technique for multimodality volume fusion using graphics hardware. Our 3D-texture-based volume fusion algorithm consists of three steps: First, two volumes of different modalities are loaded into texture memory on the GPU. Second, textured slices of the two volumes along the same proxy geometry are combined with various compositing functions. Third, all the composited slices are alpha-blended. We have implemented our algorithm using HLSL (High Level Shader Language). Our method provides the exact depth of each volume and realistic views at interactive rates in comparison with software-based image integration. Experimental results using MR and PET brain images and angiography with a stent show that the over compositing operation is more useful for clinical application.
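The per-slice combination step can be sketched with two simple compositing functions: a weighted blend of matching slices from the two modalities, and the standard "over" operator. The RGBA values below are hypothetical, and the actual system runs the equivalent arithmetic in an HLSL shader:

```python
import numpy as np

def composite_slices(rgba_a, rgba_b, w=0.5):
    """Weighted blend of matching textured slices from two modalities
    (one of several possible compositing functions)."""
    return w * rgba_a + (1.0 - w) * rgba_b

def over(front, back):
    """Standard 'over' operator on premultiplied RGBA arrays."""
    alpha_f = front[..., 3:4]
    return front + (1.0 - alpha_f) * back

# Two 1x1 premultiplied RGBA "slices": opaque red over opaque green.
red   = np.array([[[1.0, 0.0, 0.0, 1.0]]])
green = np.array([[[0.0, 1.0, 0.0, 1.0]]])
blended = composite_slices(red, green)   # 50/50 mix of the modalities
layered = over(red, green)               # front slice fully occludes
```

A fully opaque front slice occludes the back slice under "over", while the weighted blend always mixes both modalities, which is why the choice of compositing function matters clinically.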
Geometric modeling of space-optimal unit-cell-based tissue engineering scaffolds
Srinivasan Rajagopalan, Lichun Lu, Michael J. Yaszemski, et al.
Tissue engineering involves regenerating damaged or malfunctioning organs using cells, biomolecules, and synthetic or natural scaffolds. Based on their intended roles, scaffolds can be injected as space-fillers or preformed and implanted to provide mechanical support. Preformed scaffolds are biomimetic "trellis-like" structures which, on implantation and integration, act as tissue/organ surrogates. Customized, computer-controlled, and reproducible preformed scaffolds can be fabricated using Computer Aided Design (CAD) techniques and rapid prototyping devices. A curved, monolithic construct with minimal surface area constitutes an efficient substrate geometry that promotes cell attachment, migration, and proliferation. However, current CAD approaches do not provide such a biomorphic construct. We address this critical issue by presenting one of the very first physical realizations of minimal surfaces for the construction of efficient unit-cell-based tissue engineering scaffolds. Mask programmability and the optimal packing density of triply periodic minimal surfaces are used to construct the optimal pore geometry. Budgeted polygonization and progressive minimal surface refinement facilitate the machinability of these surfaces. The efficient stress distributions, as deduced from finite element simulations, favor the use of these scaffolds for orthopedic applications.
Accuracy assessment and implementation of an electromagnetically tracked endoscopic orbital navigation system
Optic neuropathies are historically difficult to treat because of the difficulty of reaching the optic nerve in the tight neurovascular environment of the orbit. An orbital endoscopic system is currently under development that may be able to administer treatment to the optic nerve in a significantly less invasive manner than previous surgical procedures. However, due to the tight confines of the orbital environment, this endoscopic approach has proven time consuming and tedious. By combining an orbital endoscope with a flexible electromagnetic tracking system, it may be possible to develop a quick and accurate method of locating the optic nerve: much of the guesswork involved in orbital endoscopy could be relieved by live tracking within preoperative CT scans. This project focuses on the accuracy assessment of such a tracked endoscopic system as well as its implementation in phantom and animal models. With the combined benefits of orbital endoscopy and electromagnetic tracking, this approach promises the best possible support for safe and efficient orbital surgery.
Registration and motion compensation of a needle placement robot for CT-guided spinal procedures
Sheng Xu, Kevin R. Cleary, Dan Stoianovici, et al.
Computed tomography (CT) guided needle placement is an established practice in the medical field. The efficacy of these procedures is related to the accuracy of needle placement. Current free-hand techniques have limitations in accuracy, which is often affected by patient motion. In response to these problems, and as a testbed for future developments, we propose a robotically assisted needle placement system consisting of a mobile CT scanner, a needle insertion robot, and an optical localizer. This paper presents the overall system concept and concentrates on system registration and compensation for patient motion. Accuracy results using an abdominal phantom are also presented.
Advanced PET/CT fusion workstation for oncology imaging
Cancer management using positron emission tomography (PET) imaging is rapidly expanding its role in clinical practice. The high sensitivity of PET in locating cancer can be confounded by the minimal anatomical information it provides. Additional anatomical information would greatly benefit diagnosis, staging, therapy planning, and treatment monitoring. Computed tomography (CT) provides detailed anatomical information but is less sensitive for cancer localization than PET. Combining PET and CT images enables accurate localization of the functional information with respect to detailed patient anatomy. We have developed a software platform to facilitate efficient visualization of PET/CT image studies. We used a deformable registration algorithm based on mutual information and a B-spline model of the deformation. Several useful visualization modes were implemented, with an efficient and robust method for switching between modes and handling large datasets. The processing of several studies can be queued and the results browsed. The software has been validated with clinical data.
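The similarity measure driving such a deformable registration, mutual information, can be estimated from a joint histogram of the two images. A minimal sketch (bin count and images are illustrative; the paper's B-spline deformation model and optimizer are not shown):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in mutual information estimate from the joint histogram
    of two images of equal size."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
noise = rng.random((64, 64))
mi_self = mutual_information(img, img)    # high: images identical
mi_rand = mutual_information(img, noise)  # near zero: independent
```

Mutual information peaks when the images are well aligned, which makes it suitable for PET/CT registration where the intensities of the two modalities are related only statistically, not linearly.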
Visualization of large medical data sets using memory-optimized CPU and GPU algorithms
Gundolf Kiefer, Helko Lehmann, Juergen Weese
With the evolution of medical scanners towards higher spatial resolutions, the sizes of image data sets are increasing rapidly. To profit from the higher resolution in medical applications such as 3D angiography for more efficient and precise diagnosis, high-performance visualization is essential. However, to make sure that the performance of a volume rendering algorithm scales with the performance of future computer architectures, technology trends need to be considered, and the design of such scalable volume rendering algorithms remains challenging. One of the major trends in the development of computer architectures is the wider use of cache memory hierarchies to bridge the growing gap between the faster-evolving processing power and the slower-evolving memory access speed. In this paper we propose ways to exploit the standard PC's cache memories supporting the main processors (CPUs) and the graphics hardware (graphics processing unit, GPU), respectively, for computing Maximum Intensity Projections (MIPs). To this end, we describe a generic and flexible way to improve the cache efficiency of software ray casting algorithms and show by means of cache simulations that it enables cache miss rates close to the theoretical optimum. For GPU-based rendering we propose a similar, brick-based technique to optimize the utilization of onboard caches and the transfer of data to the GPU's on-board memory. All algorithms produce images of identical quality, which enables us to compare the performance of their implementations fairly, without trading quality for speed. Our comparison indicates that the proposed methods are superior, in particular for large data sets.
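Brick-based MIP rendering relies on precomputed per-brick extrema: a ray may skip any brick whose stored maximum cannot exceed the value already accumulated along that ray, and bricks also match cache-line-friendly memory layouts. A sketch of the acceleration structure next to a reference MIP (brick size and volume are illustrative; the paper's actual traversal and layout are not reproduced):

```python
import numpy as np

def brick_max(volume, b=8):
    """Precomputed per-brick maxima (a min-max-block-style structure);
    assumes each dimension is a multiple of the brick size b."""
    z, y, x = volume.shape
    return volume.reshape(z // b, b, y // b, b, x // b, b).max(axis=(1, 3, 5))

def mip(volume):
    """Reference maximum intensity projection along the z axis."""
    return volume.max(axis=0)

rng = np.random.default_rng(1)
vol = rng.random((16, 16, 16)).astype(np.float32)
bmax = brick_max(vol, b=8)     # 2x2x2 coarse grid of brick maxima
proj = mip(vol)
# During ray casting, a brick whose stored maximum is below the value
# already accumulated along the ray contributes nothing and is skipped.
```

Because the brick maxima bound the data exactly, skipping never changes the rendered image; it only reduces memory traffic, which is the paper's point about cache efficiency.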
Development of computer-assisted system for setting the appropriate margin of the organ with respiratory movement in radiation therapy planning
Rie Tanaka, Shigeru Sanada, Takeshi Kobayashi, et al.
External beam radiation therapy is becoming more common with the rising cancer rate in an aging society and a growing emphasis on quality of life. Accurate radiation therapy planning is crucial to prevent cancer recurrence and damage to normal tissues. However, the margin for respiratory movement is commonly set based on the operator's experience, and it sometimes lacks reproducibility and is not quantitative. The present study was performed to develop a computer-assisted system for setting appropriate margins for organs with respiratory movement in radiation therapy planning. Frontal and lateral chest fluoroscopic images (43 × 43 cm) were obtained during respiration using a dynamic flat-panel detector system. Computed tomography (CT) images were obtained for radiation therapy planning, and digitally reconstructed radiographs (DRRs) were created. The respiratory level of the CT images was determined by measuring the distance from the lung apex to the diaphragm in the DRRs, and a fluoroscopic image at the same respiratory level was then selected. The thoracic vertebrae were automatically determined as landmarks, and image registration between the DRRs and fluoroscopic images was performed by template matching. The range of respiratory movement of the target area in the lung was then measured in the fluoroscopic images. The quantified range of respiratory movement was correlated with the CT data, and the appropriate margin for respiratory movement was displayed as 3D volume data in the lung. Our system can provide accurate margins for respiratory movement based on the quantified range of movement of the target in the lung during respiration.
Three-dimensional human computer interaction based on 3D widgets for medical data visualization
Three-dimensional human-computer interaction plays an important role in 3D visualization. It is important for clinicians to be able to use accurately and handle easily the results of medical data visualization in order to assist diagnosis and surgery simulation. A 3D human-computer interaction software platform based on 3D widgets has been designed in traditional object-oriented fashion with some common design patterns and implemented in ANSI C++, including all function modules and some practical widgets. A group of application examples is exhibited as well. The ultimate objective is to provide a flexible, reliable, and extensible 3D interaction platform for medical image processing and analysis.
Super-bright LCD display for radiology; effect on search performance
H. Matthieu Visser, N. Fisekovic, R. Nalliah, et al.
It has been well established that X-ray films are best read at high peak brightness (2000-4000 nits), yet current LCD and CRT displays used in radiology typically have a peak brightness of only 500-700 nits. We have developed super-bright LCD displays that for the first time approach light-box brightness levels while maintaining good viewing-angle characteristics and uniformity. We provide a characterization of a new monochrome model with 2000 nit peak brightness and a new color model with 500 nit peak brightness. To investigate the effect of the increased brightness on search performance, a small observer study was performed. Eight radiologists and residents were asked to search for low-contrast artefacts (15 mm ovals) superimposed on a mammogram. Four different LCD displays were used, with peak brightnesses from 200 to 2000 nits. For low-contrast artefacts, search performance was markedly improved at the highest brightness.
A novel and stable approach to anatomical structure morphing for enhanced intraoperative 3D visualization
Kumar T. Rajamani, Miguel A. Gonzalez Ballester, Lutz-Peter Nolte, et al.
The use of three-dimensional models in planning and navigating computer-assisted surgeries is now well established. These models provide intuitive visualization to surgeons, contributing to significantly better surgical outcomes. Models obtained from specifically acquired CT scans have the disadvantage of inducing a high radiation dose to the patient. In this paper we propose a novel and stable method to construct a patient-specific model that provides appropriate intra-operative 3D visualization without the need for pre- or intra-operative imaging. The patient-specific data consist of digitized landmarks and surface points that are obtained intra-operatively. The 3D model is reconstructed by fitting a statistical deformable model to this minimal sparse digitized data. The statistical model is constructed using Principal Component Analysis from training objects. Our morphing scheme efficiently and accurately computes a Mahalanobis-distance-weighted least-squares fit of the deformable model to the 3D data by solving a linear equation system. Relaxing the Mahalanobis distance term as additional points are incorporated enables our method to handle small and large sets of digitized points efficiently. Our novel incorporation of M-estimator-based weighting of the digitized points enables us to effectively reject outliers and compute stable models. Normalization of the input model data and the digitized points makes our method size-invariant and hence applicable directly to any anatomical shape. The method also allows the incorporation of non-spatial data such as patient height and weight. The predominant applications are hip and knee surgeries.
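A Mahalanobis-weighted least-squares fit of a PCA shape model reduces to a small regularized linear system in the mode coefficients. A toy sketch under assumed names (A maps shape coefficients to the sparse digitized points, eigvals are the PCA mode variances, rho weights the Mahalanobis prior; none of these come from the paper itself):

```python
import numpy as np

def fit_ssm(A, y, eigvals, rho=1.0):
    """Solve min_b ||A b - y||^2 + rho * sum_k(b_k^2 / lambda_k),
    i.e. a least-squares data term plus a Mahalanobis prior on the
    PCA coefficients, via the normal equations."""
    reg = rho * np.diag(1.0 / np.asarray(eigvals, dtype=float))
    return np.linalg.solve(A.T @ A + reg, A.T @ y)

# Toy model: 2 shape modes observed at 4 sparse surface measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((4, 2))
b_true = np.array([1.0, -0.5])
y = A @ b_true                       # noise-free synthetic observations
b_hat = fit_ssm(A, y, eigvals=[10.0, 5.0], rho=1e-6)
```

With abundant clean data the prior weight can be relaxed (small rho) and the fit recovers the true coefficients; with few digitized points a larger rho keeps the reconstruction near the statistically plausible mean shape, mirroring the relaxation strategy described in the abstract.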
Design and automatic calibration of a head mounted operating binocular for augmented reality applications in computer-aided surgery
Michael Figl, Christopher Ede, Wolfgang Birkfellner, et al.
In recent years we developed and tested a head-mounted display (HMD) for augmented reality applications in computer-aided surgery. This HMD was developed by adapting the Varioscope AF3 (Life Optics, Vienna), an operating binocular with variable zoom and focus. One of the drawbacks of the AF3 was that the zoom and focus values could not be set automatically via a machine-usable interface, which is necessary for automatic calibration of the device. This paper presents the successor of the Varioscope AF3, the Varioscope M5, adapted for augmented reality by our lab. This device has an interface for machine-controlled setting of the zoom and focus lens groups via RS-232. This enabled us to develop an automated calibration procedure using a calibration grid mounted on a linear positioner. The position of the grid was controlled using a stepping motor controller connected via IEEE 488. The calibration grid was equipped with automatically detectable fiducial points using varying cross-ratios of consecutive points. The resulting point pairs were used for a camera calibration with Tsai's algorithm. Tracker probes (Traxtal, Toronto) were mounted on the HMD and on the calibration grid to derive the transformation from the coordinate system of the HMD into that of the displays. The error of this calibration was measured by comparing the position of the tip of a bayonet probe calculated by the algorithm with that found in the image of a camera mounted at the eyepiece of the device. Averaged over 16 positions of the probe, this deviation was found to be 0.97 ± 0.22 mm.
Aorta cross-section calculation and 3D visualization from CT or MRT data using VRML
Guenther Grabner, Robert Modritsch, Wolfgang Stiegmaier, et al.
Quantification of vessel diameters in atherosclerotic or congenital stenoses is very important for the diagnosis of vascular diseases. The aorta extraction and cross-section calculation is a software-based application that offers a three-dimensional, platform-independent, colorized visualization of the extracted aorta with augmented-reality information from MRT or CT datasets. This project is based on several specialized image processing algorithms, dynamic particle filtering, and complex mathematical equations. From the three-dimensional model, a calculation of minimal cross-sections is performed: at user-specified intervals, the aorta is cut in different directions defined by vectors of varying length. The extracted aorta and the derived minimal cross-sections are then rendered with the marching cubes algorithm and represented together in a three-dimensional virtual reality with a very high degree of immersion. The aim of this study was to develop imaging software that gives cardiologists the possibility of (i) furnishing fast vascular diagnoses, (ii) obtaining precise diameter information, (iii) performing exact, local stenosis detection, (iv) having permanent data storage with easy access to former datasets, and (v) reliably documenting results in the form of tables and graphical printouts.
Image-guided simulation for bioluminescence tomographic imaging
Noninvasive imaging of reporter gene expression based on bioluminescence is playing an important role in cancer biology, cell biology, and gene therapy. The central problem for the bioluminescence tomography (BLT) approach we are developing is to reconstruct the underlying bioluminescent source distribution in a small animal using a modality-fusion approach. To solve this inverse problem, a mathematical model of the mouse is built from a CT/micro-CT scan, which enables the assignment of optical parameters to the various regions in the model. This optical-geometrical model is used in a Monte Carlo simulation to calculate the flux distribution on the animal's body surface, as a key part of the BLT process. The model development necessitates approximations such as surface simplification, which lead to model mismatches of various kinds. To overcome such discrepancies, instead of developing a mathematical model, segmented CT images are used directly in our simulation software. As the simulation code executes, the relevant images are accessed according to the location of the propagating photon. Depending on the segmentation rules, including the pixel-value range, appropriate optical parameters are selected for statistical sampling of the free path and weight of the photon. In this paper, we report luminescence experiments using a physical mouse phantom to evaluate this image-guided simulation procedure; the results suggest both the feasibility of this technique and some advantages over existing methods.
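The statistical sampling step mentioned above, drawing a photon free path from the exponential (Beer-Lambert) distribution given the locally looked-up interaction coefficient, can be sketched by inverse-CDF sampling (mu_t is an illustrative value, not one from the paper):

```python
import numpy as np

# Beer-Lambert free-path sampling: the probability of travelling a
# distance l without interaction is exp(-mu_t * l), so drawing
# l = -ln(xi) / mu_t with xi uniform on (0, 1] samples that law.
mu_t = 10.0                               # total interaction coeff. (1/mm)
rng = np.random.default_rng(4)
xi = 1.0 - rng.random(200_000)            # uniform on (0, 1], avoids log(0)
paths = -np.log(xi) / mu_t
mean_path = float(paths.mean())           # approaches 1 / mu_t = 0.1 mm
```

In the image-guided variant, mu_t would be re-read from the segmented CT voxel containing the photon before each draw, rather than held fixed per anatomical region of a simplified surface model.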
Technical experience from clinical studies with INPRES and a concept for a miniature augmented reality system
Gunther Sudra, Ruediger Marmulla, Tobias Salb, et al.
This paper presents a summary of our technical experience with the INPRES system, an augmented reality system based on a tracked see-through head-mounted display. With INPRES, a complete augmented reality solution has been developed that has crucial advantages over previous navigation systems: the surgeon does not need to turn his head from the patient to the computer monitor and vice versa. The system's purpose is to display virtual objects, e.g. cutting trajectories, tumours, and risk areas from computer-based surgical planning systems, directly in the surgical site. The INPRES system was evaluated in several patient experiments in craniofacial surgery at the Department of Oral and Maxillofacial Surgery, University of Heidelberg. We discuss the technical advantages as well as the limitations of INPRES and, as a result, present two strategies. On the one hand, we will improve the existing and successful INPRES system with new hardware and a new calibration method to compensate for the stated disadvantages. On the other hand, we will focus on miniaturized augmented reality systems and present a new concept based on fibre optics. This new system should be easily adaptable to surgical instruments and capable of projecting small structures. It consists of a light source, a miniature TFT display, a fibre optic cable, and a tool grip. Compared to established projection systems, it is capable of projecting into areas that are accessible only by a narrow path; no wide surgical exposure of the region is necessary for the use of augmented reality.
Geometrical modeling using multiregional marching tetrahedra for bioluminescence tomography
Alexander Cong, Yi Liu, D. Kumar, et al.
Localization and quantification of the light sources generated by the expression of bioluminescent reporter genes is an important task in bioluminescent imaging of small animals, especially genetically engineered mice. To employ the Monte Carlo method for light-source identification, the surfaces that define the anatomic structures of the small experimental animal are required; to perform finite-element-based reconstruction computation, a volumetric mesh is a must. In this work, we propose a Multiregional Marching Tetrahedra (MMT) method for extracting surface and volumetric meshes from a segmented CT/micro-CT (or MRI) image volume of a small experimental animal. The novel MMT method extracts a triangular surface mesh and constructs a tetrahedra/prism volumetric finite element mesh for all anatomic components, including the heart, liver, lungs, bones, etc., within one sweep over all the segmented CT slices. In comparison with the well-established Marching Tetrahedra (MT) algorithm, our MMT method takes into consideration two more surface-extraction cases within each tetrahedron and guarantees seamless connections between anatomical components. The surface mesh is then smoothed and simplified without losing the seamless connections. The MMT method is further enhanced to generate a volumetric finite-element mesh filling the space of each anatomical component. The mesh can then be used for finite-element-based inverse computation to identify the light sources.
Adapted morphing model for 3D volume reconstruction applied to abdominal CT images
The purpose of this study was to develop a 3D volume reconstruction model for volume rendering and apply it to abdominal CT data. The model development includes two steps: (1) interpolation of the given data for a complete 3D model, and (2) visualization. First, CT slices are interpolated using a special morphing algorithm. The main idea of this algorithm is to take a region from one CT slice and locate its most probable correspondence in the adjacent CT slice. The algorithm determines the transformation function of the region between two adjacent CT slices and interpolates the data accordingly. The most probable correspondence of a region is obtained using correlation analysis between the given region and regions of the adjacent CT slice. By applying this technique recursively, taking progressively smaller subregions within a region, a high-quality, accurate interpolation is obtained. The main advantages of this morphing algorithm are (1) its applicability not only to parallel planes such as CT slices but also to general configurations of planes in 3D space, and (2) its fully automated nature: unlike most morphing techniques, it does not require control points to be specified by a user. Subsequently, to visualize the data, a specialized volume rendering card (TeraRecon VolumePro 1000) was used. To represent the data in 3D space, special software was developed to convert the interpolated CT slices to 3D objects compatible with the VolumePro card. Visual comparison between the proposed model and linear interpolation clearly demonstrates the superiority of the proposed model.
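The region-correspondence search at the heart of such a morphing algorithm can be sketched as an exhaustive normalized cross-correlation match between a region and all equally sized windows of the adjacent slice (the real algorithm applies this recursively to progressively smaller subregions; all values here are synthetic):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized regions."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def best_match(region, next_slice, size):
    """Exhaustively locate the most correlated window in the next slice."""
    h, w = next_slice.shape
    best, best_pos = -2.0, (0, 0)
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            c = ncc(region, next_slice[i:i + size, j:j + size])
            if c > best:
                best, best_pos = c, (i, j)
    return best_pos, best

rng = np.random.default_rng(3)
slice_b = rng.random((20, 20))
patch = slice_b[5:13, 7:15]        # plant a known correspondence at (5, 7)
pos, score = best_match(patch, slice_b, 8)
```

The displacement between the region and its best match defines the local transformation used to interpolate intermediate slices; recursing on subregions refines that transformation field.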
Robust prediction of three-dimensional spinal curve from back surface for non-invasive follow-up of scoliosis
Charles Bergeron, Hubert Labelle M.D., Janet Ronsky, et al.
Spinal curvature progression in scoliosis patients is monitored from X-rays, and this serial exposure to harmful radiation increases the risk of developing cancer. With the aim of reducing the invasiveness of follow-up, this study seeks to relate the three-dimensional external surface to the internal geometry, on the assumption that the physiological links between them are sufficiently regular across patients. A database of 194 quasi-simultaneous acquisitions of two X-rays and a 3D laser scan of the entire trunk was used. The data were processed into sets of data points representing the trunk surface and spinal curve. Functional data analyses were performed using generalized Fourier series with a Haar basis and functional minimum noise fractions. The resulting coefficients became inputs and outputs, respectively, to an array of support vector regression (SVR) machines. The SVR parameters were set based on theoretical results, and cross-validation increased confidence in the system's performance. Predicted lateral and frontal views of the spinal curve from the back surface demonstrated average L2-errors of 6.13 and 4.38 millimetres, respectively, across the test set; these compared favourably with the measurement error in the data. This constitutes a first robust prediction of the 3D spinal curve from external data using learning techniques.
Computer aided diagnosis and treatment planning for developmental dysplasia of the hip
Developmental dysplasia of the hip (DDH) is a congenital malformation in which the proximal femur and acetabulum are subluxatable, dislocatable, or dislocated. Early diagnosis and treatment are important because failure to diagnose and improper treatment can result in significant morbidity. In this paper, we designed and implemented a computer-aided system for the diagnosis and treatment planning of this disease. With this design, the patient first receives a CT (computed tomography) or MRI (magnetic resonance imaging) scan. A mixture-based partial-volume (PV) algorithm was applied to perform bone segmentation on the CT images, followed by three-dimensional (3D) reconstruction and display of the segmented images, demonstrating the spatial relationship between the acetabulum and femurs for visual judgment. Several standard procedures, such as the Salter procedure, the Pemberton procedure, and femoral shortening osteotomy, can be simulated on screen to rehearse a virtual treatment plan. Quantitative measurements of the Acetabular Index (AI) and Femoral Neck Anteversion (FNA) were performed on the 3D image to evaluate DDH and the treatment plans. The PC graphics-card GPU architecture was exploited to accelerate 3D rendering and geometric manipulation. The prototype system was implemented in a PC/Windows environment and is currently in clinical trials on patient datasets.
Cone-beam CT with a flat-panel detector on a mobile C-arm: preclinical investigation in image-guided surgery of the head and neck
J. H. Siewerdsen, Y. Chan M.D., M. A. Rafferty M.D., et al.
A promising imaging platform for combined low-dose fluoroscopy and cone-beam CT (CBCT) guidance of interventional procedures has been developed in our laboratory. Based on a mobile isocentric C-arm (Siemens PowerMobil) incorporating a high-performance flat-panel detector (Varian PaxScan 4030CB), the system demonstrates sub-mm 3D spatial resolution and soft-tissue visibility with a field of view sufficient for head and body sites. For pre-clinical studies in head and neck tumor surgery, we hypothesize that the 3D intraoperative information provided by CBCT permits precise, aggressive techniques with improved avoidance of critical structures. The objectives include: 1) quantifying the improvement in surgical performance achieved with CBCT guidance compared to open and endoscopic techniques; and 2) investigating specific, challenging surgical tasks under CBCT guidance. Investigations proceed from an idealized phantom model to cadaveric specimens. A novel surgical performance evaluation method based on statistical decision theory is applied to excision and avoidance tasks. Analogous to receiver operating characteristic (ROC) analysis in medical imaging, the method quantifies surgical performance in terms of Lesion-Excised (True-Positive), Lesion-Remaining (False-Negative), Normal-Excised (False-Positive), and Normal-Remaining (True-Negative) fractions. Conservative and aggressive excision and avoidance tasks were executed in 12 cadaveric specimens with and without CBCT guidance, including: dissection through dura, preservation of the posterior lamina, removal of ethmoid air cells, exposure of the peri-orbita, and excision of infiltrated bone in the skull base (clivus). Intraoperative CBCT data were found to dramatically improve surgical performance and confidence in the execution of such tasks. Pre-clinical investigation of this platform in head and neck surgery, as well as in spinal, trauma, biopsy, and other nonvascular procedures, is discussed.
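The decision-theoretic performance fractions follow directly from the excised and remaining tissue amounts. A sketch with hypothetical volumes (the function name and inputs are illustrative, not the authors' implementation):

```python
def performance_fractions(lesion_excised, lesion_total,
                          normal_excised, normal_total):
    """Summarize an excision task by analogy with ROC analysis:
    returns (TP, FN, FP, TN) fractions from tissue amounts."""
    tp = lesion_excised / lesion_total     # Lesion-Excised
    fn = 1.0 - tp                          # Lesion-Remaining
    fp = normal_excised / normal_total     # Normal-Excised
    tn = 1.0 - fp                          # Normal-Remaining
    return tp, fn, fp, tn

# Hypothetical volumes (cc): 9 of 10 cc lesion excised,
# 1 of 20 cc surrounding normal tissue excised.
tp, fn, fp, tn = performance_fractions(9.0, 10.0, 1.0, 20.0)
```

An aggressive technique trades a higher Lesion-Excised fraction against a higher Normal-Excised fraction, exactly the trade-off an ROC-style analysis makes explicit.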
A robust fluoroscope tracking (FTRAC) fiducial
Ameet Kumar Jain, Tabish Mustufa, Yu Zhou, et al.
Purpose: C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct 3D information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the X-ray image in 3D space. Optical/magnetic trackers are prohibitively expensive, intrusive, and cumbersome. Method: We present single-image-based fluoroscope tracking (FTRAC) using an external radiographic fiducial consisting of a mathematically optimized set of points, lines, and ellipses. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A non-linear optimizer can rapidly compute the pose of the fiducial from this image. The current embodiment has several salient attributes: it is small (3 x 3 x 5 cm), need not be close to the anatomy of interest, and can be segmented automatically. Results: We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery had an error of 0.56 mm in translation and 0.33° in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. Conclusion: The method offers accuracies similar to commercial tracking systems, and is sufficiently robust for intra-operative quantitative C-arm fluoroscopy.
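A minimal sketch of single-image pose recovery in the same spirit: known 3D fiducial points are matched to their segmented 2D projections, and a non-linear optimizer minimizes the reprojection error over the six pose parameters. The pinhole projection model, focal length, and point set below are illustrative assumptions, not the actual FTRAC geometry or solver.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(points, rvec, tvec, f):
    # Rotate/translate fiducial points into the source frame, then
    # apply an idealized pinhole projection (assumed model).
    cam = Rotation.from_rotvec(rvec).apply(points) + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def recover_pose(model_pts, image_pts, f=1000.0):
    """Recover the 6-DOF pose (3 rotation-vector + 3 translation params)
    that best reproduces the observed 2D fiducial points."""
    def residual(p):
        return (project(model_pts, p[:3], p[3:], f) - image_pts).ravel()
    x0 = np.zeros(6)
    x0[5] = 500.0  # initial guess: fiducial roughly between source and detector
    return least_squares(residual, x0).x
```

With a fiducial of distinct, non-symmetric points, the reprojection error has a well-isolated minimum, which is essentially why a unique view from any direction yields a unique pose.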
Matching and reconstruction of brachytherapy seeds using the Hungarian algorithm (MARSHAL)
Ameet Kumar Jain, Yu Zhou, Tabish Mustufa, et al.
Purpose: Intraoperative dosimetric quality assurance in prostate brachytherapy critically depends on discerning the 3D locations of implanted seeds. The ability to reconstruct the implanted seeds intraoperatively will allow us to make immediate provisions for dosimetric deviations from the optimal implant plan. A method for seed reconstruction from segmented C-arm fluoroscopy images is proposed. Method: The 3D coordinates of the implanted seeds can be calculated upon resolving the correspondence of seeds in multiple X-ray images. We formalize seed-matching as a network flow problem, which has salient features: (a) extensively studied exact solutions, (b) performance guarantees on the space-time complexity, and (c) optimality bounds on the final solution. A fast implementation is realized using the Hungarian algorithm. Results: We prove that two images can correctly match only about 67% of the seeds, and that a third image renders the matching problem non-polynomial in complexity. We utilize the special structure of the problem and propose a pseudo-polynomial time algorithm. Using three images, MARSHAL achieved 100% matching in simulation experiments and 98.5% in phantom experiments. The 3D reconstruction error for correctly matched seeds has a mean of 0.63 mm, and 0.91 mm for incorrectly matched seeds. Conclusion: Both on synthetic data and in phantom experiments, the matching rate and reconstruction accuracy were found to be sufficient for prostate brachytherapy. The algorithm extends to an arbitrary number of images without loss in speed or accuracy, and is sufficiently generic to establish correspondences across any choice of features in different imaging modalities.
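The two-image core of the matching step can be sketched as a standard assignment problem solved by the Hungarian algorithm. The cost metric here is only a placeholder; in MARSHAL the cost would come from 3D reconstruction residuals, and the three-image case needs the more elaborate formulation described above.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment  # Hungarian-style solver

def match_seeds(cost):
    """Minimum-cost one-to-one matching of seeds across two images.
    cost[i, j] is a hypothetical mismatch measure (e.g., the 3D
    reconstruction residual) for pairing seed i in image A with
    seed j in image B."""
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()
```

Because the assignment problem has exact polynomial-time solutions, this step inherits the optimality bounds and complexity guarantees the abstract refers to.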
Ultrasound-based technique for intrathoracic surgical guidance
Xishi Huang, Nicholas A. Hill, Terry M. Peters
Image-guided procedures within the thoracic cavity require accurate registration of a pre-operative virtual model to the patient. Currently, surface landmarks are used for thoracic cavity registration; however, this approach is unreliable due to skin movement relative to the ribs. An alternative method for providing surgeons with image feedback in the operating room is to integrate images acquired during surgery with images acquired pre-operatively. This integration process is required to be automatic, fast, accurate, and robust; however, inter-modal image registration is difficult due to the lack of a direct relationship between the intensities of the two image sets. To address this problem, Computed Tomography (CT) was used to acquire pre-operative images and Ultrasound (US) was used to acquire peri-operative images. Since bone has a high electron density and is highly echogenic, the rib cage is visualized as a bright white boundary in both datasets. The proposed approach utilizes the ribs as the basis for an intensity-based registration method -- mutual information. We validated this approach using a thorax phantom. Validation results demonstrate that the approach is accurate and shows little variation between operators. The fiducial registration error, i.e. the registration error between the US and CT images, was < 1.5 mm. We propose this registration method as a basis for precise tracking of minimally invasive thoracic procedures. This method will permit the planning and guidance of image-guided minimally invasive procedures for the lungs, as well as for both catheter-based and direct trans-mural interventions within the beating heart.
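The mutual information metric named above can be estimated from a joint intensity histogram of the two images: it is high when one image's intensities predict the other's, even without a direct intensity relationship. The bin count and implementation details below are our assumptions, not the authors' code.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Plug-in estimate of mutual information between two images
    of equal size, computed from their joint intensity histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                  # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of image b
    nz = pxy > 0                             # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

A registration loop would transform the US image over candidate rib-cage poses and keep the pose that maximizes this quantity against the CT image.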
Curved reformations using the medical imaging interaction toolkit (MITK)
Modern systems for visualization, image-guided procedures, and display allow not just one type of visualization, but a variety of different visualization options. Only a combination of two-dimensional image display and three-dimensional rendering provides enough information for many tasks. Multiplanar orthogonal and oblique reformations of image data are standard features of medical imaging software packages today. Additionally, curved reformations are useful. For example, diagnosis of stenotic vessels can be supported by curved reformations along the centerline of the vessel, showing the complete vessel in one two-dimensional view. In this paper, we present how the open-source Medical Imaging Interaction Toolkit (MITK, www.mitk.org), which is based on the Insight Toolkit (ITK) and the Visualization Toolkit (VTK), can be used to rapidly build interactive systems that provide curved reformations. MITK supports curved reformations not only for images but also for other data types (e.g., surfaces). Besides visualizations of curved reformations, which can be combined with other two- and three-dimensional views of the data and are kept consistent with them, interactions on such non-planar manifolds are supported. The developer only has to define the curved manifold; the toolkit deals with everything else. We demonstrate these capabilities by means of a tool for mapping coronary vessel trees.
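A curved reformation amounts to resampling the volume along a centerline so the whole vessel appears in one 2D view. This toy version is not MITK's API: it sweeps each output row along a single fixed in-plane direction (a simplifying assumption; real implementations rotate the sampling direction with the curve) using `scipy.ndimage.map_coordinates`.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def curved_reformation(volume, centerline, normal, half_width=10):
    """Sample a 2D panel along a centerline: one image row per curve
    point, extended +/- half_width voxels along `normal`.
    `centerline` is an (N, 3) float array of voxel coordinates;
    `normal` is the (3,) in-plane sweep direction (held constant here)."""
    offsets = np.arange(-half_width, half_width + 1)
    # One (N, 2*half_width+1) coordinate array per volume axis.
    coords = [centerline[:, d, None] + offsets[None, :] * normal[d]
              for d in range(3)]
    return map_coordinates(volume, coords, order=1)  # trilinear sampling
```

A straight vessel then appears as a straight bright band in the panel, and a curved one is "unrolled" into the same flat view.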
Segmentation and Rendering
Efficient 3D nonlinear warping of computed tomography: two high-performance implementations using OpenGL
We have implemented two hardware-accelerated Thin Plate Spline (TPS) warping algorithms. The first is a hybrid hardware-software approach (HW-TPS) that uses OpenGL vertex shaders to perform a grid warp. The second is a graphics-processor-based approach (GPU-TPS) that uses the OpenGL Shading Language to perform all warping calculations on the GPU. Comparison with a software TPS algorithm was used to gauge the speed and quality of both hardware algorithms. Quality was analyzed visually and using the Sum of Absolute Difference (SAD) similarity metric. Warping was performed using 92 user-defined displacement vectors for 512x512x173 serial lung CT studies, matching normal-breathing and deep-inspiration scans. On a 2.2 GHz Xeon machine with an ATI Radeon 9800XT GPU, the GPU-TPS required 26.1 seconds to perform a per-voxel warp compared to 148.2 seconds for the software algorithm. The HW-TPS needed 1.63 seconds to warp the same study, while the GPU-TPS required 1.94 seconds and the software grid transform required 22.8 seconds. The SAD values calculated between the outputs of each algorithm and the target CT volume were 15.2%, 15.4%, and 15.5% for the HW-TPS, the GPU-TPS, and both software algorithms, respectively. The computing power of ubiquitous 3D graphics cards can be exploited in medical image processing to provide order-of-magnitude acceleration of nonlinear warping algorithms without sacrificing output quality.
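The TPS transform underlying all three implementations can be written down compactly: solve a small linear system for kernel weights plus an affine part from the control-point displacements, then evaluate the warp anywhere. This is a CPU reference sketch of the 2D mathematics, not the paper's 3D OpenGL code.

```python
import numpy as np

def _tps_kernel(d):
    # TPS radial basis U(r) = r^2 log r, with U(0) = 0 by convention.
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(d > 0, d * d * np.log(d), 0.0)

def tps_fit(src, dst):
    """Solve the TPS linear system mapping 2D control points src -> dst."""
    n = len(src)
    K = _tps_kernel(np.linalg.norm(src[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n], L[:n, n:], L[n:, :n] = K, P, P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    return np.linalg.solve(L, rhs)   # n kernel weights + 3 affine rows

def tps_apply(coef, src, pts):
    """Evaluate the fitted warp at arbitrary 2D points."""
    U = _tps_kernel(np.linalg.norm(pts[:, None] - src[None, :], axis=2))
    P = np.hstack([np.ones((len(pts), 1)), pts])
    return U @ coef[:len(src)] + P @ coef[len(src):]
```

The GPU versions evaluate exactly this kind of expression per vertex (HW-TPS) or per voxel (GPU-TPS); since each output point is independent, the computation parallelizes naturally on graphics hardware.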
Poster Session
A 3D image analysis tool for SPECT imaging
We have developed semi-automated and fully automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on efficient boundary delineation of complex 3D structures, enabling accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis, and we explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation in gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test that simultaneously measures both gastric emptying and gastric volume after ingestion of a solid or liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess gastric mass variation. This analysis was performed with both the semi-automated and fully automated tools, and the results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
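The intensity-thresholding and volume-calculation steps can be sketched in a few lines; the largest-component heuristic and voxel-volume parameter below are our illustrative assumptions, not the authors' pipeline (which also explores fuzzy connectedness).

```python
import numpy as np
from scipy import ndimage

def threshold_volume(image, threshold, voxel_mm3=1.0):
    """Segment a 3D image by intensity threshold, keep the largest
    connected component (assumed to be the organ), and return its
    volume in mm^3 along with its binary mask."""
    mask = image >= threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0.0, mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    largest = labels == (np.argmax(sizes) + 1)
    return largest.sum() * voxel_mm3, largest
```

Tracking this volume across time frames of the SPECT study would give the gastric volume-variation curve the analysis is after.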
An interactive 3D visualization and manipulation tool for effective assessment of angiogenesis and arteriogenesis using computed tomographic angiography
Li Shen, Ling Gao, Zhenwu Zhuang, et al.
This paper presents IVM, an Interactive Vessel Manipulation tool that supports effective and efficient assessment of angiogenesis and arteriogenesis in computed tomographic angiography (CTA) studies. IVM consists of three fundamental components: (1) a visualization component, (2) a tracing component, and (3) a measurement component. Given a user-specified threshold, IVM creates a 3D surface visualization based on it. Since vessels are thin, tubular structures, standard isosurface extraction techniques usually cannot yield satisfactory reconstructions. Instead, IVM directly renders the surface of a derived binary 3D image. The image volumes collected in CTA studies often have relatively high resolution; thus, compared with more complicated vessel extraction and visualization techniques, rendering the binary image surface has the advantages of being effective, simple, and fast. IVM employs a semi-automatic approach to determine the threshold: a user adjusts the threshold while checking the corresponding 3D surface reconstruction and then makes the choice. Typical tracing software defines ROIs on 3D image volumes using three orthogonal views. The tracing component in IVM goes one step further: it can perform tracing not only on image slices but also in a 3D view. We observe that operating directly on a 3D view helps a tracer identify ROIs more easily. After setting a threshold and tracing an ROI, a user can use IVM's measurement component to estimate the volume and other parameters of the vessels in the ROI. The effectiveness of the IVM tool is demonstrated on rat vessel/bone images collected in a previous CTA study.
Visualization of ultrafast phenomena during the laser-induced lithotripsy
Coherent optical techniques -- interferometry and microscopy -- are applied for visualization of phenomena associated with laser-based lithotripsy. Shadowgraphy and ballistic imaging are used to visualize the phenomena generated around a stone during the action of a laser pulse. Results are confirmed using optical and electron microscopy.
Adaptive spatial-temporal filtering applied to x-ray fluoroscopy angiography
Gert Schoonenberg, Marc Schrijver, Qi Duan, et al.
Adaptive filtering of temporally varying X-ray image sequences acquired during endovascular interventions can improve radiologists' visual tracking of catheters. Existing techniques blur important parts of the image sequences, such as catheter tips, anatomical structures, and organs, and they may introduce trailing artifacts. To address this concern, an adaptive filtering process is presented that applies temporal filtering in regions without motion and spatial filtering in regions with motion. The adaptive filtering process is a multi-step procedure. First, a normalized motion mask describing the differences between two successive frames is generated. Second, each frame is spatially filtered, with the motion mask specifying the type of filtering in each region. Third, an IIR filter combines the spatially filtered image with the previous output image; the motion mask thus serves as a weighting mask determining how much spatial and temporal filtering is applied. This method improves both the stationary and moving fields: the visibility of static anatomical structures and organs increases, while the motion of the catheter tip and of anatomical structures and organs remains unblurred and visible during interventional procedures.
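The three steps above can be sketched as a single per-frame update. The Gaussian spatial filter, threshold, and blending weight are illustrative parameter choices, not the authors' filter design.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def adaptive_filter(frame, prev_out, motion_threshold=10.0, alpha=0.8):
    """One step of motion-adaptive spatio-temporal filtering:
    1) normalized motion mask from the frame difference,
    2) spatial smoothing applied in moving regions,
    3) temporal IIR blend applied in static regions."""
    diff = np.abs(frame - prev_out)
    mask = np.clip(diff / motion_threshold, 0.0, 1.0)   # 1 = motion
    spatial = gaussian_filter(frame, sigma=1.0)
    spatially_filtered = mask * spatial + (1 - mask) * frame
    # Temporal IIR: static pixels lean on the previous output image.
    weight = alpha * (1 - mask)
    return weight * prev_out + (1 - weight) * spatially_filtered
```

Static background pixels thus accumulate temporal averaging (noise reduction without trailing), while a moving catheter tip is only spatially smoothed and never dragged across frames.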
Prediction of 3D internal organ position from skin surface motion: results from electromagnetic tracking studies
An effective method for treating tumors in organs that move with respiration (such as the lungs, pancreas, and liver) is a major goal of radiation medicine. In order to treat such tumors, we need (1) real-time knowledge of the current location of the tumor, and (2) the ability to adapt the radiation delivery system to follow this constantly changing location. In this study, we used electromagnetic tracking in a swine model to address the first challenge and to determine whether movement of a marker attached to the skin could accurately predict movement of an internal marker embedded in an organ. Under approved animal research protocols, an electromagnetically tracked needle was inserted into a swine liver and an electromagnetically tracked guidewire was taped to the abdominal skin of the animal. The Aurora (Northern Digital Inc., Waterloo, Canada) electromagnetic tracking system was then used to monitor the position of both sensors every 40 msec. Position readouts from the sensors were then tested for correlated movement. The strongest correlations were observed between external anterior-posterior motion and internal inferior-superior motion, with many other axes exhibiting only weak correlation. We also used these data to build a predictive model of internal motion by taking segments from the data and deriving a general functional relationship between the internal needle and the external guidewire. For the axis with the strongest correlation, this model enabled us to predict internal organ motion to within 1 mm.
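One simple way to derive such a functional relationship is a least-squares linear fit between the two sensor traces along the correlated axes. This is only a sketch of the idea, not the authors' model, and the linearity assumption is ours.

```python
import numpy as np

def fit_motion_model(external, internal):
    """Least-squares linear map from external (skin) displacement to
    internal (organ) displacement along one axis pair; assumes a
    linear relationship, which is a simplification."""
    A = np.column_stack([external, np.ones(len(external))])
    coef, *_ = np.linalg.lstsq(A, internal, rcond=None)
    return coef  # [slope, intercept]

def predict_internal(coef, external):
    """Predict internal sensor positions from new external readings."""
    A = np.column_stack([external, np.ones(len(external))])
    return A @ coef
```

Fitting on training segments of the 40 msec position stream and evaluating the residual on held-out segments would quantify the sub-millimeter prediction claim for the strongest axis pair.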
Needle targeting under C-arm fluoroscopy servoing
Cristian Mihaescu, Luis Ibanez, Mihai Mocanu, et al.
This paper describes a method for translational and orientational alignment of a robotic needle driver based on image servoing and X-ray fluoroscopy. The translational step segments the needle in a frame-grabbed fluoroscopic image and then commands the robot to automatically move the needle tip to the skin entry point. The orientational alignment is then completed based on five different positions of the needle tip. Previously reported fluoroscopy servoing methods use complex robot-image registration algorithms, fiducial markers, and two or more dissimilar views that require moving the fluoroscope. Our method aligns the needle using a single setting of the fluoroscope, which therefore does not need to be moved during the alignment process. Sample results from both the translational and orientational steps are included.
Computer-aided diagnosis for prostate cancer using support vector machine
Samar S. Mohamed, Magdy M. A. Salama
This work analyzes texture features of the prostate in Trans-Rectal Ultra-Sound (TRUS) images for tissue characterization. The research is expected to assist beginner radiologists with decision making and to help determine biopsy locations. Texture feature analysis is composed of four stages. The first stage automatically identifies Regions Of Interest (ROIs), a step usually done either by an expert radiologist or by dividing the whole image into smaller squares that represent regions of interest. The second stage extracts statistical features from the identified ROIs. Two different statistical feature sets were used in this study: the first is Grey Level Dependence Matrix features; the second is Grey Level Difference Vector features. The constructed features are then ranked using a Mutual Information (MI) feature selection algorithm that maximizes the MI between feature and class. The individual feature sets, the combined feature set, and the reduced feature subset were examined using a Support Vector Machine (SVM) classifier, a well-established classifier suitable for noisy data such as that obtained from ultrasound images. The obtained sensitivity is 83.3%, specificity ranges from 90% to 100%, and accuracy ranges from 87.5% to 93.75%.
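Grey-level co-occurrence statistics of the kind used here can be computed directly from a quantized image. This numpy-only sketch covers one pixel offset and three classic features; the bin count, offset, and feature choice are our assumptions, and in the paper's pipeline such features would then feed the SVM classifier.

```python
import numpy as np

def glcm_features(img, levels=8):
    """Co-occurrence features (contrast, homogeneity, energy) for the
    horizontal (0, 1) neighbour offset of a non-negative image,
    quantized to `levels` grey bins; a simplified stand-in for the
    texture feature sets named above."""
    q = np.minimum((img * levels / (img.max() + 1e-9)).astype(int), levels - 1)
    a, b = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontal pixel pairs
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (a, b), 1)                   # accumulate co-occurrences
    glcm /= glcm.sum()                           # normalize to probabilities
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()
    homogeneity = (glcm / (1.0 + (i - j) ** 2)).sum()
    energy = (glcm ** 2).sum()
    return contrast, homogeneity, energy
```

A smooth ROI yields low contrast and high homogeneity, while speckled or heterogeneous tissue pushes contrast up; it is this separation the classifier exploits.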
Automated extraction of subdural electrode grid from post-implant MRI scans for epilepsy surgery
This paper presents an automated algorithm for extraction of a Subdural Electrode Grid (SEG) from post-implant MRI scans for epilepsy surgery. Post-implant MRI scans are corrupted by image artifacts caused by the implanted electrodes. The artifacts appear as dark spherical voids, and given that the cerebrospinal fluid is also dark in T1-weighted MRI scans, it is a difficult and time-consuming task to manually locate the SEG position relative to brain structures of interest. The proposed algorithm reliably and accurately extracts the SEG from a post-implant MRI scan, i.e., finds its shape and position relative to brain structures of interest. The algorithm was validated against manually determined electrode locations, and the average error was 1.6 mm for the three tested subjects.