Proceedings Volume 4319

Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures

Seong Ki Mun
View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 28 May 2001
Contents: 12 Sessions, 83 Papers, 0 Presentations
Conference: Medical Imaging 2001
Volume Number: 4319

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Modeling and Simulation
  • Registration
  • Navigation and Tracking
  • Clinical Applications I
  • Clinical Applications II
  • Clinical Applications III
  • Clinical Applications IV
  • Display
  • Visualization I
  • Visualization II
  • Virtual Reality
  • Poster Session
Modeling and Simulation
Finite element modeling of tissue retraction and resection for preoperative neuroimage compensation concurrent with surgery
Keith D. Paulsen, Michael I. Miga, David W. Roberts, et al.
Compensation for intraoperative tissue motion in the registration of preoperative image volumes with the OR is important for improving the utility of image guidance in the neurosurgery setting. Model-based strategies for neuroimage compensation are appealing because they offer the prospect of retaining the high-resolution preoperative information without the expense and complexity associated with full volume intraoperative scanning. Further, they present opportunities to integrate incomplete or sparse, partial volume sampling of the surgical field as a guide for full volume estimation and subsequent compensation of the preoperative images. While potentially promising, there are a number of unresolved difficulties associated with deploying computational models for this purpose. For example, to date they have only been successful in representing the tissue motion that occurs during the earliest stages of neurosurgical intervention and have not addressed the later, more complex events of tissue retraction and resection. In this paper, we develop a mathematical framework for implementing retraction and resection within the context of finite element modeling of brain deformation using the equations of linear consolidation. Specifically, we discuss the critical boundary conditions applied at the new tissue surfaces created by these respective interventions and demonstrate the ability to model compound events where updated image volumes are generated in succession to represent the significant occurrences of tissue deformation which take place during the course of surgery. In this regard, we show image compensation for an actual OR case involving the implantation of a subdural electrode array for recording neural activity.
Simulating patient-specific heart shape and motion using SPECT perfusion images with the MCAT phantom
Tracy L. Faber, Ernest V. Garcia, David S. Lalush, et al.
The spline-based Mathematical Cardiac Torso (MCAT) phantom is a realistic software simulation designed to simulate single photon emission computed tomographic (SPECT) data. It incorporates a heart model of known size and shape; thus, it is invaluable for measuring accuracy of acquisition, reconstruction, and post-processing routines. New functionality has been added by replacing the standard heart model with left ventricular (LV) epicardial and endocardial surface points detected from actual patient SPECT perfusion studies. LV surfaces detected from standard post-processing quantitation programs are converted through interpolation in space and time into new B-spline models. Perfusion abnormalities are added to the model based on results of standard perfusion quantification. The new LV is translated and rotated to fit within existing atria and right ventricular models, which are scaled based on the size of the LV. Simulations were created for five different patients with myocardial infarctions who had undergone SPECT perfusion imaging. Shape, size, and motion of the resulting activity map were compared visually to the original SPECT images. In all cases, size, shape, and motion of simulated LVs matched well with the original images. Thus, realistic simulations with known physiologic and functional parameters can be created for evaluating efficacy of processing algorithms.
Validation of linear elastic model for soft tissue simulation in craniofacial surgery
Evgeny Gladilin, Stefan Zachow, Peter Deuflhard, et al.
Physically based soft tissue modeling is the state of the art in computer assisted surgery (CAS). But even such a sophisticated approach has its limits. The biomechanical behavior of soft tissue is highly complex, so that simplified models have to be applied. Under the assumption of small deformations, as usually applied in soft tissue modeling, soft tissue can be approximately described as a linear elastic continuum. Since efficient techniques exist for solving linear partial differential equations, the linear elastic model allows comparatively fast calculation of soft tissue deformation and consequently the prediction of a patient's postoperative appearance. However, for the calculation of large deformations, which are not unusual in craniofacial surgery, this approach can introduce substantial error depending on the intensity of the deformation. Monitoring the linearization error can help to estimate the scope of validity of the calculations to a user-defined precision. To quantify this error, one does not even need to know the correct solution, since linear theory itself provides the appropriate instruments for error detection.
Visually guided spine biopsy simulator with force feedback
Jong Beom Ra, Sung Min Kwon, Jin Kook Kim, et al.
A new surgical simulator has been developed for spine needle biopsy that provides realistic visual and force feedback to a trainee in the PC environment. This system is composed of four parts: a 3D human model, a visual feedback tool, a force feedback device, and an evaluation section. The 3D human model includes multi-slice XCT images, segmentation results, and force-feedback parameters. A block-based technique is adopted for efficient handling of large amounts of data and for easy control of rendering parameters such as opacity. For visual feedback, we implement a virtual CT console box and a 3D visualization tool providing MIP, MPR, summed voxel projection, and realistic 3D color volume rendering views. The visualization tool is used for interactive 3D path planning. A haptic device is used to provide force feedback to the biopsy needle during simulation. The interactive force is generated in a voxel-based manner. After each simulation, the evaluation section provides a performance analysis to the trainee. We implemented the system by attaching a 3DOF PHANToM™ device to a PC with 600 MHz Pentium III dual CPUs and 512 MB of RAM.
Transurethral ultrasound of the prostate for applications in prostate brachytherapy: analysis of phantom and in-vivo data
David Richard Holmes III, Brian J. Davis, Charles Bruce, et al.
3D Trans-Urethral Ultrasound (TUUS) imaging is a new imaging technique for the diagnosis and treatment of prostate disease. Our current research focuses on the potential of TUUS in therapy guidance during transperineal interstitial permanent prostate brachytherapy (TIPPB). TUUS may complement or potentially replace x-ray fluoroscopy and TRUS in providing data for determining the prostate boundary and radiation source locations. Prostate boundary detection and source localization using TUUS were tested on an ultrasound-equivalent prostate phantom and in a patient during TIPPB. Data collection was conducted with a 10 French, 10 MHz ultrasound catheter controlled by an Acuson Sequoia™ workstation. 2D and 3D TUUS scans were acquired after radioactive seeds were placed in the phantom and in the patient. Data were reconstructed, processed, and analyzed using Analyze software. Segmentation of the prostate boundary was performed semi-automatically, and seed segmentation was performed manually. Image artifacts in TUUS data resulted in incorrect reconstruction of the seeds. Intelligent processing of the seed data improved reconstruction. Comparison to the CT data suggests that TUUS data provide: 1) greater spatial resolution, 2) greater temporal resolution, and 3) better contrast for soft tissue differentiation. The reconstructed source sizes and locations were measured and found to be accurate. Placement of the TUUS catheter into the urethra provides excellent 2D sections which can be used to acquire volumetric data for 3D analysis of the prostate and radioactive sources. Preliminary results suggest that TUUS will be useful for guidance of seed placement, post-implant seed localization, and intra-operative dosimetry.
Registration
Robust registration method for interventional MRI-guided thermal ablation of prostate cancer
Baowei Fei, Andrew Wheaton, Zhenghong Lee, et al.
We are investigating methods to register live-time interventional magnetic resonance imaging (iMRI) slice images with a previously obtained, high resolution MRI image volume. The immediate application is for iMRI-guided treatments of prostate cancer. We created and evaluated a slice-to-volume mutual information registration algorithm for MR images with special features to improve robustness. Features included a multi-resolution approach and automatic restarting to avoid local minima. We acquired 3D volume images from a 1.5 T MRI system and simulated iMRI images. To assess the quality of registration, we calculated 3D displacement on a voxel-by-voxel basis over a volume of interest between slice-to-volume registration and volume-to-volume registrations that were previously shown to be quite accurate. More than 500 registration experiments were performed on MR images of volunteers. The slice-to-volume registration algorithm was very robust for transverse images covering the prostate. A 100% success rate was achieved with an acceptance criterion of <1.0 mm displacement error over the prostate. Our automatic slice-to-volume mutual information registration algorithm is robust and probably sufficiently accurate to aid in the application of iMRI-guided thermal ablation of prostate cancer.
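The histogram-based mutual information measure that drives this kind of slice-to-volume registration can be illustrated generically. The sketch below is not the authors' implementation; the function name and bin count are arbitrary choices:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Estimate the mutual information (in nats) between two
    equally sized images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

An optimizer would maximize this measure over the rigid-body parameters of the slice pose; the multi-resolution scheme and automatic restarts described in the abstract wrap around such an inner evaluation.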
Registration of portal images for online correction of positioning errors during radiation therapy of prostate cancer
Julia Vlad, Sebastian Eulenstein, Waldemar Wlodarczyk, et al.
Radiation therapy of prostate cancer requires high geometric accuracy of dose delivery. Unfortunately, this accuracy can hardly be achieved because of imperfect patient repositioning throughout the preparation and execution of the treatment. In order to correct such errors, we developed methods for online registration of portal images (PIs) and digitally reconstructed radiographs (DRRs). To improve the preconditions for registration of the PIs and DRRs, image preprocessing, such as contrast-limited adaptive histogram equalization (CLAHE) of the PIs and noise and edge filtering of the PIs and the DRRs, was performed. Then, similarity measures such as mean square error, cross correlation, and mutual information were investigated. A fully automatic bony-based registration method has been developed. As a reference method, a semi-automatic stereometric fiducial-based registration has also been developed. This method uses gold seeds implanted into the prostate as internal fiducial markers for assessment of the similarity measures. Small lead spheres were likewise adhered to the Alderson phantom as external fiducial markers. This method was also used for the verification of the automatic bony-based registration. The registration of repositioning was then performed fully automatically in less than 30 s with the Alderson phantom and with the patient data. At the present stage, there remains a registration error of less than 2 mm.
Improving spatial resolution of multileaf-collimator-defined radiation treatment field
Q. Jackie Wu, Zhiheng Wang, Claudio H. Sibata
A prototype high-definition multi-leaf collimator system (HDI) has been developed and installed on a linear accelerator for use in conformal radiotherapy. The HDI technique utilizes dynamic shifts of the 3D target volume to feather the multi-leaf-collimator-defined field edges. During each feathering, the leaf positions are adjusted according to the updated target image projected into the MLC plane. The purpose of this study is to demonstrate that this device can improve the spatial resolution of a conformal radiation therapy treatment. The results of this study indicate that the HDI technique can be a useful tool for treating small or highly irregularly shaped targets, and for sparing adjacent critical structures in certain cases.
Feature extraction, analysis, and 3D visualization of local lung regions in volumetric CT images
The purpose of this work was to develop image functions for volumetric segmentation, feature extraction, and enhanced 3D visualization of local regions using CT datasets of human lungs. The system is intended to assist the radiologist in the analysis of lung nodules. Volumetric datasets consisting of 30-50 thoracic helical low-dose CT slices were used in the study. The 3D topological characteristics of local structures including bronchi, blood vessels, and nodules were computed and evaluated. When the location of a region of interest is identified, the computer automatically computes the size, surface area, and normalized shape index of the suspected lesion. The developed system also allows the user to perform interactive operations for evaluation of lung regions and structures through a user-friendly interface. These functions provide the user with a powerful tool to observe and investigate clinically interesting regions through unconventional radiographic viewings and analyses. The developed functions can also be used to view and analyze a patient's lung abnormalities in surgical planning applications. Additionally, we see the possibility of using the system as a teaching tool for correlating the anatomy of the lungs.
Implementing PET-guided biopsy: integrating functional imaging data with digital x-ray mammography cameras
Irving N. Weinberg M.D., Valera Zawarzin, Roberto Pani, et al.
Purpose: Phantom trials using PET data for localization of hot spots have demonstrated positional accuracies in the millimeter range. We wanted to perform biopsy based on information from both anatomic and functional imaging modalities; however, we faced a communication challenge. Despite the digital nature of DSM stereotactic X-ray mammography devices, and the large number of such devices in Radiology Departments (approximately 1600 in the US alone), we are not aware of any methods of connecting stereo units to other computers in the Radiology department. Methods: We implemented a local network between an external IBM PC (running Linux) and the Lorad Stereotactic Digital Spot Mammography PC (running DOS). The application used the IP protocol on the parallel port, and could be run in the background on the Lorad PC without disrupting important clinical activities such as image acquisition or archiving. With this software application, users of the external PC could pull x-ray images on demand from the Lorad DSM computer. Results: X-ray images took about a minute to ship to the external PC for analysis or forwarding to other computers on the University's network. Using image fusion techniques we were able to designate locations of functional imaging features as additional targets on the anatomic x-rays. These pseudo-features could then potentially be used to guide biopsy using the stereotactic gun stage on the Lorad camera. New Work to be Presented: A method of transferring and processing stereotactic x-ray mammography images to a functional PET workstation for implementing image-guided biopsy.
Medical image segmentation using high-performance computer clusters
Ruben Cardenes-Almeida, Juan Ruiz-Alzola, Ron Kikinis, et al.
A statistical classification algorithm for MRI segmentation, based on the k-nearest-neighbor (kNN) rule, has been implemented with the Message Passing Interface (MPI) by partitioning the dataset into similar-sized subvolumes and delivering each part to one processor inside a cluster. We have tested the algorithm on two different CPU architectures (SPARC and Intel) and four different configurations including a Beowulf cluster, two Sun clusters, and a symmetric multiprocessor. The experiments provide a good speedup in all cases and show a very good performance/price ratio for the PC-Linux cluster. We present results using a three-channel, high-resolution original dataset in less than two minutes in the best cases, and we use the segmented maps to make clinically relevant 3D visualizations in interactive times.
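The partitioning scheme this abstract describes can be sketched generically. The sketch below uses Python threads as a stand-in for one MPI rank per subvolume, and every name in it is illustrative rather than taken from the paper:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def knn_label(voxels, train_feats, train_labels, k=3):
    """Classify each voxel (one feature vector per row) by majority
    vote among its k nearest training samples (brute-force kNN)."""
    d = np.linalg.norm(voxels[:, None, :] - train_feats[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(train_labels[row]).argmax() for row in nearest])

def segment_parallel(voxels, train_feats, train_labels, n_parts=4):
    """Split the voxel set into similar-sized parts and classify each
    part on its own worker, mimicking the per-processor subvolumes."""
    parts = np.array_split(voxels, n_parts)
    with ThreadPoolExecutor(max_workers=n_parts) as pool:
        results = pool.map(lambda p: knn_label(p, train_feats, train_labels), parts)
    return np.concatenate(list(results))
```

Because each subvolume is classified independently against the shared training set, the problem is embarrassingly parallel, which is why the authors report good speedup across very different cluster configurations.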
Navigation and Tracking
Consequences of fiducial marker error on three-dimensional computer animation of the temporomandibular joint
J. Ken Leader III, J. Robert Boston, Thomas E. Rudy, et al.
Jaw motion has been used to diagnose jaw pain patients, and we have developed a 3D computer animation technique to study jaw motion. A customized dental clutch was worn during motion, and its consistent and rigid placement was a concern. The experimental protocol involved mandibular movements (vertical opening) and MR imaging. The clutch contained three motion markers used to collect kinematic data and four MR markers used as fiducial markers in the MR images. Fiducial marker misplacement was mimicked by analytically perturbing the position of the MR markers +/- 2, +/- 4, and +/- 6 degrees in the three anatomical planes. The percent difference between the original and perturbed MR marker positions was calculated for the kinematic parameters. The maximum differences across all perturbations for axial rotation, coronal rotation, sagittal rotation, axial translation, coronal translation, and sagittal translation were 176.85%, 191.84%, 0.64%, 9.76%, 80.75%, and 8.30%, respectively, for perturbing all MR markers, and 86.47%, 93.44%, 0.23%, 7.08%, 42.64%, and 13.64%, respectively, for perturbing one MR marker. The parameters representing movement in the sagittal plane, the dominant plane in vertical opening, were determined to be reasonably robust, while secondary movements in the axial and coronal planes were not considered robust.
Single-camera system for optically tracking freehand motion in 3D: experimental implementation and evaluation
Mark K. Lee, H. Neale Cardinal, Aaron Fenster
A single-camera optical tracking system has been implemented to track the freehand motion of an object in 3D. The system consists of a single standard NTSC video camera and a frame grabber that is used to digitize the image of a target plate with four non-collinear fiducial marks. The tracking algorithm is based on texture mapping to determine the rotation and translation of the plate (and hence the object) from the images of the four fiducial marks. Using an SGI VW320 workstation (with integrated frame grabber) and a CCD video camera, tracking accuracy of better than 0.1 mm for an object velocity of 4.0 cm/s was achievable at full video frame rate in all directions.
Remotely operated MR-guided neurosurgical device in MR operating room
Haiying Liu, Walter A. Hall, Charles L. Truwit
A robust near real-time MRI-based surgical guidance and navigation scheme has been developed, validated, and used. The key concept of the method is to use intra-operative MRI to facilitate the trajectory alignment process of a biopsy needle in neurobiopsy. Since the trajectory corresponding to a biopsy needle pivoted at an entry point on the patient's skull has two degrees of freedom, the orientation of the needle can be tracked using a 2D imaging plane placed perpendicular to the desired trajectory. Using near real-time visual feedback during the adjustment of an alignment guidance device, the required trajectory alignment is translated into a simple in-plane targeting task on a computer monitor. The orientation adjustment was achieved remotely via a set of MR-compatible strings, which were connected to a joystick. The concept of MR-guided targeting was successfully validated on a phantom set-up. This MR-based guidance technique has allowed neurosurgeons to accomplish the required needle alignment to an arbitrary trajectory remotely in a straightforward procedure on any conventional MR scanner. Before needle insertion, the trajectory can be validated. Two successful biopsy cases using the new methodology and device have shown that the remotely operated device under MR guidance is both effective and accurate for neurosurgery.
Intraoperative identification and display of eloquent cortical regions
During a typical image-guided neurosurgery procedure, the surgeon uses anatomical information from tomographic image sets to help guide the surgery. These images provide high-level details of the patient's anatomy. The images do not, however, provide the surgeon with information regarding brain function. The identification of cortical function in addition to the display of tomographic images during surgery would allow the surgeon to visualize critical areas of the anatomy. This would be beneficial during surgical planning and procedures by identifying eloquent cortical regions (such as speech, sensory, and motor areas) that should be avoided. We have designed and implemented a system for recording and displaying cortical brain function during image-guided surgery. Brain function is determined using an optically tracked cortical stimulator. The image-space location of each stimulation event is recorded, and the user has the ability to label this location according to function type. Functional data can be displayed on both tomographic and rendered images. Tracking accuracy of the cortical stimulator has been determined by comparing its position to that of a tracked surgical probe with known localizing accuracy.
Three-dimensional ultrasound-guided breast biopsy system
Wendy Lani Smith, Kathleen J. M. Surry, Laura Campbell, et al.
We introduce a mechanically constrained, 3D ultrasound-guided core-needle breast biopsy device. With modest breast compression, 3D ultrasound scans localize suspicious masses. A biopsy needle is mechanically guided into position for firing into the sampling region. The needle is parallel to the transducer, allowing real-time guidance during needle insertion. Lesion sampling is verified by another ultrasound image after firing. Two procedures quantified the targeting accuracy of this apparatus. First, we biopsied eleven breast phantoms containing 123 embedded, cylindrical lesions constructed from PVA-C (poly(vinyl alcohol) cryogel) with diameters ranging from 1.6 to 15.9 mm. Identification of the colored lesion in the biopsy sample and analysis of the post-biopsy US images provided a model for the success rates. Using this, we predict that our apparatus will require six passes to biopsy a 3.0 mm lesion with 99% confidence. For the second experiment, agar phantoms were embedded with four rows of 0.8 mm stainless steel beads. A 14-gauge needle was inserted to each bead position seen in a 3D ultrasound scan and the tip position was compared to the pre-insertion bead position. The inter-observer standard errors of measurement were less than 0.15 and 0.28 mm for the bead and needle tip positions, respectively. The off-axis 3D 95% confidence intervals were determined to have widths between 0.43 and 1.71 mm, depending on direction and bead position.
Clinical Applications I
Clinical experience with a computer-aided diagnosis system for automatic detection of pulmonary nodules at spiral CT of the chest
Dag Wormanns, Martin Fiebich, Mustafa Saidi, et al.
The purpose of the study was to evaluate a computer-aided diagnosis (CAD) workstation with automatic detection of pulmonary nodules at low-dose spiral CT in a clinical setting for early detection of lung cancer. Two radiologists in consensus reported 88 consecutive spiral CT examinations. All examinations were reviewed using a UNIX-based CAD workstation with a self-developed algorithm for automatic detection of pulmonary nodules. The algorithm was designed to detect nodules with at least 5 mm diameter. The results of automatic nodule detection were compared to the consensus reporting of the two radiologists as the gold standard. Additional CAD findings were regarded as nodules initially missed by the radiologists or as false positive results. A total of 153 nodules were detected with all modalities (diameter: 85 nodules <5 mm, 63 nodules 5-9 mm, 5 nodules >=10 mm). Reasons for failure of automatic nodule detection were assessed. Sensitivity of the radiologists for nodules >=5 mm was 85%; sensitivity of CAD was 38%. For nodules >=5 mm without pleural contact, sensitivity was 84% for the radiologists and 45% for CAD. CAD detected 15 (10%) nodules not mentioned in the radiologists' report but representing real nodules, among them 10 (15%) nodules with a diameter >=5 mm. Reasons for nodules missed by CAD include: exclusion because of morphological features during region analysis (33%), nodule density below the detection threshold (26%), pleural contact (33%), segmentation errors (5%), and other reasons (2%). CAD improves detection of pulmonary nodules at spiral CT significantly and is a valuable second opinion in a clinical setting for lung cancer screening. Optimization of region analysis and an appropriate density threshold have the potential to further improve automatic nodule detection.
Computer-aided detection of lung cancer on chest radiographs: algorithm performance vs. radiologists' performance by size of cancer
Our goal was to perform a pre-clinical test of the performance of a new pre-commercial system for detection of primary early-stage lung cancer on chest radiographs developed by Deus Technologies, LLC. The RapidScreen™ RS 2000 System integrates state-of-the-art technical development in this field.
Virtual MR tagging on three-dimensional deformable models
Heejeong Kim, Jinah Park
Magnetic Resonance (MR) tagging techniques have recently received much attention for the generation of suitable data sets for cardiac motion analysis. Several different techniques have been developed for the analysis of 3D cardiac wall motion from such data sets. Unfortunately, due to the lack of gold-standard test data sets, most of the techniques are not fully evaluated. We have developed a virtual MR tagging system for the evaluation of such 3D motion estimation techniques. It generates virtual tagging images, in particular SPAMM images, of a deformable model that undergoes a predetermined motion sequence in a virtual environment. The input to the system is a set of vertices that characterizes the shape of the volumetric model at each time phase. After the user interactively specifies the imaging and tagging parameters, the system automatically marks the intersections between the image and tagging planes within the model tissue to track its motion, computing the deformed tagging lines over the time sequence. The output is the set of tomographic images of the virtual object with SPAMM grid patterns reflecting the known 3D deformation field, and it can be used to evaluate an arbitrary cardiac motion estimation technique.
Incorporation of surface-based deformations for updating images intraoperatively
Patient-to-image misalignment is exacerbated by common surgical events such as brain sag, drug interactions, retraction, and resection. One strategy to remedy this misregistration is to employ computational models in conjunction with low-cost intraoperatively acquired data (e.g. surface tracking and co-registered ultrasound) to deform preoperative imaging data to account for OR events. In this paper, we present preliminary data from a cortical surface scanning system and study the impact of surface-based information on model updates. Preliminary data are presented using 3D laser scanning technology in conjunction with an iterative closest point (ICP) algorithm to register and track phantom and ex vivo data. Simulations are presented to analyze the direct use of displacement data versus modeling the underlying physical load in a clinical example of gravity-induced deformation. Results demonstrate dramatic differences in subsurface deformation fields, highlighting that the nature of the surgical load (i.e. surface or body force) must be thoughtfully discriminated to accurately update images. Furthermore, the results suggest that the application of surface displacements to update image volumes must be consistent with the physical origin of deformation rather than applied in a direct interpolative sense.
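The iterative closest point algorithm mentioned in this abstract can be sketched generically. This is a minimal illustration assuming rigid motion and brute-force nearest-neighbor matching, not the authors' implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the best rigid fit (Kabsch/SVD)."""
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matched = dst[d.argmin(axis=1)]           # nearest-neighbor correspondence
    sc, mc = src.mean(0), matched.mean(0)     # centroids
    H = (src - sc).T @ (matched - mc)         # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mc - R @ sc
    return src @ R.T + t, R, t

def icp(src, dst, iters=20):
    """Repeat matching and rigid fitting until the clouds align."""
    for _ in range(iters):
        src, R, t = icp_step(src, dst)
    return src
```

In the registration setting described above, `dst` would be points from the laser-scanned cortical surface and `src` the corresponding surface extracted from the preoperative image volume.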
Orthopedic surgical analyzer for percutaneous vertebroplasty
Gye Rae Tack, Hyung Guen Choi, Do Hyung Lim, et al.
Since the spine is one of the most complex joint structures in the human body, its surgical treatment requires careful planning and a high degree of precision to avoid any unwanted neurological compromises. In addition, comprehensive biomechanical analysis can be very helpful because the spine is subject to a variety of loads. In the case of the osteoporotic spine, in which structural integrity has been compromised, the surgeon faces a double challenge, both clinical and biomechanical. Thus, we have been developing an integrated medical image system that is capable of both. This system, called the orthopedic surgical analyzer, combines the clinical results from image-guided examination and the biomechanical data from finite element analysis. In order to demonstrate its feasibility, this system was applied to percutaneous vertebroplasty. Percutaneous vertebroplasty is a surgical procedure that has recently been introduced for the treatment of compression fractures of osteoporotic vertebrae. It involves puncturing the vertebra and filling it with polymethylmethacrylate (PMMA). Recent studies have shown that the procedure can provide structural reinforcement for osteoporotic vertebrae while being minimally invasive and safe, with immediate pain relief. However, treatment failures due to excessive PMMA injection volume have been reported as one of the complications. It is believed that control of the PMMA volume is one of the most critical factors that can reduce the incidence of complications. Since the degree of osteoporosis can influence the porosity of the cancellous bone in the vertebral body, the injection volume can differ from patient to patient. In this study, the optimal volume of PMMA injection for vertebroplasty was predicted based on image analysis of a given patient.
In addition, biomechanical effects due to changes in PMMA volume and bone mineral density (BMD) level were investigated by constructing clinically relevant finite element models. In conclusion, we were able to demonstrate the feasibility of our orthopedic surgical analyzer in a case of percutaneous vertebroplasty.
Clinical Applications II
Computerized lateral endoscopic approach to intervertebral bodies
Hamid Reza Abbasi M.D., Sanaz Hariri, Daniel Kim, et al.
Spinal surgery is often necessary to ease back pain symptoms. Neuronavigation (NN) allows the surgeon to localize the position of his instruments in 3D using preoperative CT scans registered to intra-operative marker positions in cranial surgeries. However, this tool is unavailable in spinal surgeries for a variety of reasons. For example, because of the spine's many degrees of freedom and flexibility, the geometric relationship of the skin to the internal spinal anatomy is not fixed. Guided by the currently available imperfect 2D images, it is difficult for the surgeon to correct a patient's spinal anomaly; thus surgical relief of back pain is often only temporary. The Image Guidance Laboratory's (IGL) goal is to combine the direct optical control of traditional endoscopy with the 3D orientation of NN. This powerful tool requires registration of the patient's anatomy to the surgical navigation system using internal landmarks rather than skin markers. Preoperative CT scans matched with intraoperative fluoroscopic images can overcome the problem of spinal movement in NN registration. The combination of endoscopy with fluoroscopic registration of vertebral bodies in a NN system provides a 3D intra-operative navigational system for spinal neurosurgery to visualize the internal surgical environment from any orientation in real time. The accuracy of this system integration is being evaluated by assessing the success of nucleotomies and marker implantations guided by NN-registered endoscopy.
Evaluation of a new method for stenosis quantification from 3D x-ray angiography images
Fabienne Betting, Gilles Moris, Jerome Knoplioch, et al.
A new method for stenosis quantification from 3D X-ray angiography images has been evaluated on both phantom and clinical data. On phantoms, for parts larger than or equal to 3 mm, the standard deviation of the measurement error was always found to be less than or equal to 0.4 mm, and the maximum measurement error less than 0.17 mm. No clear relationship was observed between the performance of the quantification method and the acquisition FoV. On clinical data, the 3D quantification method proved to be more robust to vessel bifurcations than its 2D equivalent. On a total of 15 clinical cases, the differences between 2D and 3D quantification were always less than 0.7 mm. The conclusion is that stenosis quantification from 3D X-ray angiography images is an attractive alternative to quantification from 2D X-ray images.
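The abstract reports measurement errors but not the quantification formula itself. A common angiographic convention, percent diameter stenosis computed from a minimal lumen diameter and a healthy reference diameter, can be sketched as follows; the function name and the choice of inputs are illustrative, not taken from the paper.

```python
def percent_stenosis(d_min_mm: float, d_ref_mm: float) -> float:
    """Percent diameter stenosis from the minimal lumen diameter and a
    reference (healthy) vessel diameter, a standard angiographic measure."""
    if d_ref_mm <= 0:
        raise ValueError("reference diameter must be positive")
    return 100.0 * (1.0 - d_min_mm / d_ref_mm)

# e.g. a 1.2 mm minimal lumen within a 3.0 mm reference segment
print(percent_stenosis(1.2, 3.0))  # -> 60.0
```

Sub-millimeter errors in either diameter, as reported above, translate directly into a few percentage points of stenosis for vessels of about 3 mm.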
Volumetric subtraction angiography for image-guided therapy
Derek E. Hyde, Allan J. Fox, Terence M. Peters, et al.
Our goal is to improve the visualization of the intracranial vasculature during interventional procedures where radiographically dense objects would normally hinder clinical assessment. We describe our technique of Volumetric Subtraction Angiography (VSA), which removes bone, metal objects, and associated artifacts from the 3D contrast-enhanced image of the patient's vasculature. This work utilizes a prototype Computed Rotational Angiography (CRA) system that uses a C-arm mounted x-ray image intensifier to acquire 2D projections of the vasculature. A modified cone-beam computed tomography (CT) algorithm is then used to reconstruct a 3D image with isotropic voxels. Two volumes of data are acquired, and the anatomical mask is volumetrically subtracted (voxel-by-voxel) from the intra-arterial, contrast-enhanced image, producing a 3D angiogram.
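The voxel-by-voxel subtraction at the core of VSA can be sketched in a few lines. This is a minimal illustration assuming the two volumes are already reconstructed on the same isotropic grid; clamping negative residuals to zero is our assumption, not stated in the abstract.

```python
import numpy as np

def volumetric_subtraction(contrast_vol: np.ndarray, mask_vol: np.ndarray) -> np.ndarray:
    """Voxel-by-voxel subtraction of the non-contrast anatomical mask from the
    contrast-enhanced volume. Structures present in both volumes (bone, metal)
    cancel; negative residuals are clamped to zero, leaving the 3D angiogram."""
    if contrast_vol.shape != mask_vol.shape:
        raise ValueError("volumes must share one registered voxel grid")
    diff = contrast_vol.astype(np.float32) - mask_vol.astype(np.float32)
    return np.clip(diff, 0.0, None)
```

In practice the subtraction is only as good as the registration between the two acquisitions; any patient motion between mask and contrast runs reappears as edge artifacts.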
Three-dimensional correlation of MR images to muscle tissue response for interventional MRI thermal ablation
Michael S. Breen, Tanya L. Lancaster, Roee S. Lazebnik M.D., et al.
We are treating tumors using radiofrequency (RF) ablation under interventional MRI (iMRI) guidance. We investigated the ability of MR to monitor the treated region by comparing MR thermal lesion images to cellular damage as seen histologically. Our new methodology allows 3D registration that should enable more accurate correlation than previous 2D methods. Using a low-field (0.2 T) open-magnet iMRI system for probe guidance, we applied RF ablation to the thigh muscle of four New Zealand White rabbits. To relate in vivo MR and histology images, we obtained intermediate ex vivo MR images and pictures of thick tissue slices obtained using a specially designed apparatus. Registration was done with a computer algorithm that matches tracks of needle fiducials placed near the tissue of interest. After registration, we determined that the region inside the circular, hyperintense rim in MR closely corresponds to the region of necrosis as determined by histology in animals sacrificed 30 minutes after ablation. This is good evidence that iMRI images can be used for real-time feedback during thermal RF ablation treatments.
Surgically appropriate maximum intensity projections: quantization of vessel depiction and incorporation into surgical navigation
Integration of tomographic angiograms into neurosurgical navigation should decrease the probability of vascular injury and allow localization of vascular lesions. Information from angiograms is often presented using maximum intensity projections (MIPs), which provide a more intuitive presentation of 3D vascular structures. Conventional MIPs involve the whole image volume during ray casting. Our goal was to construct surgically appropriate MIPs that excluded information contralateral to the operation site and to quantify the accuracy of vessel depiction using this new method. For each angiogram slice, the center of mass (COM) was calculated. Together, the COM coordinates formed a boundary plane that clipped the contralateral information from ray casting. A separate depth buffer was created to preserve 3D information. MIPs were examined quantitatively using a mathematical model of the head containing vascular structures of known diameter. The vessel widths of the resulting MIPs were then measured and compared. To examine the spatial accuracy of MIP images, a vascular phantom was created, which had rigid vessels of known diameter and extrinsic fiducial markers to perform a physical to image space registration. Studies with the mathematical model showed that the vessels appeared smaller in MIPs than their actual diameters. This decrease is attributed to the statistical properties of the ray casting process that are affected by the pathlength. Studies with the vascular phantom show correct localization of the probe in tomographic and projective image space. From these studies, we concluded that additional methods for providing information concerning vessel proximity during surgical guidance should be investigated. Surgically appropriate MIPs provide comparable images to conventional MIPs; however, they allow more focus on the vascular structures in proximity to the target site.
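The center-of-mass clipping described above can be sketched as follows. This is a minimal illustration assuming an axial volume ordered (slice, AP, LR), hard clipping at the per-slice COM boundary, and ray casting as a simple per-ray maximum; the axis ordering and clipping details are our assumptions, not the authors' implementation.

```python
import numpy as np

def surgical_mip(volume: np.ndarray, keep_left: bool = True) -> np.ndarray:
    """For each axial slice, compute the intensity center of mass along the
    left-right axis and zero out voxels contralateral to the operative side,
    then ray-cast a maximum intensity projection along the AP axis."""
    vol = volume.astype(np.float32)
    nz, ny, nx = vol.shape
    x = np.arange(nx, dtype=np.float32)
    clipped = vol.copy()
    for k in range(nz):
        total = vol[k].sum()
        # COM of the slice along left-right; fall back to midline if empty
        com_x = (vol[k].sum(axis=0) * x).sum() / total if total > 0 else nx / 2.0
        if keep_left:
            clipped[k, :, int(round(com_x)):] = 0.0   # drop contralateral (right) side
        else:
            clipped[k, :, :int(round(com_x))] = 0.0   # drop contralateral (left) side
    return clipped.max(axis=1)  # per-ray maximum = MIP
```

The stack of per-slice COM coordinates forms the boundary plane described in the abstract; a real system would also keep the depth of each per-ray maximum in a separate buffer to preserve 3D information.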
Modeling and Simulation
icon_mobile_dropdown
Image-guided surgery and therapy: current status and future directions
Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally invasive surgical procedures. Volumetric CT and MR images have been used for some time now, in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes frameless procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate, and heart. Since tracking systems allow image-guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to-patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive and unobtrusive, and provide simple access to the source of the images.
Clinical Applications III
icon_mobile_dropdown
CT-directed robotic biopsy testbed: motivation and concept
Kevin Robert Cleary, Dan S. Stoianovici, Neil D.W. Glossop, et al.
As a demonstration platform, we are developing a robotic biopsy testbed incorporating a mobile CT scanner, a small needle driver robot, and an optical localizer. This testbed will be used to compare robotically assisted biopsy to the current manual technique, and will allow us to investigate software architectures for integrating multiple medical devices. This is a collaboration between engineers and physicians from three universities and a commercial vendor. In this paper we describe the CT-directed biopsy technique, review some other biopsy systems including passive and semi-autonomous devices, describe our testbed components, and present our software architecture. This testbed is a first step in developing the image-guided, robotically assisted, physician-directed biopsy systems of the future.
New automatic mode of visualizing the colon via Cine CT
Jayaram K. Udupa, Dewey Odhner, Harvey C. Eisenberg
Methods of visualizing the inner colonic wall using CT images have been actively pursued in recent years in an attempt to eventually replace conventional colonoscopic examination. In spite of impressive progress in this direction, there are still several problems that need satisfactory solutions. Among these, we address three problems in this paper: segmentation, coverage, and speed of rendering. Instead of thresholding, we utilize the fuzzy connectedness framework to segment the colonic wall. Instead of the endoscopic viewing mode and various mapping techniques, we utilize the central line through the colon to automatically generate viewing directions that are en face with respect to the colon wall, thereby avoiding blind spots in viewing. We utilize some modifications of the ultra-fast shell rendering framework to ensure fast rendering speed. The combined effect of these developments is that a colon study requires an initial 5 minutes of operator time plus an additional 5 minutes of computational time; subsequently, en face renditions are created in real time (15 frames/sec) on a 1 GHz Pentium PC under the Linux operating system.
Clinical Applications IV
icon_mobile_dropdown
Computer-aided osteotomy design for harvesting autologous bone grafts in reconstructive surgery
Zdzislaw Krol, Peter Zerfass, Bartosz von Rymon-Lipinski, et al.
Autologous grafts serve as the standard grafting material in the treatment of maxillofacial bone tumors, traumatic defects or congenital malformations. The pre-selection of a donor site depends primarily on the morphological fit of the available bone mass and the shape of the part that has to be transplanted. To achieve sufficient incorporation of the autograft into the host bone, precise planning and simulation of the surgical intervention based on 3D CT studies is required. This paper presents a method to identify an optimal donor site by performing an optimization of appropriate similarity measures between the donor region and a given transplant. At the initial stage the surgeon has to delineate the osteotomy border lines in the template CT data set and to define a set of constraints for the optimization task in the donor site CT data set. The following fully automatic optimization stage delivers a set of sub-optimal and optimal donor sites for a given template. All generated solutions can be explored interactively on the computer display using an efficient graphical interface. Reconstructive operations supported by our system were performed on 28 patients. We found that the operation time can be considerably shortened by this approach.
Three-dimensional visualization system as an aid for facial surgical planning
Sebastien Barre, Christine Fernandez-Maloigne, Patricia Paume, et al.
We present an aid for the treatment of facial deformities. We designed a system for surgical planning and prediction of the human facial aspect after maxillo-facial surgery. We study the 3D reconstruction process of the tissues involved in the simulation, starting from CT acquisitions. 3D iso-surface meshes of soft tissues and bone structures are built. A sparse set of still photographs is used to reconstruct a 360-degree texture of the facial surface and increase its visual realism. Reconstructed objects are inserted into an object-oriented, portable and scriptable visualization software environment allowing the practitioner to manipulate and visualize them interactively. Several LOD (level-of-detail) techniques are used to ensure usability. Bone structures are separated and moved by means of cut planes matching orthognathic surgery procedures. We simulate soft tissue deformations by creating a physically based spring model between both tissues. The new static state of the facial model is computed by minimizing the energy of the spring system to achieve equilibrium. This process is optimized by transferring information such as participation hints at the vertex level between a warped generic model and the facial mesh.
Correlation of preoperative MRI and intraoperative 3D ultrasound to measure brain tissue shift
David G. Gobbi, Belinda K. H. Lee, Terence M. Peters
B-mode ultrasound is often used during neurosurgery to provide intra-operative images of the brain through a craniotomy, but the use of 3D ultrasound during surgery is still in its infancy. We have developed a system that provides real-time freehand 3D ultrasound reconstruction at a reduced resolution. The reconstruction proceeds incrementally and the 3D image is overlaid, via a computer, on a pre-operative 3D MRI scan. This provides the operator with the necessary feedback to maintain a constant freehand sweep rate, and also ensures that the sweep covers the desired anatomical volume. All of the ultrasound video frames are buffered, and a full-resolution, compounded reconstruction proceeds once the manual sweep is complete. We have also developed tools for manual tagging of homologous landmarks in the 3D MRI and 3D ultrasound volumes that use a piecewise cubic approximation of thin-plate spline interpolation to achieve interactive nonlinear registration and warping of the MRI volume to the ultrasound volume. Each time a homologous point-pair is identified by the user, the image of the warped MRI is updated on the computer screen in less than 0.5 s.
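The paper uses a fast piecewise cubic approximation of thin-plate spline interpolation; for illustration, the exact 2D thin-plate spline that such an approximation targets can be written compactly (the 3D case used for MRI-to-ultrasound warping is analogous, with a different radial kernel). This is a sketch, not the authors' implementation.

```python
import numpy as np

def tps_warp_2d(src_pts, dst_pts, query_pts):
    """Exact 2D thin-plate spline mapping src landmarks onto dst landmarks,
    evaluated at query points. Kernel U(r) = r^2 log r; the linear system
    solves for n radial weights plus a 3-parameter affine part per axis."""
    src = np.asarray(src_pts, float)
    dst = np.asarray(dst_pts, float)
    n = len(src)

    def U(r2):
        # 0.5 * r^2 * log(r^2) == r^2 * log(r), with U(0) = 0
        with np.errstate(divide="ignore", invalid="ignore"):
            return np.where(r2 > 0, 0.5 * r2 * np.log(r2), 0.0)

    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    K = U(d2)
    P = np.hstack([np.ones((n, 1)), src])
    L = np.zeros((n + 3, n + 3))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.zeros((n + 3, 2))
    rhs[:n] = dst
    params = np.linalg.solve(L, rhs)          # radial weights + affine part
    q = np.asarray(query_pts, float)
    d2q = ((q[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    return U(d2q) @ params[:n] + np.hstack([np.ones((len(q), 1)), q]) @ params[n:]
```

Because the spline interpolates the landmarks exactly, each new homologous point-pair added by the user pulls the warped MRI into exact agreement with the ultrasound at that point; the piecewise cubic approximation trades this exactness for the sub-0.5 s interactive update rate.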
Intraoperative MR-guided DBS implantation for treating PD and ET
Haiying Liu, Robert E. Maxwell, Charles L. Truwit
Deep brain stimulator (DBS) implantation is a promising treatment alternative for suppressing the motor tremor symptoms of Parkinson's disease (PD) patients. The main objective is to develop a minimally invasive approach using high-spatial-resolution, soft-tissue-contrast MR imaging techniques to guide the surgical placement of the DBS. In the MR-guided procedure, high-spatial-resolution MR images were obtained intra-operatively and used to stereotactically target a specific deep brain location. The neurosurgery for craniotomy was performed in front of the magnet, outside of the 10 Gauss line. Aided by a positional registration assembly for the stereotactic head frame, the target location (VIM, GPi, or STN) in the deep brain areas was identified and measured from the MR images in reference to the markers in the calibration assembly of the head frame before the burrhole prep. In 20 patients, MR-guided DBS implantations have been performed according to the new methodology. MR-guided DBS implantation at high magnetic field strength has been shown to be feasible and desirable. In addition to the improved outcome, this offers a new surgical approach in which intra-operative visualization is possible during intervention, and any complications such as bleeding can be assessed in situ immediately prior to dural closure.
Neurosurgery for functional disorders guided by multimodality imaging
Andres Santos, Javier Pascau, Manuel Desco, et al.
This paper presents a procedure for combining MR anatomical information with a stereotactic reference obtained from a CT study with a Leksell frame attached to the patient's head, in order to guide neurosurgery of functional disorders. MRI acquisition can be performed well before the surgery, without the stereotactic frame. On the day of the intervention, after attaching the Leksell frame to the patient, a CT image (1.5 mm slices) is acquired. This study provides the stereotactic reference and includes only the part of the brain where the disorder is located. Before surgery, physicians register the MRI with the CT using a procedure based on an automated algorithm (mutual information) and visually check the result. The MRI is used to locate the target in the brain, while the frame visible in the CT allows calculation of the stereotactic 3D coordinates. Frame references are located on the MRI image, allowing calculation of the Leksell coordinates of any given point. When the exact position of relevant structures has been recorded, the physicians proceed with the surgery. The protocol has been tested in ten patients, showing a positive surgical outcome with a significant decrease of the functional disorders. The method has proved to be sufficiently accurate, avoiding the use of stereotactic frames during the MR acquisition and making the clinical procedure simpler and faster.
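Once the mutual-information MRI-to-CT registration and the CT-to-frame transform from the Leksell fiducials are available, mapping an MRI-selected target into frame coordinates is just a chain of homogeneous transforms. A minimal sketch; the matrix names and the use of 4x4 homogeneous matrices are our illustrative assumptions:

```python
import numpy as np

def mri_point_to_leksell(p_mri, T_mri_to_ct, T_ct_to_frame):
    """Map a target selected on MRI into Leksell frame coordinates by chaining
    the MRI-to-CT registration with the CT-to-frame transform derived from the
    fiducial rods visible in the CT. Both transforms are 4x4 homogeneous
    matrices (hypothetical inputs for illustration)."""
    p = np.append(np.asarray(p_mri, float), 1.0)   # homogeneous coordinates
    return (T_ct_to_frame @ T_mri_to_ct @ p)[:3]
```

Chaining the transforms this way is what lets the frame stay off during the MR acquisition: only the CT, acquired with the frame on, anchors the stereotactic coordinate system.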
Display
icon_mobile_dropdown
Visual CRT sharpness estimation using a fiducial marker set
Kevin S. Kohm, Andrew W. Cameron, Richard L. Van Metter
A visual estimation technique has been developed to quickly, yet quantitatively, determine the sharpness quality of CRT displays. While high-resolution camera measurement equipment accurately characterizes display sharpness, the equipment cost is high and the measurements are time-consuming to perform. Previously reported visual sharpness assessment techniques are either qualitative, or their quantitative measures do not possess adequate sensitivity. The rating scheme investigated in this study provides a practical solution for tracking monitor sharpness in a clinical environment. The target consists of a high-frequency, high-contrast pattern with an embedded, magnified fiducial marker set based upon a Gaussian model for the CRT spot. The magnification of the marker set allows the reference to remain nearly invariant to the actual sharpness of the display. In this study, three commercially available diagnostic displays were evaluated, each at two luminance levels and seven static focus settings. High-resolution CCD camera measurements were acquired for each display and setting combination. The visual sharpness estimate target was then displayed and scored by observers. High correlation was found between the visual ratings and the photometric measurements. More importantly, the sensitivity of the target produced observer ratings that distinguish between the measured CRT spot sizes at different focus levels.
Optimal display processing for digital radiography
Michael J. Flynn, Mary Couwenhoven, William R. Eyler, et al.
Display processing is used to transform digital radiography raw data in log-signal units to display values for presentation using a workstation or film printer. Radiographic appearance with respect to subject latitude and detail contrast varies significantly depending on the signal equalization and grayscale rendition used for processing. A human observer study was conducted to define the latitude and detail contrast that is judged optimal for a broad spectrum of chest radiographs. Raw data for 12 chest radiographs acquired with storage phosphor digital radiography systems were transformed using 52 different combinations of latitude and detail contrast. For specific latitude values, contrast was adjusted by varying the equalization gain. Three radiologists at three different medical centers evaluated the images. Each image was compared to a reference image using a calibrated display on a computer workstation. For PA views, processing that produced a detail contrast of 3.14 (ΔD/ΔlogE) and latitude of 1.47 (ΔlogE for ΔD = 1.75) was determined to be best for all cases and was achieved with an equalization gain of 2.64. For lateral views, a detail contrast of 3.42 and latitude of 1.17 was best for all cases (gain = 2.29). For individual cases, the preferred processing varied from the global average primarily with respect to latitude.
Advanced amorphous silicon thin film transistor active-matrix organic light-emitting displays design for medical imaging
Joo-Han Kim, Jerzy Kanicki
Constant-current, active-matrix organic light-emitting displays (AM-OLEDs) with advanced hydrogenated amorphous silicon thin-film transistor (a-Si:H TFT) pixel electrode circuits have been designed in our laboratory for medical applications. An extensive pixel electrode circuit simulation and analysis indicates that continuous pixel electrode excitation can be achieved with these circuits, and a pixel electrode driving output current level of up to 1.4 µA can be reached with a-Si:H TFT technology. The small feed-through voltage (a few tenths of a millivolt) that can be achieved with this circuit will enhance the display gray-level controllability needed for medical imaging. Each pixel electrode has a threshold voltage compensation circuit to adjust the pixel electrode driving current level for threshold voltage shifts of both the organic light-emitting diodes (OLEDs) and the current-driving a-Si:H TFT. For a 16-inch VGA full-color AM-OLED with a pixel electrode size of ~60×115 µm², the output current level is equivalent to a pixel current density of 20 mA/cm². Assuming OLEDs with an external quantum efficiency of 1%, AM-OLED brightness of ~88, ~960, and ~160 cd/m² for red (650 nm), green (540 nm), and blue (480 nm) light emission, respectively, can be achieved with this type of pixel electrode circuit.
Standardization of hanging protocols using the unified modeling language
Daniel J. Valentino, Jack Wei, Robert A. Fiske, et al.
Diagnostic workstation hanging protocols describe how to lay out radiographic images according to predefined user preferences. In our prior work, we developed the concept of Structured Display Protocols (SDP) that structure the presentation of data according to the diagnostic task. In this work an object-oriented analysis procedure was used to define the workflow (the process model) for a radiology activity as well as the data objects viewed or manipulated in that activity (the data model). Within the workflow there exist specific phases of activity (modes) designed to satisfy specific sub-tasks in the diagnostic process. Some tasks may, or may not, be performed, depending upon the case. These tasks are mapped to mode-specific tools. A critical feature of a SDP is that each viewing mode presents only the data (and tools) relevant to a specific task. The results of each step were represented using specific UML diagrams. From the UML diagrams, structured display protocols were implemented on a publicly available workstation and were then used in routine clinical practice by radiologists and film librarians.
Correction of digitized mammograms to enhance soft display and tissue composition measurement
Xiao Hui Wang, Bin Zheng, Yuan-Hsiang Chang, et al.
The wide dynamic range present in digitized mammographic data, partially resulting from the non-uniform thickness of tissue during breast compression, makes it difficult to find window and level values that are appropriate to display the entire image. Further, this factor, combined with the non-linearity of the relationship between density and log exposure, confounds attempts to automatically derive tissue composition information directly from uncorrected data. This project attempts to address these issues by making appropriate local image corrections based on the characteristic curves of the film and digitizer, as well as on the variations in tissue thickness during breast compression. Subjective comparisons of the display techniques developed in this project to mammography displays based on local histogram equalization methods for reducing image dynamic range clearly demonstrate superior performance of the methods presented in this paper. In addition to this subjective observation about image display, we also investigated the possibility of using corrected data to improve the performance of tissue composition measurements. A neural network classifier was developed to use features derived from the volume-corrected histogram of the corrected mammographic data to estimate tissue composition. Results indicate that tissue composition measurements are more highly correlated with radiologists' estimates when they are derived from corrected images.
Visualization I
icon_mobile_dropdown
Modeling liver motion and deformation during the respiratory cycle using intensity-based free-form registration of gated MR images
In this paper, we demonstrate a technique for modeling liver motion during the respiratory cycle using intensity-based free-form deformation registration of gated MR images. We acquired 3D MR image sets (multislice 2D) of the abdomen of four volunteers at end-inhalation, end-exhalation, and eight time points in between using respiratory gating. We computed the deformation field between the images using intensity-based rigid and non-rigid registration algorithms. The non-rigid transformation is a free-form deformation with B-spline interpolation between uniformly-spaced control points. The transformations between inhalation and exhalation were visually inspected. Much of the liver motion is cranial-caudal translation, and thus the rigid transformation captures much of the motion. However, there is still substantial residual deformation of up to 2 cm. The free-form deformation produces a motion field that appears on visual inspection to be accurate. This is true for the liver surface, internal liver structures such as the vascular tree, and the external skin surface. We conclude that abdominal organ motion due to respiration can be satisfactorily modeled using an intensity-based non-rigid 4D image registration approach. This allows for an easier and potentially more accurate and patient-specific deformation field computation than physics-based models using assumed tissue properties and acting forces.
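The free-form deformation evaluates a B-spline-weighted sum of control-point displacements. A one-dimensional sketch with the standard cubic B-spline basis follows; the full 3D transformation is the tensor product of this construction along each axis. The uniform spacing and the interior-point assumption (four valid neighboring control points) are ours:

```python
import numpy as np

def ffd_displacement_1d(x, control_disp, spacing):
    """Cubic B-spline free-form deformation along one axis: the displacement
    at x is a weighted sum of the 4 nearest uniformly spaced control-point
    displacements, using the standard cubic B-spline basis functions."""
    i = int(np.floor(x / spacing))      # index of the spanning cell
    u = x / spacing - i                 # local coordinate in [0, 1)
    B = [(1 - u) ** 3 / 6.0,
         (3 * u ** 3 - 6 * u ** 2 + 4) / 6.0,
         (-3 * u ** 3 + 3 * u ** 2 + 3 * u + 1) / 6.0,
         u ** 3 / 6.0]
    # assumes 0 <= i - 1 and i + 2 < len(control_disp)
    return sum(B[k] * control_disp[i - 1 + k] for k in range(4))
```

The basis functions sum to one at every x (partition of unity), so a uniform set of control displacements reproduces a rigid translation exactly; the non-rigid residual deformation of up to 2 cm reported above is carried by the variation between neighboring control points.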
Extendable application framework for medical visualization and surgical planning
Thomas Jansen, Bartosz von Rymon-Lipinski, Zdzislaw Krol, et al.
This paper introduces Julius, an extendable cross-platform software framework for medical visualization and surgical planning, consisting of two conceptual layers: the Julius Software Development Kit (JSDK) and the Julius Graphical User Interface (JGUI). The JSDK can be used stand-alone to speed up development of research tools, while the JGUI acts as a front end for the JSDK and offers easy handling combined with time-saving functionality to increase performance and productivity. Julius features a modular, cross-platform design and comes with a full set of components, such as semi-automatic segmentation, registration, visualization, and navigation.
Comparison of an incremental versus single-step retraction model for intraoperative compensation
Leah A. Platenik, Michael I. Miga, David W. Roberts, et al.
Distortion between the operating field and preoperative images increases as image-guided surgery progresses. Retraction is a typical early-stage event that causes significant tissue deformation, which can be modeled as part of an intraoperative compensation strategy. This study compares the predictive power of incremental versus single-step retraction models in the porcine brain. In vivo porcine experiments were conducted that involved implanting markers in the brain whose trajectories were tracked in CT scans following known incremental deformations induced by a retractor blade placed interhemispherically. Studies were performed using a 3D consolidation model of brain deformation to investigate the relative predictive benefits of incremental versus single-step retraction simulations. The results show that both models capture greater than 75% of the tissue loading due to retraction. We have found that the incremental approach outperforms the single-step method, with an average improvement of 1.5%-3%. More importantly, it also preferentially recovers the directionality of movement, providing better correspondence to intraoperative surgical events. A new incremental approach to tissue retraction has been developed and shown to improve the data-model match in retraction experiments in the porcine brain. Incremental retraction modeling is an improvement over previous single-step models that does not incur additional computational overhead. Results in the porcine brain show that even when the overall displacement magnitudes between the two models are similar, directional trends of the displacement field are often significantly improved with the incremental method.
Real-time simulation and visualization of volumetric brain deformation for image-guided neurosurgery
Matthieu Ferrant, Arya Nabavi, Benoit M. M. Macq, et al.
During neurosurgery, the challenge for the neurosurgeon is to remove as much of a tumor as possible without destroying healthy tissue. This can be difficult because healthy and diseased tissue can have the same visual appearance. For this reason, and because the surgeon cannot see underneath the brain surface, image-guided neurosurgery systems are being used increasingly. However, during surgery, deformation of the brain occurs (due to brain shift and tumor resection), causing errors in the surgical plan with respect to preoperative imaging. In our previous work, we developed software for capturing the deformation of the brain during neurosurgery. The software also allows preoperative data to be updated according to the intraoperative imaging so as to reflect the shape changes of the brain during surgery. Our goal in this paper was to rapidly visualize and characterize this deformation over the course of surgery with appropriate tools. We therefore developed tools allowing the surgeon to visualize (in 2D and 3D) the deformations, as well as the stress tensors characterizing the deformation, along with the updated preoperative and intraoperative imaging during the course of surgery. Such tools significantly add to the value of intraoperative imaging and hence could improve surgical outcomes.
Presenting 3D MRI data in visual perspective
Haiying Liu, Chialei Chin
To improve depth perception, a projection scheme was developed that incorporates visual perspective transformation into the MR image reconstruction, in which an observation point (OP) is selected in space relative to the image volume. The image formation process was modeled as a camera whose lens center (LC) coincides with the OP, with a picture plane (PP) placed at the focal plane. The finite-sized PP corresponds to what the observer sees. Each pixel in the PP defines a ray projection trajectory (PT) from the LC. The intensity values for a set of discrete points along the PT were interpolated from the image volume. From these projection profiles, various projection images can be derived depending on how the intensity profile is manipulated. To demonstrate the effectiveness of the method, we obtained three-dimensional image data sets, which included anatomical (TR/TE/flip = 13/6/9) and angiographic (TR/TE/flip = 30/5.4/15) data sets in transverse orientation from a patient head at 1.5T. Both quasi-surface rendering and MaxIP algorithms were used along the viewing trace of the perspective projection rather than the parallel projection. For multiple OPs at different distances from the 3D image volume, consecutive views were obtained at a constant azimuthal interval. When displayed in cine mode, the MaxIP images appeared realistic with improved depth perception; different portions of the image moved naturally, in accordance with the expectations of the human visual system.
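The per-pixel ray marching described above can be sketched as follows, using nearest-neighbor sampling for brevity (the paper interpolates intensity values along the trajectory); the sampling range and step count below are arbitrary illustrative choices.

```python
import numpy as np

def perspective_maxip(volume, lens_center, picture_plane_pts, n_samples=64):
    """Maximum-intensity projection along perspective rays: each picture-plane
    pixel defines a ray from the lens center (LC) through that pixel; the
    volume is sampled at discrete points along the ray and the maximum
    intensity over the in-volume samples is kept."""
    lc = np.asarray(lens_center, float)
    out = np.empty(len(picture_plane_pts))
    for idx, pp in enumerate(picture_plane_pts):
        direction = np.asarray(pp, float) - lc
        ts = np.linspace(0.0, 2.0, n_samples)      # march from LC past the PP
        pts = lc + ts[:, None] * direction
        ijk = np.round(pts).astype(int)            # nearest-neighbor sampling
        inside = np.all((ijk >= 0) & (ijk < np.array(volume.shape)), axis=1)
        out[idx] = volume[tuple(ijk[inside].T)].max() if inside.any() else 0.0
    return out
```

Because the rays diverge from the LC rather than running parallel, structures nearer the observer subtend more pixels than distant ones, which is the depth cue the parallel-projection MaxIP lacks.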
Development of fluoroscopic registration in spinal neuronavigation
Hamid Reza Abbasi M.D., Robert Grzeszczuk, Shao Chin, et al.
We present a system involving a computer-instrumented fluoroscope for the purpose of 3D navigation and guidance using pre-operative diagnostic scans as a reference. The goal of the project is to devise a computer-assisted tool that will improve accuracy, reduce risk, minimize invasiveness, and shorten the time it takes to perform a variety of neurosurgical and orthopedic procedures of the spine. For this purpose we propose an apparatus that will track surgical tools and localize them with respect to the patient's 3D anatomy and pre-operative 3D diagnostic scans, using intraoperative fluoroscopy for in situ registration and localization of embedded fiducials. Preliminary studies have found a fiducial registration error (FRE) of 1.41 mm and a target localization error (TLE) of 0.48 mm. The resulting system leverages equipment already commonly available in the operating room (OR), providing an important new functionality that is free of many current limitations, such as the inadequacy of skin fiducials for spinal neuronavigation, while keeping costs contained.
Visualization II
Photorealistic texture mapping for voxel-based volume data
Tzu-Lun Weng, Wen-Yan Chang, Shyh-Roei Wang, et al.
In computerized image and graphics applications, texture mapping is one of the most commonly used methods to improve the realism or enhance the visual effect of object rendering without much increase in computational complexity. In conventional texture mapping, a 3D object must first be converted to a polygonal structure and then mapped with a texture or photographic image. However, it is usually computationally expensive to convert the original voxel data to polygons, and even more complex to map the 2D texture image onto the 3D polygonal structure. Beyond its computational cost, the polygonal structure is also inefficient at preserving the internal information of volume data. Because most volume data acquired by medical imaging devices or 3D scanners are in voxel format, it is more appropriate to handle these data directly in that format. In this paper, we propose a new texture mapping method based on chain-coding flattening that handles voxel-based data directly. The computation is thus reduced significantly, and the internal information can be fully utilized and preserved. The method flattens a 3D object surface onto a 2D plane and then uses a 2D warping technique to generate the correspondences between the object surface and the texture image. The polygonal transformation required in the conventional approach is therefore no longer necessary, and texture mapping is handled with inexpensive 2D computation. Experimental results have shown the effectiveness and efficiency of the proposed algorithm.
3D MR imaging in real time
A system has been developed to produce live 3D volume renderings from an MR scanner. Whereas real-time 2D MR imaging has been demonstrated by several groups, 3D volumes are currently rendered off-line to gain greater understanding of anatomical structures. For example, surgical planning is sometimes performed by viewing 2D images or 3D renderings from previously acquired image data. A disadvantage of this approach is the misregistration that can occur if the anatomy changes due to normal muscle contractions or surgical manipulation. The ability to produce volume renderings in real time and present them in the magnet room could eliminate this problem, and enable or benefit other types of interventional procedures. The system uses the data stream generated by a fast 2D multi-slice pulse sequence to update a volume rendering immediately after each new slice becomes available. We demonstrate some basic types of user interaction with the rendering during imaging at a rate of up to 20 frames per second.
3D modeling and segmentation of diffusion weighted MRI data
Leonid Zhukov, Ken Museth, David E. Breen, et al.
Diffusion weighted magnetic resonance imaging (DW MRI) is a technique that measures the diffusion properties of water molecules to produce a tensor-valued volume dataset. Because water molecules can diffuse more easily along fiber tracts, for example in the brain, rather than across them, diffusion is anisotropic and can be used for segmentation. Segmentation requires the identification of regions with different diffusion properties. In this paper we propose a new set of rotationally invariant diffusion measures which may be used to map the tensor data into a scalar representation. Our invariants may be rapidly computed because they do not require the calculation of eigenvalues. We use these invariants to analyze a 3D DW MRI scan of a human head and build geometric models corresponding to isotropic and anisotropic regions. We then utilize the models to perform quantitative analysis of these regions, for example calculating their surface area and volume.
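The kind of eigenvalue-free rotational invariant the abstract refers to can be illustrated as follows. These particular measures — trace, determinant, eigenvalue variance, and a fractional-anisotropy-like ratio, all derived from tr(D) and tr(D²) — are illustrative choices, not necessarily the paper's exact invariants.

```python
import numpy as np

def tensor_invariants(D):
    """Rotationally invariant measures of a 3x3 diffusion tensor,
    computed without eigendecomposition (illustrative choices)."""
    tr = np.trace(D)             # 3 x mean diffusivity
    tr2 = np.trace(D @ D)        # sum of squared eigenvalues
    det = np.linalg.det(D)       # product of eigenvalues
    mean = tr / 3.0
    var = tr2 / 3.0 - mean ** 2  # eigenvalue variance, rotation-invariant
    # fractional-anisotropy-like ratio: 0 for isotropic tensors
    fa = np.sqrt(1.5 * (tr2 - tr ** 2 / 3.0) / tr2) if tr2 > 0 else 0.0
    return tr, det, var, fa
```

Because every quantity is built from traces and the determinant, the values are unchanged under any rotation R (D → RDRᵀ), which is what makes them usable as scalar maps for segmentation.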
T-shell rendering
Previously, shell rendering was shown to be an ultra-fast method of rendering digital surfaces represented as sets of voxels. The purpose of this work is to describe a new extension to the shell rendering method that creates isosurfaces, called t-shells, using triangulated shell elements. The great speed of the shell rendering technique comes from data structures that describe the shell and from algorithms that traverse this information to produce the 2D projection of the 3D data. In traditional shell rendering, each shell element is a triple comprising the offset from the start of the row, the neighborhood code, and the surface normal. We modify this data structure by replacing the neighborhood code with a code that indicates the configuration of triangles within that area. The t-shell algorithm modifies the original shell rendering algorithm to project one of the 256 possible triangulated configurations (rather than the rasterization of a single, uniform shell element). We present the general t-shell algorithm as well as the results of two preliminary implementations applied to input data consisting of different objects from various parts of the body and various modalities, with a variety of surface sizes and shapes. We present the results of some initial timing experiments as well as some preliminary sample renditions.
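The modified shell-element record can be sketched as follows. This is a schematic stand-in only: the triangle table is left empty, and the corner-packing convention is an assumption in the spirit of marching-cubes configuration codes, not the paper's actual encoding.

```python
# Illustrative t-shell element records and configuration coding.
# In classic shell rendering each shell element is a triple:
#   (row_offset, neighborhood_code, surface_normal)
# The t-shell variant replaces the neighborhood code with an index into
# a 256-entry table of triangle configurations, one per corner-occupancy
# pattern. The empty table below is a placeholder, not the real table.
TRI_TABLE = {code: [] for code in range(256)}  # code -> list of triangles

def corner_code(corners):
    """Pack 8 binary corner samples of a cell into a config code 0..255."""
    code = 0
    for bit, inside in enumerate(corners):
        if inside:
            code |= 1 << bit
    return code

# A t-shell element then becomes, e.g.:
#   element = (row_offset, corner_code(corners), normal)
```

The renderer looks up `TRI_TABLE[element[1]]` and projects those triangles instead of rasterizing a single uniform shell element.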
Real-time 3D ultrasound imaging on a next-generation media processor
Niko Pagoulatos, Frederic Noraz, Yongmin Kim
3D ultrasound (US) provides physicians with a better understanding of human anatomy. By manipulating the 3D US data set, physicians can observe the anatomy in 3D from a number of different view directions and obtain 2D US images that would not be possible to acquire directly with the US probe. For 3D US to come into widespread clinical use, creation and manipulation of the 3D US data should be done at interactive rates. This is a challenging task due to the large amount of data to be processed. Our group previously reported interactive 3D US imaging using a programmable mediaprocessor, the Texas Instruments TMS320C80, which has been in clinical use. In this work, we present the algorithms we have developed for real-time 3D US using a newer and more powerful mediaprocessor called MAP-CA. MAP-CA is a very long instruction word (VLIW) processor developed for multimedia applications. It has multiple execution units, a 32-kbyte data cache, and a programmable DMA controller called the data streamer (DS). A forward-mapping 6-DOF reconstruction algorithm with zero-order interpolation (for a freehand 3D US system that uses a magnetic position sensor to track the US probe) is achieved in 11.8 msec (84.7 frames/sec) per 512x512 8-bit US image. For 3D visualization of the reconstructed 3D US data sets, we used volume rendering, in particular the shear-warp factorization with maximum intensity projection (MIP) rendering. 3D visualization is achieved in 53.6 msec (18.6 frames/sec) for a 128x128x128 8-bit volume and in 410.3 msec (2.4 frames/sec) for a 256x256x256 8-bit volume.
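The forward-mapping, zero-order reconstruction step can be sketched as follows. This is an illustrative NumPy version under assumed conventions: the 4x4 homogeneous pose matrix and the function names are inventions for the sketch, and the actual implementation ran on the MAP-CA mediaprocessor, not in Python.

```python
import numpy as np

def insert_frame(volume, frame, pose):
    """Forward-map one 2D US frame into the reconstruction volume.

    pose is a 4x4 homogeneous transform taking pixel coordinates
    (u, v, 0, 1) to voxel coordinates, as might be derived from a
    magnetic position sensor on the probe. Zero-order interpolation:
    each pixel is written to its nearest voxel.
    """
    h, w = frame.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u.ravel(), v.ravel(),
                    np.zeros(u.size), np.ones(u.size)])  # 4 x N homogeneous
    vox = np.round(pose @ pix).astype(int)[:3]
    inside = np.all((vox >= 0) & (vox < np.array(volume.shape)[:, None]), axis=0)
    volume[vox[0, inside], vox[1, inside], vox[2, inside]] = frame.ravel()[inside]
    return volume
```

Forward mapping visits each acquired pixel exactly once, which is what makes it amenable to the streaming DMA style of processing the abstract describes.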
Virtual Reality
Flat-panel cone-beam CT: a novel imaging technology for image-guided procedures
Jeffrey H. Siewerdsen, David A. Jaffray, Gregory K. Edmundson, et al.
The use of flat-panel imagers for cone-beam CT signals the emergence of an attractive technology for volumetric imaging. Recent investigations demonstrate volume images with high spatial resolution and soft-tissue visibility and point to a number of logistical characteristics (e.g., open geometry, volume acquisition in a single rotation about the patient, and separation of the imaging and patient support structures) that are attractive to a broad spectrum of applications. Considering application to image-guided (IG) procedures - specifically IG therapies - this paper examines the performance of flat-panel cone-beam CT in relation to numerous constraints and requirements, including time (i.e., speed of image acquisition), dose, and field-of-view. The imaging and guidance performance of a prototype flat panel cone-beam CT system is investigated through the construction of procedure-specific tasks that test the influence of image artifacts (e.g., x-ray scatter and beam-hardening) and volumetric imaging performance (e.g., 3D spatial resolution, noise, and contrast) - taking two specific examples in IG brachytherapy and IG vertebroplasty. For IG brachytherapy, a procedure-specific task is constructed which tests the performance of flat-panel cone-beam CT in measuring the volumetric distribution of Pd-103 permanent implant seeds in relation to neighboring bone and soft-tissue structures in a pelvis phantom. For IG interventional procedures, a procedure-specific task is constructed in the context of vertebroplasty performed on a cadaverized ovine spine, demonstrating the volumetric image quality in pre-, intra-, and post-therapeutic images of the region of interest and testing the performance of the system in measuring the volumetric distribution of bone cement (PMMA) relative to surrounding spinal anatomy. Each of these tasks highlights numerous promising and challenging aspects of flat-panel cone-beam CT applied to IG procedures.
Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom-fitted with two miniature color video cameras that capture a stereo view of the real-world scene. At this point we are concentrating specifically on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter.
The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
Toward a virtual environment for biomechanical simulation
Peter Zerfass, Erwin Keeve
In this paper we present an extendable framework for interactive biomechanical simulation and surgical planning. Three-dimensional reconstructions of patient-specific data are visualized, and the elastic properties of individual anatomical structures are modeled. To avoid interpenetration of the virtual objects, fast collision detection has been implemented. The kinematics are controlled by a special haptic interface that provides force feedback to the user. This ongoing work will lead to an entire system for physics-based biomechanical simulation and pre-operative visualization of surgical outcomes.
Simulation and virtual-reality visualization of blood hemodynamics: the virtual aneurysm
Daren Lee, Daniel J. Valentino, Gary R. Duckwiler M.D., et al.
Intracranial aneurysms are the primary cause of non-traumatic subarachnoid hemorrhage. Difficulties in identifying which aneurysms will grow and rupture arise because physicians lack important anatomic and hemodynamic information. Through simulation, these data can be captured, but visualization of large simulated data sets becomes cumbersome and tedious, often resulting in visual clutter and ambiguity. To address these visualization issues, we developed an automated algorithm that decomposes the patterns of 3D, unsteady blood flow into behavioral components to reduce the visual complexity while retaining the structure and information of the original data. Our structural approach analyzes sets of pathlines and groups them together based on spatial locality and shape similarity. Adaptive thresholding is used to refine each component grouping to obtain the largest and tightest cluster. These components can then be visualized individually or superimposed together to formulate a rich understanding of the flow patterns in the aneurysm.
Calibration of projection parameters in the varioscope AR, a head-mounted display for augmented-reality visualization in image-guided therapy
Wolfgang Birkfellner, Michael Figl, Klaus Huber, et al.
Computer-aided surgery (CAS), the intraoperative application of biomedical visualization techniques, appears to be one of the most promising fields of application for augmented reality (AR), the display of additional computer-generated graphics over a real-world scene. Typically a device such as a head-mounted display (HMD) is used for AR. However, considerable technical problems connected with AR have limited the intraoperative application of HMDs up to now. Difficulties in using HMDs include the requirement for a common optical focal plane for both the real-world scene and the computer-generated image, and acceptance of the HMD by the user in a surgical environment. In order to increase the clinical acceptance of AR, we have adapted the Varioscope (Life Optics, Vienna), a miniature, cost-effective head-mounted operating microscope, for AR. In this work, we present the basic design of the modified HMD, and the method and results of an extensive laboratory study for photogrammetric calibration of the Varioscope's computer displays to a real-world scene. In a series of sixteen calibrations with varying zoom factors and object distances, the mean calibration error was found to be 1.24 +/- 0.38 pixels, or 0.12 +/- 0.05 mm, for a 640 x 480 display. The maximum error was 3.33 +/- 1.04 pixels, or 0.33 +/- 0.12 mm. The location of a position measurement probe of an optical tracking system was transformed to the display with an error of less than 1 mm in the real world in 56% of all cases. For the remaining cases, the error was below 2 mm. We conclude that the accuracy achieved in our experiments is sufficient for a wide range of CAS applications.
Poster Session
Progressive fast volume rendering for medical images
Keun Ho Kim, Hyun Wook Park
There are various 3D visualization methods, such as volume rendering and surface rendering. Volume rendering (VR) is a useful tool for visualizing 3D medical images, but its large computational cost makes it difficult to use in real-time medical applications. To overcome this cost, we have developed a progressive VR (PVR) method that performs a low-resolution VR for fast, intuitive interaction and uses the depth information from the low-resolution VR to generate the full-resolution VR image in reduced computation time. The developed algorithm is applicable to real-time VR: the low-resolution VR is performed interactively as the view direction changes, and the full-resolution VR is performed once the view direction is fixed. In this paper, its computational complexity and image quality are analyzed. An extension of its progressive refinement is also introduced.
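The progressive idea — using low-resolution depth to shorten the full-resolution rays — can be sketched as follows. This is an orthographic, first-hit toy version with assumed names and a fixed safety margin; the paper's actual PVR pipeline is more elaborate.

```python
import numpy as np

def first_hit_depths(volume, threshold):
    """Depth of the first voxel above threshold along axis 2, per ray."""
    hit = volume > threshold
    depth = np.argmax(hit, axis=2)             # index of the first True
    depth[~hit.any(axis=2)] = volume.shape[2]  # rays that miss everything
    return depth

def progressive_render(volume, threshold, factor=2):
    """Low-res pass finds an approximate surface depth; the full-res
    pass starts each ray just before that depth instead of at the face."""
    low = volume[::factor, ::factor, ::factor]
    low_depth = first_hit_depths(low, threshold)
    # upsample to full resolution and add a safety margin of one low-res step
    start = np.kron(low_depth, np.ones((factor, factor), dtype=int)) * factor
    start = np.maximum(start - factor, 0)
    out = np.zeros(volume.shape[:2])
    for i in range(volume.shape[0]):
        for j in range(volume.shape[1]):
            for k in range(start[i, j], volume.shape[2]):
                if volume[i, j, k] > threshold:
                    out[i, j] = volume[i, j, k]  # stand-in for real shading
                    break
    return out
```

The full-resolution pass skips the empty space already traversed at low resolution, which is where the speedup comes from; the margin guards against depth errors introduced by the coarse sampling.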
Optimizing softcopy display of radiographic images acquired with a prototype flat-panel detector
Walter Huda, Kent M. Ogden, Ernest M. Scalzetti, et al.
This study optimized softcopy display of digital radiographic images acquired using a prototype flat-panel detector. Six look-up table (LUT) shapes were evaluated: linear, logarithmic, exponential, 1-exp, sigmoidal, and reverse sigmoidal. Representative digital radiographs covering five body regions (skull, neck, chest, knee, and foot) were reviewed on a monitor. Images were assessed on a scale of 1 to 10, with a score of 1 indicating an uninterpretable examination and a score of 10 indicating a perfect image. The difference between the final and initial image quality scores (Δ) corresponds to the improvement achievable by the selected LUT. Major improvements in image quality were achieved using LUTs of the 1-exp type. A comprehensive analysis was made of four versions of the 1-exp LUT applied to forty clinical images. One version of this 1-exp LUT produced the best achievable image quality in 95% of the cases (38/40), with an average Δ score of 3.4. These results demonstrate that a logarithmic-style LUT can significantly improve image quality in comparison to linear LUTs. Of particular importance was the fact that a single LUT achieved excellent image quality for a broad range of clinical radiographs.
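The LUT shapes compared in the study can be illustrated as follows. The gain parameter k and the exact normalizations are assumptions for illustration, not the study's LUT definitions; what matters is the qualitative shape of each curve.

```python
import numpy as np

def make_lut(kind, n=4096, k=4.0):
    """Illustrative display look-up tables over normalized input [0, 1].
    The '1-exp' shape rises steeply in the dark end and then saturates,
    similar in character to a logarithmic curve."""
    x = np.linspace(0.0, 1.0, n)
    if kind == "linear":
        y = x
    elif kind == "log":
        y = np.log1p(k * x) / np.log1p(k)
    elif kind == "exp":
        y = (np.exp(k * x) - 1.0) / (np.exp(k) - 1.0)
    elif kind == "1-exp":
        y = (1.0 - np.exp(-k * x)) / (1.0 - np.exp(-k))
    elif kind == "sigmoid":
        y = 1.0 / (1.0 + np.exp(-k * (2.0 * x - 1.0)))
        y = (y - y[0]) / (y[-1] - y[0])      # rescale output to [0, 1]
    else:
        raise ValueError(kind)
    return y
```

Applying a LUT to a normalized image is then a table lookup, e.g. `lut[(img * (len(lut) - 1)).astype(int)]`; the 1-exp and log shapes lift the mid and dark tones, which is consistent with the study's finding that they improved radiograph rendering over a linear LUT.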
Evaluation of accuracy in frame-based versus fiducial-based registration for stereotaxy in Parkinson's deep electrode implantation
Hamid Reza Abbasi M.D., Sanaz Hariri, Jeffrey Lee M.D., et al.
After several years of levodopa treatment, patients with Parkinson's disease (PD) can develop difficult-to-control motor fluctuations and levodopa-induced dyskinesias (LID). Surgical options for these medically intractable PD patients include deep nucleus lesioning and stimulation. Because it is adjustable and reversible, deep brain stimulation (DBS) is preferable to ablative procedures. Traditionally, frame-based stereotaxy has been used to register these patients during deep electrode implantation. This study investigated the accuracy of the less invasive frameless registration method in 9 patients and found an overall mean error of 1.9 mm (range: 1.1 mm to 2.7 mm) with an overall SD of 0.7 mm. This error range is not acceptable for the submillimeter precision needed in microelectrode implantation. The lab is currently investigating the accuracy of the frameless bone-screw marker method, which is still less invasive and less cumbersome than the frame-based system.
Three-dimensional visualization and navigation tool for diagnostic and surgical planning applications
Francesco Beltrame, Gianluca DeLeo, Marco Fato, et al.
This study aims at providing the radiologist and the surgeon with a diagnostic and planning tool. To this end, multimodal (T1-, T2-, and PD-weighted) sets of MR images representing a human head and a human knee without neoplastic formations were acquired. All the software was developed in C++ using the Open Graphics Library (OpenGL) and OpenGL Volumizer, and was tested on a Silicon Graphics O2 workstation. The medical user can rotate the volume under investigation about the x, y, and z axes, zoom in and out of the data, make cuts of the set of images in all directions, and display volume intersections with the three conventional anatomical planes. By enfolding the volume in a cube and moving its apexes, the user can dig into the volume. The surfaces of the anatomical districts can be visualized. The tool renders a composite volumetric image using a false-coloring technique, and it can combine morphological information of the surface with data about the nature of the volume by using the different distributions of pixel intensity levels. It is also possible to set transparency to obtain an image representing the 3D volume and its internal structure simultaneously. The tool can display surface information and volume information at the same time and provides an endo-navigation facility that helps the user move through an anatomical district to find the correct position of potential lesions and the way to remove them.
3DVIEWNIX-AVS: a software package for separate visualization of arteries and veins in CE-MRA images
Tianhu Lei, Jayaram K. Udupa, Dewey Odhner, et al.
Our earlier study developed a computerized method, based on fuzzy connected object delineation principles and algorithms, for artery and vein separation in CE-MRA images. This paper reports its current development into a software package for routine clinical use. The package, termed 3DVIEWNIX-AVS, consists of the following major operational parts: 1) converting data from DICOM3 to 3DVIEWNIX format, 2) previewing slices and creating the VOI and MIP shell, 3) segmenting vessels, 4) separating arteries and veins, and 5) shell-rendering vascular structures and creating animations. The package has been applied to EPIX Medical Inc.'s CE-MRA data (AngioMark MS-325): 133 original CE-MRA data sets (from 52 patients) from 6 hospitals have been processed. In all case studies, unified parameter settings produced correct artery/vein separation. The current package runs on a Pentium PC under Linux, and the total operation time per study is about 10 minutes. The strengths of this software package are its 1) minimal user interaction, 2) minimal requirement for anatomic knowledge of the human vascular system, 3) clinically required speed, 4) free entry to any operational stage, 5) reproducible, reliable, high-quality results, and 6) cost-effective computer implementation. To date, it appears to be the only software package (using an image processing approach) available for artery and vein separation in routine clinical use.
Acceptance testing for softcopy displays
Thomas Mertelmeier, Peter Scharl
We report on the German standardization activities for acceptance testing of medical softcopy displays. The goal is to assure the image quality of imaging systems in radiology, considering that the display device is part of an imaging system that also comprises the image acquisition system, the human visual system (HVS), and the ambient light conditions. We analyze the properties of the HVS with respect to softcopy reading. The contrast sensitivity gives a measure of the spatial resolution a display should aim for. The contrast ratio of maximum to minimum luminance is limited by the adaptation process in realistic, complex radiological images. Furthermore, the nonlinear behavior of the HVS requires establishing a certain display function to provide a tone scale that is approximately perceptually linear. These HVS properties have to be compared with typical electronic display parameters and with conventional film/screen images under realistic conditions. As a result, we arrive at recommendations for display size, spatial resolution, luminance, and contrast ratio to be fulfilled by display systems. These parameters, among others, are subject to acceptance testing. We describe the classification of displays into application categories, the test equipment, and the test procedures that can and shall be applied in clinical practice. In particular, we analyze the luminance measurement proposed by the acceptance testing standard.
Evaluation of viewing methods for magnetic resonance images
Oliver Kuederle, M. Stella Atkins, Kori M. Inkpen, et al.
Medical images are increasingly being examined on computer monitors. In contrast to the traditional film viewbox, the use of computer displays often involves a trade-off between the number and size of images shown and the available screen space. This paper focuses on two solutions to this problem: the thumbnail technique and the detail-in-context technique. The thumbnail technique, implemented in many current commercial medical imaging systems, presents an overview of the images in a thumbnail bar while selected images are magnified in a separate window. Our earlier work suggested the use of a detail-in-context technique which displays all images in one window utilizing multiple magnification levels. We conducted a controlled experiment to evaluate both techniques. No significant difference was found in performance or preference. However, differences were found in the interaction patterns and comments provided by the participants. The detail-in-context technique accommodated many individual strategies and offered good capabilities for comparing different images, whereas the thumbnail technique strongly encouraged sequential examination of the images and allowed for high magnification factors. Given the results of this study, our research suggests new alternatives to the presentation of medical images and provides an increased understanding of the usability of existing medical image viewing methods.
Software components for medical image visualization and surgical planning
Yves P. Starreveld, David G. Gobbi, Kirk Finnis, et al.
Purpose: The development of new applications in medical image visualization and surgical planning requires the completion of many common tasks such as image reading and re-sampling, segmentation, volume rendering, and surface display. Intra-operative use requires an interface to a tracking system and image registration, and the application requires basic, easy-to-understand user interface components. Rapid changes in computer and end-application hardware, as well as in operating systems and network environments, make it desirable to have a hardware- and operating-system-independent collection of reusable software components that can be assembled rapidly to prototype new applications. Methods: Using the OpenGL-based Visualization Toolkit as a base, we have developed a set of components that implement the above-mentioned tasks. The components are written in both C++ and Python, but all are accessible from Python, a byte-compiled scripting language. The components have been used on the Red Hat Linux, Silicon Graphics Irix, Microsoft Windows, and Apple OS X platforms. Rigorous object-oriented software design methods have been applied to ensure hardware independence and a standard application programming interface (API). There are components to acquire, display, and register images from MRI, MRA, CT, Computed Rotational Angiography (CRA), Digital Subtraction Angiography (DSA), 2D and 3D ultrasound, video, and physiological recordings. Interfaces to various tracking systems for intra-operative use have also been implemented. Results: The described components have been implemented and tested. To date they have been used to create image manipulation and viewing tools, a deep brain functional atlas, a 3D ultrasound acquisition and display platform, a prototype minimally invasive robotic coronary artery bypass graft planning system, a tracked neuro-endoscope guidance system, and a frame-based stereotaxy neurosurgery planning tool.
The frame-based stereotaxy module has been licensed and certified for use in a commercial image guidance system. Conclusions: It is feasible to encapsulate image manipulation and surgical guidance tasks in individual, reusable software modules. These modules allow for faster development of new applications. The strict application of object oriented software design methods allows individual components of such a system to make the transition from the research environment to a commercial one.
Real-time volume rendering of 4D image using 3D texture mapping
Jinwoo Hwang, June-Sic Kim, Jae Seok Kim, et al.
A four-dimensional image is 3D volume data that varies with time, used to express deforming or moving objects in virtual surgery or 4D ultrasound. It is difficult to render a 4D image by conventional ray-casting or shear-warp factorization methods because of their long rendering times or the pre-processing required whenever the volume data change. Even when 3D texture mapping is used, repeated volume loading is time-consuming in 4D image rendering. In this study, we propose a method that reduces data loading time by exploiting the coherence between the currently loaded volume and the previously loaded volume, in order to achieve real-time rendering based on 3D texture mapping. The volume data are divided into small bricks, and each brick to be loaded is tested for similarity to the one already in memory, so that only changed bricks need to be redefined as 3D textures by OpenGL functions. The texture slices of each brick are then mapped onto polygons and blended by OpenGL blending functions. All bricks undergo this test. Fifty continuously deforming volumes are rendered at interactive rates on an SGI ONYX. Real-time volume rendering based on 3D texture mapping is now also available on PCs.
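The brick-coherence test can be sketched as follows. The brick size, the max-absolute-difference criterion, and the tolerance are assumptions for illustration; in the actual system the bricks that fail the test would be re-uploaded as 3D textures through OpenGL calls.

```python
import numpy as np

def changed_bricks(prev, curr, brick=16, tol=0.0):
    """Return index triples of bricks whose content changed by more than
    tol between two time steps, i.e. the only bricks that must be
    redefined as 3D textures. Volume dimensions are assumed divisible
    by the brick size."""
    to_reload = []
    nx, ny, nz = (s // brick for s in curr.shape)
    for i in range(nx):
        for j in range(ny):
            for k in range(nz):
                sl = np.s_[i * brick:(i + 1) * brick,
                           j * brick:(j + 1) * brick,
                           k * brick:(k + 1) * brick]
                if np.max(np.abs(curr[sl] - prev[sl])) > tol:
                    to_reload.append((i, j, k))
    return to_reload
```

When consecutive volumes of a deforming sequence differ only locally, most bricks pass the test and are skipped, which is where the reduction in loading time comes from.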
3D space analysis of dental models
Joon Huang Chuah, Sim Heng Ong, Toshiaki Kondo, et al.
Space analysis is an important procedure used by orthodontists to determine the amount of space available and required for teeth alignment during treatment planning. Traditional manual methods of space analysis are tedious and often inaccurate. Computer-based space analysis methods that work on 2D images have been reported. However, as the space problems in the dental arch exist in all three planes of space, a full 3D analysis of the problems is necessary. This paper describes a visualization and measurement system that analyses 3D images of dental plaster models. Algorithms were developed to determine dental arches. The system is able to record the depths of the Curve of Spee, and quantify space liabilities arising from a non-planar Curve of Spee, malalignment, and overjet. Furthermore, the difference between the total arch space available and the space required to arrange the teeth in ideal occlusion can be accurately computed. The system for 3D space analysis of the dental arch is an accurate, comprehensive, rapid, and repeatable method of space analysis to facilitate proper orthodontic diagnosis and treatment planning.
Ubiquitous remote operation collaborative interface for MRI scanners
H. Douglas Morris
We have developed a remote-control interface for research-class magnetic resonance imaging (MRI) spectrometers. The goal of the interface is to provide a better collaborative environment for geographically dispersed researchers and a tool for teaching students of medical imaging in a network-based laboratory using state-of-the-art MR instrumentation that would not otherwise be available. The interface for the remote operator(s) is the now-ubiquitous web browser, chosen for the ease of controlling the operator interface, the display of both image and text information, and its wide availability on many computer platforms. The remote operator is presented with an active display in which most of the parameters of the MRI experiment may be selected and controlled. The MR parameters are relayed via the web browser to a CGI program running in a standard web server, which passes the parameters to the MRI manufacturer's control software. The data returned to the operator(s) consist of the parameters used in acquiring the image, a flat 8-bit grayscale GIF representation of the image, and a 16-bit grayscale image that can be viewed with an appropriate application. This interface should help researchers at regional and national facilities collaborate more closely with colleagues across their region, the nation, or the world, and medical imaging students can put much of their classroom discussion into practice on machinery that would not normally be available to them.
High-speed lossless compression for angiography image sequences
Jonathon M.T. Kennedy, Michael Simms, Emma Kearney, et al.
High speed processing of large amounts of data is a requirement for many diagnostic quality medical imaging applications. A demanding example is the acquisition, storage and display of image sequences in angiography. The functional performance requirements for handling angiography data were identified. A new lossless image compression algorithm was developed, implemented in C++ for the Intel Pentium/MS-Windows environment and optimized for speed of operation. Speeds of up to 6M pixels per second for compression and 12M pixels per second for decompression were measured. This represents an improvement of up to 400% over the next best high-performance algorithm (LOCO-I) without significant reduction in compression ratio. Performance tests were carried out at St. James's Hospital using actual angiography data. Results were compared with the lossless JPEG standard and other leading methods such as JPEG-LS (LOCO-I) and the lossless wavelet approach proposed for JPEG 2000. Our new algorithm represents a significant improvement in the performance of lossless image compression technology without using specialized hardware. It has been applied successfully to image sequence decompression at video rate for angiography, one of the most challenging application areas in medical imaging.
Visualization of time-varying MRI data for MS lesion analysis
Conventional methods to diagnose and follow the treatment of multiple sclerosis require radiologists and technicians to compare current images with older images of a particular patient on a slice-by-slice basis. Although there has been progress in creating 3D displays of medical images, little attempt has been made to design visual tools that emphasize change over time. We implemented several ideas that attempt to address this deficiency. In one approach, isosurfaces of segmented lesions at each time step were displayed either on the same image (each time step in a different color) or consecutively in an animation. In a second approach, voxel-wise differences between time steps were calculated and displayed statically using ray casting. Animation was used to show cumulative changes over time. Finally, in a method borrowed from computational fluid dynamics (CFD), glyphs (small arrow-like objects) were rendered with a surface model of the lesions to indicate changes at localized points.
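The second approach, a voxel-wise difference between time steps, can be sketched in a few lines. The volumes, threshold, and lesion values here are hypothetical stand-ins for registered MRI data:

```python
import numpy as np

def voxelwise_change(vol_t0, vol_t1, threshold=0.0):
    """Signed voxel-wise difference between two co-registered volumes;
    differences smaller than |threshold| are suppressed as noise."""
    diff = vol_t1.astype(np.float32) - vol_t0.astype(np.float32)
    diff[np.abs(diff) < threshold] = 0.0
    return diff

# Two hypothetical 4x4x4 scans of the same patient, already registered.
t0 = np.zeros((4, 4, 4), dtype=np.float32)
t1 = np.zeros((4, 4, 4), dtype=np.float32)
t1[1, 1, 1] = 5.0                      # one lesion voxel grew brighter
change = voxelwise_change(t0, t1, threshold=1.0)
```

The resulting signed map can then be fed to a ray caster, with positive and negative changes mapped to different colors.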
Method for in-field evaluation of the modulation transfer function of electronic display devices
Electronic medical display devices require routine monitoring of their resolution characteristics. This paper introduces a methodology for the assessment of resolution (MTF) characteristics of display devices using a photographic-grade CCD camera, and reports initial MTF measurements on a 5-megapixel medical CRT display device at three display resolution settings. The intrinsic performance of the camera was first evaluated. The camera was placed at a 2 cm distance from the display's faceplate and focused on the image plane. For each MTF assessment, the images of two test patterns were captured without moving the camera, each containing a single-pixel-wide, low-modulation (20% positive contrast) horizontal or vertical line. The orthogonal MTFs were deduced by 1) linearizing the data, 2) identifying the angle of the line transition within the images, 3) reprojecting the 2D data along that direction to determine the line spread function (LSF), and 4) Fourier transforming the LSF. The results demonstrate that the methodology produces reproducible results in the orthogonal directions and is sensitive to the resolution setting of the display device. The instrumentation needed for the method is portable and can easily be utilized in a clinical setting.
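Steps 1-4 of the MTF deduction end with a Fourier transform of the LSF. A minimal sketch of that last step, using a synthetic Gaussian LSF in place of measured camera data, might look like:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Normalize a line spread function to unit area and return the MTF
    as the magnitude of its Fourier transform, scaled to 1 at zero
    frequency."""
    lsf = np.asarray(lsf, dtype=np.float64)
    lsf = lsf / lsf.sum()                  # unit area
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]

# A Gaussian LSF (sigma = 3 pixels) as a stand-in for camera data.
x = np.arange(-32, 32)
lsf = np.exp(-x**2 / (2 * 3.0**2))
mtf = mtf_from_lsf(lsf)                    # MTF from DC up to Nyquist
```

A broader LSF (poorer focus or a coarser resolution setting) produces an MTF that falls off faster with spatial frequency, which is what makes the measurement sensitive to the display setting.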
Facial surgery simulation using finite element modeling
Chia-Hsiang Wu, Sheng-Jung Hu, Sheng-Che Lin, et al.
We designed a 3D facial surgical simulation system that predicts the patient's post-surgical appearance from his or her CT volume data. The general steps adopted in this system include data acquisition, image preprocessing, 3D reconstruction, simulation, and visualization. In order to predict surgical results well, we adopt finite element modeling to estimate the simulation outcome. For surgical simulation, we utilize an isoparametric hexahedron finite element model to represent the facial structure manipulated during the surgical operation. The isoparametric hexahedron element is a more flexible and accurate element type than the tetrahedron used for surgical simulation in other studies. Experimental results show that the proposed method is able to simulate facial surgery effectively.
Selective contrast enhancement of prostate ultrasound images using sticks with high-level information
Ultrasound image segmentation is challenging due to speckle, depth-dependent signal attenuation, low signal-to-noise ratio, and direction-dependent edge contrast. In addition, transrectal ultrasound (TRUS) prostate images are often corrupted by acoustic shadowing caused by calcifications, bowel gas, protein deposit artifacts, etc., making segmentation difficult. In such cases, traditional edge detection algorithms without adequate preprocessing have limited success. The original sticks algorithm reduces speckle while enhancing contrast. It assumes that, in a pixel neighborhood, reflectors of different orientations with respect to the incident ultrasound beam are equally likely, which is not the case in practice. Even though some variations of the original sticks algorithm estimate prior probabilities from the image or from the imaging process, no high-level information about the geometry of the object of interest is utilized. As a result, non-prostate structures and the true boundaries are equally enhanced. This paper presents an extension to the original sticks algorithm that incorporates high-level knowledge of prostate shape to selectively enhance the prostate edge contrast while suppressing non-prostate structures. Results show that this extension preserves the prostate boundaries while providing superior noise reduction, especially in the interior prostate region, which can lead to more accurate segmentation of the prostate.
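For reference, the original sticks algorithm that the paper extends can be sketched as follows: each pixel is replaced by the maximum of short line averages ("sticks") at several orientations, which enhances line-like edges while averaging out speckle. The stick length, the four orientations, and the edge padding below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def sticks_filter(img, length=5):
    """Classic sticks filter sketch: average the image along short line
    segments at 4 orientations through each pixel and keep the maximum
    response per pixel."""
    h = length // 2
    # Pixel offsets for sticks at 0, 90, 45, and 135 degrees.
    orientations = [
        [(0, d) for d in range(-h, h + 1)],   # horizontal
        [(d, 0) for d in range(-h, h + 1)],   # vertical
        [(d, d) for d in range(-h, h + 1)],   # diagonal
        [(d, -d) for d in range(-h, h + 1)],  # anti-diagonal
    ]
    padded = np.pad(img.astype(np.float64), h, mode="edge")
    best = np.full(img.shape, -np.inf)
    for offs in orientations:
        acc = np.zeros(img.shape)
        for dy, dx in offs:                   # shift-and-add the stick
            acc += padded[h + dy : h + dy + img.shape[0],
                          h + dx : h + dx + img.shape[1]]
        best = np.maximum(best, acc / length)
    return best

# A bright horizontal edge survives at full strength; background stays flat.
img = np.zeros((9, 9))
img[4, :] = 1.0
out = sticks_filter(img, length=5)
```

The paper's extension would additionally weight the stick responses using a prostate shape model, so that only edges consistent with the expected boundary geometry are enhanced.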
Probability of cancer detection with optimized prostate biopsy protocols
Jianchao Zeng, Ariela Sofer, John J. Bauer, et al.
What is the maximal probability that a physician can detect a prostate cancer, given that it is there? This research explores this issue from a statistical point of view and evaluates the theoretical results experimentally. We have collected 300 prostate specimens, each with clinically localized cancer, and have reconstructed 300 computerized 3D prostate models from them. What we present here is an innovative study recently completed using these 300 3D prostate models. First, a 3D prostate cancer distribution atlas was built by mapping the 300 individual prostate models. Optimal biopsy protocols were then developed from the 3D cancer distribution atlas using nonlinear optimization techniques. This gives us, in theory, the maximal possible detection rate of prostate cancer with optimized biopsy. Finally, to evaluate the developed optimal biopsy protocols experimentally, a new generation of image-guided prostate biopsy system is being developed by dynamically fusing the optimal biopsy protocols and the 3D cancer atlas with the ultrasound images during in vivo biopsies. A physician performing needle biopsy on a live patient using this image-guided prostate biopsy system will have a significantly improved understanding of where the biopsies should be placed, leading to improved performance in terms of cancer detection.
Accurate measurements in volume data
Javier Olivan, Marco K. Bosma, Jaap Smit
An algorithm for very accurate visualization of an iso-surface in a 3D medical dataset has been developed over the past few years. This technique is extended in this paper to several kinds of measurements, in which exact geometric information of a selected iso-surface is used to derive volume, length, curvature, connectivity, and similar geometric information from an object of interest. The actual measurement tool described in this paper is fully interactive. The highly accurate iso-surface volume-rendering algorithm is used to describe the actual measurement that should be performed. For instance, objects whose volumes should be calculated, or paths whose lengths should be calculated, can be selected at sub-voxel resolution. Ratios of these quantities can be used to automatically detect anomalies in the human body with a high degree of confidence. The actual measurement tool uses a polygon-based algorithm that can distinguish object connectivity at sub-voxel resolution, in exactly the same manner as the iso-surface algorithm. Segmentation based on iso-surface geometrical topology can be performed at this point. The combination of the iso-surface volume-rendering algorithm and the polygon-based algorithm makes it possible to achieve both visual interaction with the dataset and highly accurate measurements. We believe that the proposed method contributes to the integration of visual and geometric information and is helpful in clinical diagnosis.
Multiprocessor iso-surface volume rendering
Mark J.S. van Doesburg, Jaap Smit
The rendering of iso-surfaces in a scalar 3D dataset can be performed with a new algorithm, called iso-surface volume rendering. This algorithm introduces neither sampling artifacts nor artifacts due to triangularization. The risk of skipping very small details through insufficient re-sampling is also eliminated. Another advantage is its speed compared to conventional volume rendering: so far we have achieved speeds on the order of ten frames per second on advanced CPUs. The multiprocessor implementation of this new algorithm divides the voxel data into multiple cubes, which are the basis for distributing the workload onto several processors. A scheduler process performs the distribution of the workload. During the distribution, the scheduler also eliminates the need to render invisible parts of the dataset, reducing the part of the dataset that must be processed to one third of the original for typical applications. Another major advantage of the scheduling algorithm is that the communication overhead is reduced by a factor of ten to twenty, which allows for the efficient use of many processors.
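The scheduler's culling can be illustrated by a block-classification step: a cube can only contain part of the iso-surface if the iso-value lies within its value range, so cubes entirely above or below the iso-value need never be dispatched. This sketch (cube size, volume, and data layout are arbitrary) shows that idea, not the paper's actual scheduler:

```python
import numpy as np

def visible_cubes(volume, iso, cube=8):
    """Partition a volume into cube-sized blocks and return the origins
    of blocks whose value range straddles the iso-value; all other
    blocks cannot intersect the iso-surface and are skipped."""
    jobs = []
    zs, ys, xs = volume.shape
    for z in range(0, zs, cube):
        for y in range(0, ys, cube):
            for x in range(0, xs, cube):
                block = volume[z:z + cube, y:y + cube, x:x + cube]
                if block.min() <= iso <= block.max():
                    jobs.append((z, y, x))   # dispatch to a worker
    return jobs

# One bright voxel in an otherwise empty 16^3 volume: only the block
# containing it straddles the iso-value and is scheduled.
vol = np.zeros((16, 16, 16))
vol[0, 0, 0] = 10.0
jobs = visible_cubes(vol, iso=5.0, cube=8)
```

The surviving block origins form the work queue that the scheduler distributes across processors.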
Luminance response calibration using multiple display channels
Michael J. Flynn, Kenneth D. Compton, Aldo Badano
The display of monochrome medical images requires that luminance versus display value be calibrated to provide good contrast at all brightness levels. Industry standards provide a specific grayscale curve for the calibration of display devices. The grayscale standard is derived from human perception data and is significantly different from the intrinsic luminance response of a cathode ray tube. Accurate calibration of a device usually requires that the display values be transformed and converted to a video signal using a 10- or 12-bit digital-to-analogue converter (DAC). We have developed an alternative method that uses the three 8-bit channels of a color graphics controller to achieve precise calibration. The method is demonstrated using a monitor with a video circuit that combines the red, green, and blue video signals with unequal weighting to form a monochrome video signal. Calibration is achieved by modifying the blue channel (10% video influence). Performance equivalent to an 11-bit DAC is demonstrated. The method is applicable to flat panel display devices.
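The claimed 11-bit equivalence can be checked arithmetically: with red and green carrying the coarse gray value (90% combined weight) and blue contributing 10%, the combined monochrome signal takes on far more distinct levels than a single 8-bit channel. The weights below follow the 10% figure quoted in the abstract; everything else is illustrative:

```python
def mono_signal(gray, blue, blue_weight=0.10):
    """Effective monochrome video level (full scale 0..255) when red and
    green both carry the coarse value `gray` and the blue channel, with
    10% weight, supplies fine sub-steps between coarse levels."""
    return (1.0 - blue_weight) * gray + blue_weight * blue

def distinct_levels():
    # Count distinct achievable levels with exact integer arithmetic:
    # 0.9*g + 0.1*b is a multiple of 0.1, i.e. (9*g + b) / 10.
    return len({9 * g + b for g in range(256) for b in range(256)})

n = distinct_levels()   # 2551 levels, a little over 11 bits
```

2551 distinct levels sits between 2^11 = 2048 and 2^12 = 4096, consistent with the reported "equivalent to an 11-bit DAC" performance.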
Three-dimensional rendering in medicine: some common misconceptions
As seen in the medical imaging literature and in the poster presentations at the annual conference of the Radiological Society of North America during the past 10 years, several misconceptions are held relating to 3D rendering of medical images. The purpose of this presentation is to illustrate and clarify these with medical examples. Most of the misconceptions have to do with a mix-up of issues related to the common visualization techniques, viz., surface rendering (SR) and volume rendering (VR), and methods of image segmentation. In our survey, we came across the following commonly held conceptions, which we believe (and shall demonstrate) are not correct: (1) SR is equated to thresholding. (2) VR is considered not to require segmentation. (3) VR is considered to achieve higher resolution than SR. (4) SR/VR are considered to require specialized hardware to achieve adequate speed. We shall briefly define and establish some fundamental terms to obviate any potential for terminology-related misconceptions. Subsequently, we shall sort out these issues and illustrate with examples why the above conceptions are incorrect. There are many SR methods that use segmentations far superior to thresholding. All VR techniques (except the straightforward MIP) require some form of fuzzy object specification, that is, fuzzy segmentation. The details seen in renditions depend fundamentally not only on the rendering method but also on the segmentation technique. There are fast software-based rendering methods that give performance on PCs similar to or exceeding that of expensive hardware systems. Most of the difficulties encountered in visualization (and also in image processing and analysis) stem from the difficulties in segmentation. It is important to identify these and separate them from the issues related purely to 3D rendering.
Performance analysis of algorithms for retrieval of magnetic resonance images for interactive teleradiology
M. Stella Atkins, Robert Hwang, Simon Tang
We have implemented a prototype system consisting of a Java-based image viewer and a web server extension component for transmitting magnetic resonance images (MRI) to an image viewer, in order to test the performance of different image retrieval techniques. We used full-resolution images, and images compressed/decompressed using the Set Partitioning in Hierarchical Trees (SPIHT) image compression algorithm. We examined the SPIHT decompression algorithm using both non-progressive and progressive transmission, focusing on the running times of the algorithm, client memory usage, and garbage collection. We also compared the Java implementation with a native C++ implementation of the non-progressive SPIHT decompression variant. Our performance measurements showed that for uncompressed image retrieval over 10 Mbps Ethernet, a film of 16 MR images can be retrieved and displayed almost within interactive times. The native C++ implementation of the client-side decoder is twice as fast as the Java decoder. If the network bandwidth is low, the high communication time for retrieving uncompressed images may be reduced by the use of SPIHT-compressed images, although the image quality is then degraded. To provide diagnostic-quality images, we also investigated the retrieval of up to 3 images on an MR film at full resolution, using progressive SPIHT decompression. The Java-based implementation of progressive decompression performed badly, mainly due to the memory requirements for maintaining the image states and the high cost of executing the Java garbage collector. Hence, in systems where the bandwidth is high, such as a hospital intranet, SPIHT image compression does not provide advantages for image retrieval performance.
Model-based multiconstrained integration of invasive electrophysiology with other modalities
Following recent developments, most brain imaging modalities (MR, CT, SPECT, PET) can nowadays be registered and integrated in a manner almost simple enough for routine use. By design, though, these modalities are still not able to match the principles and near real-time capabilities of the much simpler (but lower spatial resolution) EEG; hence the need to integrate it as well, along with - for some patients - the more accurate invasive electrophysiology measurements taken in direct contact with brain structures. A standard control CT (or MR) is routinely performed after the implantation of invasive electrodes. After registration with the other modalities, the initial estimates of the electrodes' locations extracted from the CT (or MR) are iteratively improved by using a geometrical model of the electrodes' arrangement (grids, strips, etc.) and other optional constraints (morphology, etc.). Unlike the direct 3D pointing of each electrode in the surgical suite - which can still act as a complementary approach - this technique estimates the most likely location of the electrodes during monitoring and can also deal with non-cortical arrangements (internal strips, depth electrodes, etc.). Although not always applicable to normal volunteers because of its invasive components, this integration further opens the door towards an improved understanding of a very complex biological system.
Web-based home telemedicine system for orthopedics
Christopher Lau, Sean Churchill, Janice Kim, et al.
Traditionally, telemedicine systems have been designed to improve access to care by allowing physicians to consult a specialist about a case without sending the patient to another location, which may be difficult or time-consuming to reach. The cost of the equipment and network bandwidth needed for this consultation has restricted telemedicine use to contact between physicians rather than between patients and physicians. Recently, however, the wide availability of Internet connectivity and of client and server software for e-mail, the world wide web, and conferencing has made low-cost telemedicine applications feasible. In this work, we present a web-based system for asynchronous multimedia messaging between shoulder replacement surgery patients at home and their surgeons. A web browser plug-in was developed to simplify the process of capturing video and transferring it to a web site. The video capture plug-in can be used as a template to construct a plug-in that captures and transfers any type of data to a web server. For example, readings from home biosensor instruments (e.g., blood glucose meters and spirometers) that can be connected to a computing platform can be transferred to a home telemedicine web site. Both patients and doctors can access this web site to monitor progress longitudinally. The system has been tested with 3 subjects for the past 7 weeks, and we plan to continue testing for the foreseeable future.
Programmable ultrasound scan conversion on a media-processor-based system
Siddhartha Sikdar, Ravi Managuli, Tsuyoshi Mitake, et al.
Scan conversion is an important ultrasonic processing stage that maps the acquired polar-coordinate data to Cartesian coordinates for display. This requires computationally expensive square root and arctangent calculations for the geometric transformation. Previously, we developed an algorithm for implementing scan conversion for gray-scale images using pre-computed lookup tables. In a clinical setting, however, interactive changes of scan conversion parameters, e.g., zoom and sector angle, require these tables to be recomputed often. In this paper, we describe a fast lookup table generation algorithm and its implementation on Hitachi/Equator's MAP-CA mediaprocessor architecture. In addition, we have extended the gray-scale scan conversion algorithm to color images, which requires interpolation between angular data. For a 600x420 output image, gray-scale scan conversion takes 12 ms while color scan conversion takes 20.3 ms on a 300 MHz MAP-CA. Interactive parameter changes take 102.5 ms for table regeneration. We believe that this high performance is an important step towards making software-based programmable ultrasound systems using mediaprocessors a reality. Such a system would provide more flexibility and better cost/performance than existing hardwired solutions.
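A lookup-table approach like the one described can be sketched as follows: for each Cartesian output pixel, the polar radius and angle are computed once (the expensive sqrt/atan2 work) and the index of the nearest polar sample is stored; the per-frame conversion then reduces to table lookups. The geometry (apex at top-center, nearest-neighbor mapping) and all parameters are illustrative, not the paper's MAP-CA implementation:

```python
import math

def build_scan_lut(width, height, num_beams, samples_per_beam,
                   sector_deg=90.0, radius=None):
    """Precompute, for every output pixel, the flat index of the nearest
    polar sample (beam-major order), or -1 outside the sector."""
    radius = radius or height
    half = math.radians(sector_deg) / 2.0
    lut = []
    for y in range(height):
        for x in range(width):
            dx, dy = x - width / 2.0, float(y)
            r = math.hypot(dx, dy)                  # sqrt, done once
            theta = math.atan2(dx, dy) if r > 0 else 0.0
            if r >= radius or abs(theta) > half:
                lut.append(-1)                      # outside the sector
                continue
            beam = int((theta + half) / (2 * half) * (num_beams - 1) + 0.5)
            samp = int(r / radius * (samples_per_beam - 1) + 0.5)
            lut.append(beam * samples_per_beam + samp)
    return lut

def scan_convert(polar, lut, background=0):
    # Per-frame work is pure table lookup: no sqrt or atan2 needed.
    return [polar[i] if i >= 0 else background for i in lut]

lut = build_scan_lut(4, 4, num_beams=8, samples_per_beam=16)
```

Interactive parameter changes (zoom, sector angle) only require regenerating the table, which is the step the paper accelerates.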
Teleconsultation in MR-guided neurosurgery
Keyvan Farahani, Gregory Rubino M.D., Pablo Villablanca M.D., et al.
MR-guided neurosurgery may offer greater accuracy in surgical localization and resection of brain tumors. Proper utilization of MRI for surgical guidance requires real-time consultation with a neuroradiologist during typically lengthy procedures. We sought to build a system that allows fast and efficient teleradiologic consultation in the MR surgical suite. We modeled the imaging tasks and data flow for representative MR-guided neurosurgical procedures. Customized viewing modes, which associate specific data and tasks, were designed accordingly. We implemented application sharing in order to allow teleconsultation between the surgeon and a remotely located radiologist during a case. The system described here provides an effective and efficient method for expert teleconsultation during MR-guided neurosurgery.
Multimodality image integration for radiotherapy treatment: an easy approach
Andres Santos, Javier Pascau, Manuel Desco, et al.
The value of using combined MR and CT information for radiotherapy planning is well documented. However, many planning workstations do not allow the use of MR images, nor the import of predefined contours. This paper presents a simple new approach for transferring segmentation results from MRI to a CT image that will be used for radiotherapy planning, using the same original CT format. CT and MRI images of the same anatomical area are registered using a mutual information (MI) algorithm. Targets and organs at risk are segmented by the physician on the MR image, where their contours are easy to track. Locally developed software running on a PC is used for this step, with several facilities for the segmentation process. The result is transferred onto the CT by slightly modifying, up and down, the original Hounsfield values of some points of the contour. This is enough to visualize the contour on the CT, but does not affect dose calculations. The CT is then stored using the original file format of the radiotherapy planning workstation, where the technician uses the segmented contour to design the correct beam positioning. The described method has been tested on five patients. Simulations and patient results show that the dose distribution is not affected by the small modification of pixels of the CT image, while the segmented structures can be tracked on the radiotherapy planning workstation using adequate window/level settings. The presence of the physician is not required at the planning workstation; he or she can perform the segmentation process on his or her own PC. This new approach makes it possible to take advantage of the anatomical information present in the MRI and to transfer the segmentation to the CT used for planning, even when the planning workstation does not allow importing external contours. The physician can draw the limits of the target and areas at risk off-line, thus separating the segmentation and planning tasks in time and increasing efficiency.
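The contour-embedding trick can be sketched as follows: contour pixels have their Hounsfield values nudged alternately up and down by a small delta, so the outline becomes visible at a narrow window/level setting while the mean tissue value (and hence the dose calculation) is essentially unchanged. The alternating scheme and the size of delta are illustrative assumptions, not the paper's exact recipe:

```python
def embed_contour(ct_slice, contour, delta=1):
    """Return a copy of a CT slice (list of rows of Hounsfield values)
    with the pixels along `contour` (a list of (row, col) pairs) nudged
    alternately up and down by `delta` HU."""
    out = [row[:] for row in ct_slice]        # copy, leave input intact
    for k, (r, c) in enumerate(contour):
        out[r][c] += delta if k % 2 == 0 else -delta
    return out

# A uniform 3x3 patch of soft tissue with a two-pixel contour segment.
ct = [[100] * 3 for _ in range(3)]
marked = embed_contour(ct, [(0, 0), (0, 1)])
```

Because consecutive contour pixels alternate +delta/-delta, a narrow window reveals the outline as a dotted line, while averaging over any small neighborhood recovers the original values to within delta.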
Real-time freehand 3D ultrasound system for clinical applications
Jacqueline Nerney Welch, Jeremy A. Johnson, Michael R. Bax, et al.
The goal of the Image Guidance Laboratories (IGL) is to provide the highest-quality, real-time visualization of 3D ultrasound images in a manner that is most acceptable and suited for adoption by surgeons. To this end, IGL has developed an optically tracked, freehand, 3D volume-rendering ultrasound system for image-guided surgery. Other systems temporally separate frame acquisition from volume construction and display; the data must be stored before being loaded into the volume construction engine for visualization. By incorporating novel methods to reduce the computational expense associated with frame insertion, volume maintenance, and 2D texture-based rendering, the IGL system is able to simultaneously acquire and display 3D ultrasound data. The work presented here focuses on methods unique to achieving near real-time 3D visualization using 2D ultrasound images and discusses the potential of this system to address clinical situations such as liver resection, tumor ablation, and breast biopsy.