
Proceedings Paper

Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom
Author(s): Calvin R. Maurer; Frank Sauer; Bo Hu; Benedicte Bascle; Bernhard Geiger; Fabian Wenzel; Filippo Recchi; Torsten Rohlfing; Christopher R. Brown; Robert J. Bakos; Robert J. Maciunas; Ali R. Bani-Hashemi

Paper Abstract

We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears an HMD that presents the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are currently concentrating on cranial neurosurgery, so the images are of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD determines where anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) are overlaid on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. With shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. With wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among the various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user viewing the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter.
The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
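The overlay step the abstract describes (using the tracked HMD pose to place preoperatively segmented structures in the live video view) amounts to a standard pinhole camera projection: transform a point from patient coordinates into the camera frame via the tracked pose, then project with the camera intrinsics. A minimal sketch of that projection, where the intrinsics `K` and pose `(R, t)` are hypothetical illustration values, not figures from the paper:

```python
import numpy as np

def project_point(K, R, t, p_world):
    """Project a 3D point in world/patient coordinates into pixel
    coordinates for a camera with intrinsics K and pose (R, t)."""
    p_cam = R @ p_world + t          # world -> camera coordinates
    u, v, w = K @ p_cam              # perspective projection
    return np.array([u / w, v / w])  # divide by depth -> pixels

# Hypothetical intrinsics and pose, for illustration only
K = np.array([[800.0,   0.0, 320.0],   # focal lengths and
              [  0.0, 800.0, 240.0],   # principal point (pixels)
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # camera axes aligned with world
t = np.array([0.0, 0.0, 0.5])          # origin 0.5 m in front of camera

print(project_point(K, R, t, np.array([0.0, 0.0, 0.0])))  # [320. 240.]
```

A point on the optical axis lands on the principal point, as expected. In the actual system this projection would be evaluated per eye (each of the two stereo cameras has its own intrinsics and pose) for every vertex of the rendered anatomic models, each video frame.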

Paper Details

Date Published: 28 May 2001
PDF: 12 pages
Proc. SPIE 4319, Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures, (28 May 2001); doi: 10.1117/12.428086
Author Affiliations:
Calvin R. Maurer, Univ. of Rochester (United States)
Frank Sauer, Siemens Corporate Research, Inc. (United States)
Bo Hu, Univ. of Rochester (United States)
Benedicte Bascle, Siemens Corporate Research, Inc. (United States)
Bernhard Geiger, Siemens Corporate Research, Inc. (Germany)
Fabian Wenzel, Siemens Corporate Research, Inc. (United States)
Filippo Recchi, Univ. of Rochester (United States)
Torsten Rohlfing, Univ. of Rochester (United States)
Christopher R. Brown, Univ. of Rochester (United States)
Robert J. Bakos, Univ. of Rochester (United States)
Robert J. Maciunas, Univ. of Rochester (United States)
Ali R. Bani-Hashemi, Siemens Corporate Research, Inc. (United States)


Published in SPIE Proceedings Vol. 4319:
Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures
Seong Ki Mun, Editor(s)

© SPIE.