
Biomedical Optics & Medical Imaging

Explore in 3D: a new virtual image navigation tool

A novel system using a 3D handheld navigation device allows users to explore volumetric imaging data.
1 August 2006, SPIE Newsroom. DOI: 10.1117/2.1200607.0222

Volumetric imaging, such as multi-detector computed tomography (MDCT), provides increased anatomical detail over single 2D projections or slices. However, it adds complexity to the diagnostic process by generating large amounts of image data to review. Several techniques have been developed for visualizing such image data.1 Unfortunately, little effort has been spent on how users interact with the image display. More sophisticated navigation tools are required to solve the growing problem of image-data overload2,3 and to better exploit the diagnostic potential of volumetric imaging.

Common display solutions use a keyboard and a mouse or trackball as input devices. These are 2D devices and are limited when it comes to exploring 3D data. An alternative is to use virtual reality (VR) techniques, which give the user more degrees of freedom to explore image data in a semi- or fully-immersive environment (e.g. for surgical planning4). However, such systems are still rarely used in routine clinical settings: they are expensive, they often require time-consuming preparation, and, most importantly, they have a steep learning curve caused by unfamiliar user interfaces. The system described here, called ‘virtusMED’,5,6 combines conventional, and thus familiar, desktop-based image viewing with novel 3D input technology. It is based on relatively inexpensive hardware and software, and image data can be viewed instantly without any manual preprocessing.

The basic idea of virtusMED is to mimic an ultrasound examination, in which slice images of the patient's body are generated with a handheld probe. In Figure 1 the user explores a virtual volume composed of CT or MRI data. Figure 1 (left) and Figure 2(a) show the full volume data set (the reference volume) together with arbitrarily positioned slice images and optional cutting planes. Figure 1 (right) and Figure 2(b) show the 2D view with one selected slice image. Instead of the traditional 2D computer mouse, the system uses a handheld 3D mouse: its position and orientation in space define the position and orientation of either a slice image (Figure 3) or the whole virtual volume (Figure 4). The system is based on standard PC technology, with the 3D mouse built around an electromagnetic three-dimensional motion-tracking sensor. Lying on the desk, the 3D mouse functions like a standard mouse and allows restricted navigation (e.g. parallel movement of a slice, or rotation of the virtual volume around two axes).
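The core mapping, from the tracked pose of the 3D mouse to an oblique slice through the volume, can be sketched as follows. This is a minimal numpy illustration under stated assumptions (nearest-neighbour sampling, voxel coordinates); the function and parameter names are hypothetical and not taken from the virtusMED code.

```python
import numpy as np

def extract_slice(volume, position, rotation, size=64, spacing=1.0):
    """Sample an oblique slice from a volume, given a tracked 6-DOF pose.

    position: (3,) slice centre in voxel coordinates
    rotation: (3, 3) orientation matrix from the tracker; columns 0 and 1
              span the slice plane (the probe's in-plane axes)
    """
    # Regular grid of in-plane coordinates, centred on the probe position.
    half = (size - 1) / 2.0
    u, v = np.meshgrid(np.arange(size) - half, np.arange(size) - half)
    # Map plane coordinates into volume space via the probe orientation.
    pts = (position[:, None]
           + rotation[:, 0][:, None] * (u.ravel() * spacing)
           + rotation[:, 1][:, None] * (v.ravel() * spacing))
    # Nearest-neighbour lookup, clamped at the volume border.
    idx = np.clip(np.rint(pts).astype(int),
                  0, np.array(volume.shape)[:, None] - 1)
    return volume[idx[0], idx[1], idx[2]].reshape(size, size)
```

Rotating the hand simply changes `rotation`, which is how a para-axial slice turns into a parasagittal one; in the restricted desktop mode, the same mapping applies with the pose constrained to planar motion.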


Figure 1. The principle of virtusMED: the user explores a volumetric image dataset using a 3D mouse for slice-based navigation through a virtual volume.
 

Figure 2. Exploration of an MRI head data set: a) 3D view showing the reference volume and both a para-axial and a paracoronal slice, b) 2D view showing only the paracoronal slice, c) 3D view showing a paracoronal slab instead of the slice, visualized with maximum intensity projection (MIP), d) 2D (i.e. frontal) view of the slab.
 

Figure 3. An example of adjusting a slice: a) a para-axial slice has been defined, b-d) an additional parasagittal slice is created by intuitively rotating the hand.
 

Figure 4. Relocating the virtual volume with intuitive hand movements to achieve the desired view: a) rotation, b) translation.
 

The volume data set is visualized with a volume rendering technique (VRT), using a threshold-based opacity map. Slices are computed using a multiplanar reformatting (MPR) technique. In addition to slices, slabs with arbitrary thickness can be visualized either with VRT or with maximum, minimum or average intensity projection: see Figures 2(c) and 2(d).
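The slab projections and the threshold-based opacity map mentioned above are straightforward to express; the following numpy sketch shows both, with hypothetical names rather than the actual virtusMED implementation.

```python
import numpy as np

def project_slab(slab, mode="mip"):
    """Collapse a slab, i.e. a stack of parallel reformatted slices,
    into a single 2D image.

    slab: array of shape (n_slices, height, width)
    mode: 'mip' (maximum), 'minip' (minimum) or 'average' intensity
          projection, as in Figures 2(c) and 2(d)
    """
    if mode == "mip":
        return slab.max(axis=0)    # brightest voxel along the thickness
    if mode == "minip":
        return slab.min(axis=0)    # darkest voxel along the thickness
    if mode == "average":
        return slab.mean(axis=0)   # mean intensity along the thickness
    raise ValueError(f"unknown projection mode: {mode!r}")

def threshold_opacity(intensities, threshold):
    """Threshold-based opacity map for volume rendering: voxels below
    the threshold are fully transparent, the rest fully opaque."""
    return np.where(intensities < threshold, 0.0, 1.0)
```

A slab of thickness n can be built by sampling n parallel slices along the plane normal and stacking them; `project_slab` then reduces the stack to the 2D image shown in the frontal view.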

Several radiologists and surgeons have tested the new system. They found it to be an intuitive navigation tool that allows easy selection of diagnostically relevant (oblique) reformatted slices or slabs and of appropriate 3D views. It was considered particularly helpful for assessing anatomical structures that do not lie conveniently in orthogonal planes, for clinical demonstrations, and for pre-operative planning.

We believe virtusMED fills the gap between conventional solutions, which are limited by their 2D input devices, and complex, expensive immersive virtual-reality systems. A crucial property of the system is that it enables real-time navigation without any time-consuming or technically demanding pre-processing. In particular, the ability to create oblique MPR images or slabs ‘on-the-fly’, with intuitive hand movements, has great potential to increase the practical diagnostic value of high-resolution volumetric images. Further work is needed to assess the actual clinical value and the significance of user fatigue during 3D interaction. While the examples here relate to clinical CT and MRI, the approach has also been applied to other kinds of volumetric images, such as those acquired by 3D ultrasound or confocal microscopy, and those from the National Library of Medicine's Visible Human Project.

This work has progressed thanks to many fruitful discussions, in particular with radiologists and surgeons. Special thanks go to D. P. Pretschner, O. J. Bott, J. Dormeier, T. Lison (Technical University Braunschweig, Germany), A. Aziz, W. L. Nowinski (Singapore BioImaging Consortium), R. S. Breiman (University of California, San Francisco), J. A. Brunberg (University of California, Davis), K. Dresing (University Hospital, Goettingen, Germany), T. Pohlemann (University Hospital Homburg, Germany), and Y. Rado (University Hospital Duesseldorf, Germany) for supporting this work and providing critical feedback.


Author
Michael Teistler
Biomedical Imaging Lab, Agency for Science, Technology and Research (A*STAR)
Singapore
 
Dr. Teistler is a research fellow at the Biomedical Imaging Lab of the Agency for Science, Technology and Research (A*STAR), Singapore. He was previously with the Institute for Medical Informatics at the University Braunschweig, Germany. His research interests include medical informatics, imaging, human-computer interaction, virtual reality and visualization.