
Biomedical Optics & Medical Imaging

Imaging fusion aids brain surgeons

From OE Reports Number 193 - January 2000
31 January 2000, SPIE Newsroom. DOI: 10.1117/2.6200001.0001

Figure 1. MIT's neuro-imaging system overlays, tracks, and registers 3D models of internal brain structures onto real-world video feeds taken during the operation.

What doctor wouldn't trade his or her stethoscope for a pair of glasses that could look inside the human body? Imaging and noninvasive diagnostic techniques make up one of the fastest growing commercial segments of the medical industry. Scientists and engineers are conducting significant research on melding advanced imaging modalities with surgical operations to make surgeons more efficient and cost effective while easing the patient's trauma.

A system under development at MIT in conjunction with Brigham and Women's Hospital brings together 3D accelerated graphics, a Sun workstation, a laser scanner/registration system, and video cameras to register a 3D model generated by Magnetic Resonance Imaging (MRI) with real-time video of a patient's head. Using a special probe, the surgeon can locate, touch, manipulate, and view internal structures of the brain without having to expose the region, which reduces patient trauma and speeds the surgical operation. The intraoperative navigational system has undergone more than a hundred trials during neurosurgery, helping dozens of children and adults to escape epileptic seizures when drug therapies failed to ease their suffering.

Typically, the surgery is conducted in two steps. First, an electrode grid is placed directly onto the brain to monitor which areas of the organ are acting abnormally. Technicians record the grid placement on standard forms and record the EEG. Once the foci are identified, the physician cuts off the neural pathways from the foci to the surrounding brain. In some cases, the first surgery can be reduced or eliminated through MRI or CT scanning. The 3D models generated from these scans allow the physician to plan the upcoming surgical procedure.

Figure 2. Before an overlay of MRI 3D modeling can be performed on the real-world video feed, the patient's head position must be determined in real-world coordinates. First, the MIT laser scanner scans the operational area (a). The operation is captured by the Pulnix camera and displayed on the Sun Ultra-10 workstation. The operator then uses the mouse to select the region of interest (ROI) (b), and directs the program to erase all laser points outside that area (c). 3D coordinates are determined for the laser points in the ROI and then a two-step algorithm brings the 3D model data developed by the MRI into registration with the video feed (d). Green points indicate less than 1-mm registration between the MRI and video in real-world coordinates. Once the MRI model and video feed are registered in real-world 3D coordinates, any part of the MRI model -- including the skin (e) -- can be displayed at the same time as the video overlay with registration within 1 mm.

Under the MIT design, each procedure begins with an MRI scan, using a 1.5-Tesla MRI unit from GE Medical Systems connected to a SPARC 20 computer from Sun Microsystems. MRI systems use strong magnetic fields and RF pulses to induce RF emission from the body's internal structures. Antennas arranged in a circle around the patient pick up the RF signals in slices, each 1.5 mm thick. While the system advances to the next slice, the RF intensity profiles collected during each slice are compared against one another to produce a 2D image. Pixel intensities are generated from the comparison of these profiles and from the fall-off rate of the RF signal over the brief period during which each slice is collected. Once the intensity values are known, the computer can make judgments about tissue type based on pixel intensity.

Because some tissues are close to each other in pixel gain, the MRI system at Brigham and Women's Hospital uses an automated segmentation technique that views each pixel intensity within the spatial context of the head or organ, and then compares that against a database of anatomical structures. Pixel by pixel, the 2D image is tagged with various tissue types. In the case of tumors, physicians oversee the segmentation process and use their training to separate lesions from surrounding tissue.
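As a rough illustration of the per-pixel tagging step, the sketch below assigns tissue labels from intensity ranges alone. The thresholds and class names are hypothetical, and the spatial-context and anatomical-database comparison used at Brigham and Women's is deliberately omitted; this is a minimal sketch, not the hospital's algorithm.

```python
import numpy as np

# Hypothetical intensity ranges for a few tissue classes (arbitrary units).
TISSUE_RANGES = {
    "background": (0, 50),
    "csf": (50, 110),
    "gray_matter": (110, 170),
    "white_matter": (170, 256),
}

def tag_tissues(slice_2d):
    """Tag each pixel of a 2D MRI slice with a tissue class index.

    Indices follow the insertion order of TISSUE_RANGES. A real system
    also weighs each pixel's spatial context against a database of
    anatomical structures, which this sketch omits.
    """
    labels = np.zeros(slice_2d.shape, dtype=np.uint8)
    for idx, (lo, hi) in enumerate(TISSUE_RANGES.values()):
        mask = (slice_2d >= lo) & (slice_2d < hi)
        labels[mask] = idx
    return labels

demo = np.array([[30, 80], [130, 200]])
print(tag_tissues(demo))  # one class index per pixel
```

In the real pipeline a physician can override these automatic labels, e.g. when separating a tumor from surrounding tissue.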

Once the tissue segmentation step is completed, the Sun SPARC 20 workstation takes the 2D images and stacks them to form a 3D model. Once this 3D model is completed, it is transmitted to the Sun Microsystems Ultra-10 workstation that controls the intraoperative navigational system.
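The stacking step can be sketched as follows. The 1.5-mm slice thickness comes from the article; the in-plane pixel size is an assumed parameter of this sketch.

```python
import numpy as np

SLICE_THICKNESS_MM = 1.5  # slice spacing stated for the GE scanner

def stack_slices(slices):
    """Stack a sequence of 2D slices into a 3D voxel volume."""
    return np.stack(slices, axis=0)

def voxel_to_mm(k, i, j, pixel_mm=1.0):
    """Convert a (slice, row, col) voxel index to millimetre coordinates.

    pixel_mm is the assumed in-plane pixel size; only the slice
    spacing is given in the article.
    """
    return (k * SLICE_THICKNESS_MM, i * pixel_mm, j * pixel_mm)

volume = stack_slices([np.zeros((4, 4)), np.ones((4, 4))])
print(volume.shape)      # (slices, rows, cols)
print(voxel_to_mm(2, 0, 0))
```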

The Ultra-10 workstation provides the user interface and controls the navigational display with the help of a Sun Microsystems Creator 3D graphics accelerator card and Sun's LAVA 3D display software. According to MIT researcher Michael Leventon, the majority of the computation is performed in C/C++, while the user interface is written in Sun Microsystems' Tcl/Tk. Kitware's Visualization Toolkit is used to manipulate the anatomical models and to generate the 3D displays.

Imaging systems unite

The system consists of a portable cart containing a Sun Microsystems Ultra-10 workstation and the hardware to drive the FlashPoint optical digitizer with its three linear CCD arrays, a video camera, and the MIT laser scanner. The articulated, extendible arm is mounted on top of the cart. The joint between the arm and scanning bar has three degrees of freedom to allow easy placement of the bar in desired configurations.

The cart is rolled to the table, with the articulating arm approximately 1.5 m above the table. Prior to the operation, the MIT laser scanner is calibrated by scanning an object with known dimensions. Control electronics for the laser scanner and stepper motor are connected to the Ultra-10 workstation through a serial port that constantly feeds motor position data into the computer. A 768 X 484-pixel color progressive scan video camera from Pulnix, attached to the Ultra-10 workstation through a Sun Video Plus frame grabber, captures the image of the scanned laser plane as it reflects off the calibration object. By knowing the dimensions of the calibration object, the Ultra-10 is able to triangulate the positions of the laser and video camera in real-world coordinates. Once the arm is in position over the table, the frame grabber captures a picture of the table and surrounding environment for later use.
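The triangulation at the heart of this calibration can be illustrated as a ray-plane intersection: each bright pixel defines a viewing ray from the camera, and the laser stripe lies on a plane whose pose follows from the stepper-motor angle. A minimal sketch (the geometry inputs are assumptions of the sketch, not values from the MIT code):

```python
import numpy as np

def intersect_ray_plane(cam_pos, ray_dir, plane_point, plane_normal):
    """Intersect a camera viewing ray with the known laser plane.

    The intersection point is the 3D location where the laser stripe
    struck the scanned surface. Returns None if the ray is parallel
    to the plane.
    """
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - cam_pos) / denom
    return cam_pos + t * ray_dir

# Camera at the origin looking down +z; laser plane at z = 5.
p = intersect_ray_plane(np.array([0.0, 0.0, 0.0]),
                        np.array([0.0, 0.0, 1.0]),
                        np.array([0.0, 0.0, 5.0]),
                        np.array([0.0, 0.0, 1.0]))
print(p)
```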

Figure 3. A specially designed probe developed by MIT is tracked by the Pulnix camera thanks to two LEDs attached along the probe's length. Because the system knows the length of the probe and the location of the LEDs in real-world coordinates, it can extrapolate the position of the probe's tip, even when hidden by tissue. The surgeon's navigation window can show the real-time video (top), probe location within the MRI 3D model (bottom), and tip location in sagittal and axial MRI slices (right).

At this point, the patient is placed on the table and a triangular LED marker is attached to a clamp affixed to the patient's skull. The flashing light from the LED marker is constantly tracked by the FlashPoint system through the three linear array CCDs. A pair of linear arrays (1024 X 1 pixels) is positioned along the axis of the arm, while the third array is orthogonal. The arrays, operating at 30 fps, are connected to three separate FlashPoint DSP boards in a 486-based computer. A fourth FlashPoint board triggers the triangular LED marker attached to the patient's head. By detecting the position of all three LEDs in the array along both axes, the 486 constantly determines the position of the patient's head in real time and feeds it via a serial cable to the Ultra-10. In this fashion, if the patient should move during the surgery, the change in the positions of the LEDs at each corner of the triangle will allow the Ultra-10 to calculate new reference points for the MRI model overlay.
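One standard way to recover the head's motion from the three tracked marker LEDs is a least-squares rigid fit (the SVD-based Kabsch method); whether MIT used exactly this algorithm is an assumption of the sketch below.

```python
import numpy as np

def rigid_transform(ref_pts, cur_pts):
    """Least-squares rigid motion (R, t) mapping ref_pts onto cur_pts.

    ref_pts, cur_pts: 3x3 arrays, one row per LED of the triangular
    marker (reference pose vs. current pose). Uses the Kabsch method.
    """
    ref_c = ref_pts.mean(axis=0)
    cur_c = cur_pts.mean(axis=0)
    H = (ref_pts - ref_c).T @ (cur_pts - cur_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cur_c - R @ ref_c
    return R, t

ref = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
cur = ref + np.array([1.0, 2.0, 3.0])  # pure translation of the marker
R, t = rigid_transform(ref, cur)
print(np.round(t, 3))
```

Applying the recovered (R, t) to the MRI overlay keeps it registered when the patient moves.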

Next, the laser scanner begins to scan the operational area surrounding the patient's head. From these pictures, the system subtracts the original background image from each frame, leaving only those portions of the picture that have changed -- the patient and the laser lines. The Ultra-10 workstation triangulates the real-world coordinates for the patient's head based on the known positions of the camera and scanner and the contours of the reflected laser light. Once the patient's head, skin, and features are located in real-world space, the operator roughly aligns the MRI model with the real-time video picture of the patient's head; an automated fine-tuning algorithm then refines the registration to submillimeter precision.
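The background-subtraction step can be sketched as simple frame differencing against the stored table image; the noise threshold here is a hypothetical parameter, not a value from the MIT system.

```python
import numpy as np

def changed_pixels(frame, background, threshold=25):
    """Boolean mask of pixels that differ from the stored background.

    Subtracting the pre-operation picture of the empty table leaves
    only what has changed: the patient and the reflected laser lines.
    Cast to a signed type so the subtraction cannot wrap around.
    """
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

background = np.zeros((2, 2), dtype=np.uint8)
frame = background.copy()
frame[0, 0] = 100          # a bright laser pixel appears
print(changed_pixels(frame, background))
```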

By matching the model coordinates with the real-world coordinates of the patient, a bridge is constructed whereby real-world probes and instruments can be displayed in model coordinates and, of course, internal structures only present in the model can also be displayed in real-time video.
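This bridge amounts to a single rigid transform: once the rotation and translation between real-world and model coordinates are known, any tracked point can be mapped across. A minimal sketch using a 4x4 homogeneous matrix (the function names are this sketch's own):

```python
import numpy as np

def make_bridge(R, t):
    """Build a 4x4 homogeneous transform from world to model coordinates."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def world_to_model(T, p):
    """Map a 3D world-space point (e.g. a probe tip) into model space."""
    ph = np.append(p, 1.0)
    return (T @ ph)[:3]

T = make_bridge(np.eye(3), np.array([1.0, 0.0, 0.0]))
print(world_to_model(T, np.array([0.0, 0.0, 0.0])))
```

The inverse of the same matrix maps model-only structures back into the video frame for the overlay.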

Intraoperative navigation

Although the overhead FlashPoint camera shows the surgeon's hands and instruments, a camera alone cannot allow the surgeon to see where a probe is located inside the brain. For this task, a special LED probe developed by FlashPoint makes use of the bridge between model and real-world coordinates. The probe has two LEDs located along its length, with the probe tip at a set distance from these two lights. By imaging the LEDs with the FlashPoint optical digitizer and converting their position to model coordinates, the probe's position inside the brain can be revealed in real time on the surgical navigation display.
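Extrapolating the hidden tip from the two visible LEDs is simple vector geometry; a sketch, with the LED positions and tip offset as hypothetical example values:

```python
import numpy as np

def probe_tip(led_near, led_far, tip_offset_mm):
    """Extrapolate the probe tip from its two tracked LEDs.

    led_near is the LED closer to the tip; tip_offset_mm is the known
    distance from led_near to the tip along the probe's axis. Because
    the probe is rigid, the tip lies on the line through the two LEDs
    even when it is hidden by tissue.
    """
    axis = led_near - led_far
    axis = axis / np.linalg.norm(axis)
    return led_near + tip_offset_mm * axis

tip = probe_tip(np.array([0.0, 0.0, 10.0]),  # LED nearer the tip
                np.array([0.0, 0.0, 0.0]),   # LED farther from the tip
                5.0)                          # known offset, mm
print(tip)
```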

In addition to helping the surgeon plan each move, the probe can be used to "point out" foci within the brain responsible for epilepsy. When requested by the operator, the system will remember the placement of a probe and allow the surgeon to return to that position without the use of external markers or tags.

According to MIT researchers, the intraoperative surgical navigator has performed extremely well, reducing operational time and cost. An early cost/benefit analysis indicates that the system could reduce the cost of neurosurgical operations by $1000 to $5000 per operation, primarily due to reduced operating times.

The system could benefit from several improvements, Leventon added. By including two cameras instead of one during the registration/alignment step, the MIT system could image a larger portion of the patient's face, including facial features such as eyes, nose, and mouth. These features are very helpful in registering the MRI model to the real-time video because, while every patient has these features, their shape and position are unique to each individual. This feature would also allow cranial scanning during the procedure, allowing the system to adjust to brain swelling that naturally occurs when the skull cap is removed. Currently, swelling can introduce coordinate errors of 1 to 2 mm.

Other improvements could include flat-panel displays that could swivel into the surgeon's field of view or headset displays so that he or she would not need to look away from the patient when exploring tissue below the surface. By incorporating a head tracking system for the surgeon into these features, parallax (differences in the virtual camera and surgeon's point of view) could also be eliminated.

Company Info:

Department of Electrical Engineering and Computer Science
MIT 38-401
Cambridge, MA 02139-4307
Phone: (1) 617/253-4600
Fax: (1) 617/258-7354
Web: www-eecs.mit.edu

Sun Microsystems

901 San Antonio Rd.
Palo Alto, CA 94303
Phone: (1) 650/960-1300
Fax: (1) 650/969-9131
Web: www.sun.com

PULNiX America, Inc.

1330 Orleans Dr.
Sunnyvale, CA 94089
Phone: (1) 800/445-5444
Phone outside North America:
(1) 408/747-0300
Fax: (1) 408/747-0880
Web: www.pulnix.com


152 N. Third Street, Suite 800
San Jose, CA 95112
Phone: (1) 408/795-4900
or (1) 888-820-3644
Fax: (1) 408/795-5050

GE Medical Systems

3000 N. Grandview Blvd.
Waukesha, WI 53188
Phone: (1) 414/544-3011
Fax: (1) 414/544-3384
Web: www.ge.com/medical

Kitware, Inc.

469 Clifton Corporate Parkway
Clifton Park, NY 12065
Phone: (1) 518/371-3971
Fax: (1) 518/371-3971
Web: www.kitware.com

R. Winn Hardin

R. Winn Hardin is a science and technology writer based in Jacksonville, FL.