
Biomedical Optics & Medical Imaging

Light Constructions - Bridging the gap between hand and eye

From OE Reports Number 193 - January 2000
31 January 2000, SPIE Newsroom. DOI: 10.1117/2.6200001.0002

Researchers at the California Institute of Technology (Caltech) are taking an ambitious step toward the cybernetic enhancement of disabled patients. If it is successful, the project could eventually allow people who are paralyzed or have lost limbs to control prostheses using neural implants.

So far, work has concentrated on translating a plan to reach for something, in this case formulated in the mind of a monkey, into a signal that can be acted on by a machine. The next step will be to allow brain-directed movement of a virtual arm (an image of an arm that can be seen on a display). Eventually, researchers hope, users will be able to control a robotic device by simply thinking about how they want it to move.


Figure 1. Prosthetic system concept: where the spinal cord or another link between brain and limb is broken, a neural probe could determine the subject's intention and use that information to control a prosthetic arm.

A spinal cord injury can prevent the use of a perfectly healthy limb (Figure 1); signals from the brain simply cannot reach their destination, and the result is paralysis. There are many other conditions that leave patients similarly disabled, either because they are missing an arm or leg entirely, or because they have sustained damage to one or more links in the chain between limb and brain. Caltech researchers intend to directly probe the relevant parts of the brain using implants. By looking at which neurons are firing and how quickly, they hope to figure out how the patient intends to move, and then to make this happen.1

Probing brains

To attempt the first stage of this project, researchers have had to ask a number of hard questions. In particular, they have had to figure out what part of the brain generates the most useful signals for this kind of control, how to probe that part of the brain, and how to interpret the signals they receive. None of these is an easy problem.

Caltech scientists have been working on the first part of this puzzle for some time. They have determined that the best place to find the brain's plan, or intention, to reach for something -- as opposed to motion commands specific to the "biological hardware" doing the reaching -- is an area they dubbed the parietal reach region (PRR). In both humans and monkeys, this is located in the posterior parietal cortex (PPC) of the brain, which in turn lies on the pathway between the visual cortex and the frontal lobe. The PPC seems to specialize in planning several different types of movement, especially those that depend heavily on, or are important to, the visual system.


Figure 2. California Institute of Technology researchers are using a Bionic Technologies probe array (right and bottom right) that picks up neural signals (middle). This is implanted in the parietal reach region, located in a fold of the brain between the visual and motor cortex. Probe photos courtesy Bionic Technologies.

Within the PRR, individual neurons have been found to fire only if a proposed reach passes through the position in 3D space to which they correspond (their receptive field). In essence, the distribution of neural firing acts as a kind of sketch of the intended movement. The endpoint of the reach is encoded with respect to the current eye position, and so changes with visual attention during the reach.2
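This eye-centered encoding can be illustrated with a toy calculation: if the reach goal is stored relative to the current gaze point, the stored vector shifts whenever the eyes move, yet the goal in body coordinates can always be recovered by adding the gaze position back in. The following sketch uses entirely invented coordinates and function names; it is only an illustration of the coordinate transform, not the researchers' model:

```python
# Toy illustration of eye-centered (retinotopic) coding of a reach goal.
# A PRR-like cell stores the reach endpoint relative to where the eyes
# are pointing, so the same world target yields a different stored
# vector when gaze shifts. All numbers are invented for illustration.

def to_eye_centered(target_xyz, eye_xyz):
    """Express a reach target relative to the current gaze point."""
    return tuple(t - e for t, e in zip(target_xyz, eye_xyz))

def to_body_centered(eye_centered_xyz, eye_xyz):
    """Recover the target in body coordinates by adding gaze back in."""
    return tuple(c + e for c, e in zip(eye_centered_xyz, eye_xyz))

target = (30.0, 10.0, 40.0)   # reach goal, body-centered (cm)
gaze_a = (0.0, 0.0, 50.0)     # eyes fixating straight ahead
gaze_b = (20.0, 0.0, 50.0)    # gaze shifted to the right

# The stored (eye-centered) vector changes with gaze...
print(to_eye_centered(target, gaze_a))  # (30.0, 10.0, -10.0)
print(to_eye_centered(target, gaze_b))  # (10.0, 10.0, -10.0)

# ...but either one recovers the same world target.
assert to_body_centered(to_eye_centered(target, gaze_b), gaze_b) == target
```

The same world target thus has two different neural representations depending on where the animal is looking, which is why a decoder must track eye position during the reach.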

Having identified the correct brain area, the researchers' next problem was how to get signals out of it. Unfortunately, external methods of probing the brain, such as magnetic resonance imaging, could not be used because their resolution, both temporal and spatial, is too coarse. A physical probe was necessary. Fortunately, the Caltech team was able to find biologically compatible neural implants, made using micromachining techniques, that they customized to fit their requirements. The original probe array, from Bionic Technologies (Salt Lake City, UT), contained 100 electrodes (Figure 2). In the Caltech experiments, this number was cut down to 25, of which only 22 were wired up, to simplify signal processing.

Though finding the hardware was easy, placing it inside the brain was much more of a challenge. Because the brain is topologically a flat structure that's "scrunched up" inside the skull, much of it is not immediately accessible because it falls inside one of the folds. This is true of the PRR. To overcome this problem, the surgical team had to devise a new procedure to insert the probe, one that involved opening up the brain fold. A probe was successfully inserted into the PRR of a monkey in May 1999, an operation that researchers say may be the first of its type.

Reading minds

A very different, but equally important, problem is that of interpreting the electrical signals picked up by the probe. The first difficulty is that each electrode picks up signals from many different neurons. Traditionally, this kind of source identification has been done by hand, with the experimenter searching for correlations or clusters in various physical parameters. More recently, however, a Caltech PhD student developed a statistics-based software suite that both sorts the waveforms and rates its own performance.
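At heart, this sorting step is an unsupervised clustering problem: each recorded spike is reduced to a few waveform features, and spikes whose features cluster together are attributed to the same neuron. The article does not describe the Caltech software's internals, so the following is only a generic illustration of the idea, using a tiny hand-rolled k-means on invented (amplitude, width) features:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Minimal k-means: assign each point to its nearest centre, then
    recompute each centre as the mean of its assigned points."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centres[j])))
            groups[nearest].append(p)
        centres = [tuple(sum(vals) / len(g) for vals in zip(*g)) if g
                   else centres[i]
                   for i, g in enumerate(groups)]
    return centres, groups

# Invented spike features: (peak amplitude in microvolts, width in ms).
# Two underlying "neurons" with distinct waveform shapes.
spikes = [(80, 0.40), (82, 0.41), (79, 0.38), (83, 0.42),
          (150, 0.90), (148, 0.88), (151, 0.92), (154, 0.95)]

centres, groups = kmeans(spikes, k=2)
print(sorted(len(g) for g in groups))  # [4, 4]: the two units separate
```

Real spike sorters work on noisier, higher-dimensional features and must also judge how well separated the clusters are, which is presumably what "rates its own performance" refers to.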

Once the individual waveforms, or spike trains, have been isolated, the next issue is to interpret what they mean. The researchers do this by adding up probabilities. For instance, say an electrode receives signals from three neurons (zero to three is typical). The higher the electrical activity of any one neuron, the more likely it is that its receptive field (the area in space it corresponds to) lies on the reach trajectory. By summing all the signals, weighted by these likelihoods, the intended reach can be extracted. It turns out that to estimate the reach reasonably accurately, it is only necessary to analyze the signals from about 50 different neurons. This should be possible in real time with the Caltech team's new multichannel acquisition processor, a machine that should allow them to download and process incoming neural data quickly.
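This "adding up probabilities" amounts to a population estimate: each neuron votes for its receptive-field location, weighted by how strongly it fires. A minimal sketch of that idea, using a firing-rate-weighted average over invented receptive-field centres (this is an assumption-laden toy, not the Caltech team's actual estimator):

```python
def decode_reach(receptive_fields, rates):
    """Estimate the reach endpoint as the firing-rate-weighted average
    of the neurons' receptive-field centres (all values invented)."""
    total = sum(rates)
    return tuple(
        sum(rate * rf[d] for rf, rate in zip(receptive_fields, rates)) / total
        for d in range(3)
    )

# Invented receptive-field centres (x, y, z in cm) for four neurons.
fields = [(10, 0, 30), (20, 0, 30), (30, 0, 30), (40, 0, 30)]
# Firing rates (spikes/s): activity peaks for the neuron at x = 30.
rates = [2, 8, 40, 10]

print(decode_reach(fields, rates))  # roughly (29.7, 0.0, 30.0)
```

With about 50 such neurons contributing, each weighted vote only needs a few multiplications and additions per time step, which is why the estimate is plausible in real time on modest hardware.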

After that, the goal is to hook up implanted monkeys and train them to look at a target (a red spot on a screen) and to reach toward a second target (a flashed green spot). In the delay between the brain forming a plan of action and the actual movement of the arm, data from the PRR will be used to move a virtual arm. If it reaches its target, the monkey will be rewarded. Thus, it is hoped, the researchers will be able to train the animals to perform the virtual reach task using their minds alone.

References:

1. K. V. Shenoy, S. A. Kureshi, D. Meeker, R. A. Andersen, B. L. Gillikin, D. J. Dubowitz, A. P. Batista, C. A. Buneo, S. Cao, J. W. Burdick, Toward prosthetic systems controlled by parietal cortex, http://www.cnse.caltech.edu/Research01/biology.shtml  

2. A. P. Batista, C. A. Buneo, L. H. Snyder, R. A. Andersen, Reach plans in eye-centered coordinates, Science 285, pp. 257-260, 9 July 1999.


Sunny Bains

Sunny Bains is a scientist and writer based in the San Francisco Bay area.