Light Constructions - Low-bandwidth vision system allows independent robot navigation

From OE Reports Number 183 - March 1999
01 March 1999
Sunny Bains

Navigating around cities is hard enough for humans, never mind robots. However, robotic navigation must be solved if we are to build systems (such as smart cars) that can help us find our way around, or that can explore autonomously. No longer the stuff of science fiction, such technology is being studied at Siemens Corporate Research (Princeton, NJ), where a navigation system has been developed that may be practical enough for the real world: it is fast, cheap, and can recover when it gets lost. In addition, it doesn't rely on radio signals, beacons, or the satellite-based Global Positioning System (GPS) for guidance. Based on its own experience, it can look around and determine its location, even in the thick of New York City.

The key to the system's robustness is that it does not rely on any single sensor or piece of information; it has four major sources. The first is an internal skeletal map, with roads, road lengths, and junctions marked. The second is knowledge of its starting point, provided either by the user or by storing its last location in memory. Third is an odometer, which encodes the distance traveled by the vehicle. Finally, the system has a panoramic eye that allows it to see in all directions at once (in the horizontal plane).


Figure 1. Panoramic CCD image taken from the top of a Siemens test vehicle. (Siemens Corporate Research, Inc.)

Figure 2. The core of the navigation system: a 120-byte "strip."

Though Siemens engineers used a charge-coupled device (CCD) camera in their prototype system, only 120 bytes of information are actually needed. The system uses a strip of 120 pixel values, each corresponding to the average value of the pixels at one of 120 locations around the panoramic view. This can be understood more clearly by looking at figures 1 and 2. Figure 1 shows an image taken from a CCD camera located on top of a Siemens test vehicle. The image is of the bottom of a mirrored ball (a Christmas tree decoration) that reflects the surrounding view into the camera. The central black area is the reflection of the camera itself. The ring that contains the information about the outside world is unwrapped, cut into 120 chunks, and each chunk is averaged. The result, known as a "strip," is shown in figure 2. Strips like this, with a little extra processing, are what the entire system is built on.
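To make the strip computation concrete, here is a minimal sketch in Python. Everything in it (the use of NumPy, the function name extract_strip, the grayscale image, and the ring radii passed in by the caller) is an assumption for illustration; the paper's actual preprocessing may differ:

    import numpy as np

    def extract_strip(panorama, center, r_inner, r_outer, n_sectors=120):
        """Reduce a panoramic (mirrored-ball) image to a 120-byte strip.

        Assumes a grayscale image; each of the n_sectors angular sectors
        of the ring between r_inner and r_outer is averaged to one byte.
        """
        h, w = panorama.shape
        cy, cx = center
        ys, xs = np.mgrid[0:h, 0:w]
        r = np.hypot(ys - cy, xs - cx)             # radius of every pixel
        theta = np.arctan2(ys - cy, xs - cx)       # angle in (-pi, pi]
        in_ring = (r >= r_inner) & (r <= r_outer)  # keep only the mirror ring
        sector = ((theta + np.pi) / (2 * np.pi) * n_sectors).astype(int) % n_sectors
        strip = np.zeros(n_sectors)
        for k in range(n_sectors):
            mask = in_ring & (sector == k)
            strip[k] = panorama[mask].mean()       # average one angular chunk
        return strip.astype(np.uint8)              # one byte per sector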

To make use of this information, the flow of strips gathered on various charting expeditions is first analyzed by an algorithm that looks for landmarks, but in a very narrow sense. A landmark is something that

  • looks relatively consistent/similar (in terms of strips) over a reasonable distance as it is approached/left behind
  • has relatively consistent scenery in the surrounding area
  • looks very different from the areas around it.

Because of the time taken to process images, these landmarks are also chosen to be reasonably sparse (tens of meters away from each other). These landmarks are then programmed into a set of neural networks.
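One way to picture this selection, assuming strips are compared with a simple Euclidean distance (the paper's actual measure and window sizes may differ), is a score that rewards consistency nearby and distinctness farther away:

    import numpy as np

    def strip_distance(a, b):
        """Euclidean distance between two strips (an assumed metric)."""
        return float(np.linalg.norm(a.astype(float) - b.astype(float)))

    def landmark_score(strips, i, near=5, far=20):
        """Score position i in a sequence of strips as a landmark candidate.

        A good landmark varies little over the `near` strips around it
        (consistent on approach) but differs strongly from strips at
        least `far` positions away. Window sizes are illustrative only.
        """
        neighbors = strips[max(0, i - near): i + near + 1]
        consistency = np.mean([strip_distance(strips[i], s) for s in neighbors])
        remote = (strips[max(0, i - 2 * far): max(0, i - far)]
                  + strips[i + far: i + 2 * far + 1])
        if not remote:
            return 0.0                      # too close to the route's ends
        distinctness = np.mean([strip_distance(strips[i], s) for s in remote])
        return distinctness - consistency   # larger means a better landmark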

When the system is used, it weighs all the information at its disposal. First, it performs dead-reckoning calculations based on odometer readings and any turns that have been made. (Turns are detected by checking whether consecutive strips are just horizontally shifted versions of each other; if they are, the amount of shift gives the amount of turning.) Based on the dead reckoning, the vehicle approximates where it should be on the map and looks for landmarks within a short distance of that point. How large this area of interest is (in map terms) depends on how confident the device is that the hypothetical location is correct, which in turn depends on how well the various pieces of information agree.
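The turn-from-shift idea lends itself to a compact sketch. Assuming the 120-element strip wraps around a full 360 degrees, so that each position covers 3 degrees, the best circular alignment between consecutive strips gives the heading change (the function below is an illustration, not the Siemens code):

    import numpy as np

    def estimate_turn(prev, curr):
        """Estimate the heading change, in degrees, between two strips.

        Tries every circular shift of the current strip and keeps the
        one that best matches the previous strip.
        """
        n = len(prev)
        errors = [np.sum((np.roll(curr, s).astype(float) - prev) ** 2)
                  for s in range(n)]
        best = int(np.argmin(errors))
        if best > n // 2:           # treat shifts past 180 deg as negative turns
            best -= n
        return best * (360.0 / n)   # 120 sectors -> 3 degrees per position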

Incoming data about nearby landmarks are then fed into the neural networks, and the strength of the matches is used to reinforce or weaken the machine's hypotheses about where it might be. When the confidence level for a particular location reaches a certain threshold, the navigation system asserts that location and gives directions accordingly.
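One plausible way to picture this evidence accumulation, assuming each network reports a match strength between 0 and 1 (the update rule and constants here are hypothetical, not taken from the paper):

    def update_confidence(confidence, match, gain=0.3, threshold=0.9):
        """Nudge a location hypothesis up or down on landmark evidence.

        `match` is a network's match strength in [0, 1]; matches above
        0.5 reinforce the hypothesis, weaker ones erode it. `gain` and
        `threshold` are illustrative constants.
        """
        confidence += gain * (match - 0.5)           # reinforce or weaken
        confidence = min(1.0, max(0.0, confidence))  # clamp to [0, 1]
        return confidence, confidence >= threshold   # assert location?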

The system has been tested on a few kilometers of local highways, suburban roads, and in Manhattan. Researchers found that the location accuracy of the system varied between about 15 m and 50 m (generally better than standard GPS) and that it could find its correct location despite faulty odometer readings, incorrect starting information, or after wandering into uncharted territory and then returning. It was successful both day and night and in different seasons (snowy conditions were not tested), although nighttime positioning was at the worse end of the range. Finally, the researchers have designed low-cost navigation hardware that removes the need for a CCD camera and cuts out some preprocessing steps.

Reference

1. Long-Ji Lin, Thomas R. Hancock, and J. Stephen Judd, "A robust landmark-based system for vehicle location using low-bandwidth vision," Robotics and Autonomous Systems 25, pp. 19-32, 1998.


Sunny Bains
Sunny Bains is a scientist and journalist based in the San Francisco Bay Area. http://www.sunnybains.com
