The intelligent flying eye

A micro air vehicle with onboard intelligence can maneuver in hard-to-reach parts of disaster areas and provide video images for search-and-rescue operations.
13 January 2011
Lorenz Meier, Friedrich Fraundorfer and Marc Pollefeys

Search-and-rescue operations can be very difficult. Consider a typical scenario: a partially collapsed building that may contain trapped survivors. Those people are surrounded by debris that presents a formidable obstacle to searchers. How do you find the survivors, and how do you get them out safely?

The most important task in such a situation is to narrow down the search area and guide the rescue workers to the right location. The rescuers also need to know the inner state of the building to assess the damage, anticipate likely risks, and determine the best rescue strategy. Any ground-based robotic device that is sent into the damaged area to obtain this information will very likely get entangled in rubble. So what are the alternatives?

What if you could send in a tiny helicopter, a micro air vehicle (MAV), to do the job? A MAV benefits from its vertical degree of freedom: it can evade obstacles on floors by changing altitude, and it can use vertical spaces, such as elevator shafts, to quickly access the most badly damaged levels. Within minutes, it can bring back video images of the wrecked interior. In addition to showing the locations of survivors, the images can help civil engineers assess the stability of the structure to help protect both rescue workers and the victims they are trying to save.


Figure 1. Operational model of the PIXHAWK Cheetah.

The PIXHAWK Cheetah MAV (see Figure 1) is such a device. With onboard intelligence and autonomous-navigation capabilities, the Cheetah can be an extremely useful tool for rescue missions in hazardous environments. This four-rotor MAV is 55cm in diameter, measured across its rotor blades. Powered by lithium-polymer batteries, the Cheetah can fly for up to 12 minutes. It features four cameras with flexible mountings and an onboard image-processing computer (see Figure 2). The Cheetah's computer-vision system enables it to assess and solve navigational problems with no outside help.


Figure 2. Computer-aided-design model of the PIXHAWK Cheetah, showing the four camera mounts.

Some robotic devices send back images by means of wireless communication. Current MAV designs use wireless image transmission and offboard processing1,2 or the global positioning system (GPS) to control the vehicle.3 A wireless-transmission system, however, has limitations. Modern buildings often contain large amounts of reinforced concrete, which blocks radio signals, so wireless communication can be problematic or infeasible in many situations. Ground robots get around this problem by dragging a data-link cable behind them, but a cable can easily get stuck in debris, trapping the robot.

A better solution is a MAV that records images and brings the video data back to the operator. Although the Cheetah offers wireless transmission as an option, it does not rely on it. Its autonomous-navigation system enables it to record video images and find its way back to the operator in minutes.

The Cheetah's onboard PIXHAWK software has been designed specifically for computer vision. Stereo-vision video images are stored on a central image hub, from which multiple image-processing pipelines can read in parallel, for tasks such as localization (identification of position), mapping, and pattern recognition (see Figure 3). Alongside this navigation pipeline, the system can detect and track objects, for example, by adjusting a camera's field of view to keep a particular object in sight.
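To make the hub-and-pipelines idea concrete, the sketch below shows one simple way an image hub can fan timestamped frames out to several consumers running in parallel. It is an illustration only: all class and function names are hypothetical, and the actual PIXHAWK middleware is not written this way.

import queue
import threading
import time

class ImageHub:
    """One producer (the camera driver) fans each timestamped frame out
    to every subscribed pipeline, so localization, mapping, and pattern
    recognition can all read the same images in parallel."""

    def __init__(self):
        self._subscribers = []

    def subscribe(self, maxsize=4):
        q = queue.Queue(maxsize=maxsize)
        self._subscribers.append(q)
        return q

    def publish(self, frame):
        stamped = (time.monotonic(), frame)  # keep the capture time with the image
        for q in self._subscribers:
            try:
                q.put_nowait(stamped)
            except queue.Full:
                pass  # a slow pipeline drops frames instead of stalling the others

def pipeline(name, q):
    while True:
        stamp, frame = q.get()
        # ... run localization, mapping, or pattern recognition on the frame ...
        print(f"{name}: processing frame captured at t={stamp:.3f}s")

hub = ImageHub()
for name in ("localization", "mapping", "pattern-recognition"):
    threading.Thread(target=pipeline, args=(name, hub.subscribe()), daemon=True).start()

# The camera driver then simply calls hub.publish(frame) for every new image.

Decoupling the camera driver from its consumers this way means a slow pipeline (such as mapping) never delays a fast, flight-critical one (such as localization).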


Figure 3. Schematic of the PIXHAWK onboard pattern-recognition system. The system identifies distinctive parts of images, such as unique edges, and matches those elements against a database of stored patterns that the system is searching for, such as human faces.
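As a rough illustration of the matching step in Figure 3, the sketch below compares keypoint descriptors extracted from a camera frame against descriptors precomputed from a stored pattern. It uses OpenCV's ORB features for brevity; the article does not specify which feature type the onboard system actually uses, and the thresholds here are placeholders.

import cv2

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def matches_pattern(frame_gray, pattern_desc, min_matches=25):
    """Return True if enough keypoints in the frame match a stored pattern."""
    _, frame_desc = orb.detectAndCompute(frame_gray, None)
    if frame_desc is None:  # featureless frame (e.g., a blank wall)
        return False
    matches = matcher.match(pattern_desc, frame_desc)
    good = [m for m in matches if m.distance < 40]  # placeholder threshold
    return len(good) >= min_matches

# The pattern database is built once, offline:
# pattern = cv2.imread("face_pattern.png", cv2.IMREAD_GRAYSCALE)
# _, pattern_desc = orb.detectAndCompute(pattern, None)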

A MAV that relies heavily on visual images presents a challenge to designers. In contrast to systems that rely mainly on dedicated sensors, such as GPS receivers or lidar (light detection and ranging), a vision-based system works more slowly: the more detailed the video images, the longer the processing takes. This lag can present navigational hurdles for an autonomous MAV.

The PIXHAWK system design has a key feature, novel for MAV-sized vehicles, that compensates for this problem. The system keeps precise track of when each image or measurement is taken and records the inertial state of the vehicle from one moment to the next, a capability enabled by the custom-designed PIXHAWK electronics. The fusion of visual and inertial data allows the vehicle to know its position at all times and to see and avoid potential trouble spots in its path. As the system evolves, new algorithms can be derived to solve classic vision-only problems, such as estimating the current camera pose, which can improve the speed and robustness of localization.4
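The bookkeeping behind this timestamp matching can be sketched as a small buffer of recent inertial states: when the slower vision pipeline finally returns a pose estimate for an image, the inertial state recorded at that image's capture time is looked up and used in the fusion. The structure below is a simplified, hypothetical illustration, not the actual PIXHAWK firmware.

import bisect

class StateBuffer:
    """Buffer recent inertial states so a delayed vision result can be
    fused with the state that was valid when its image was captured."""

    def __init__(self, horizon=2.0):
        self._times = []   # monotonically increasing timestamps
        self._states = []  # inertial state recorded at each timestamp
        self._horizon = horizon  # seconds of history to keep

    def push(self, t, state):
        self._times.append(t)
        self._states.append(state)
        while self._times[0] < t - self._horizon:  # discard stale entries
            self._times.pop(0)
            self._states.pop(0)

    def state_at(self, t):
        """Return the buffered state closest to capture time t."""
        if not self._times:
            return None
        i = bisect.bisect_left(self._times, t)
        candidates = [j for j in (i - 1, i) if 0 <= j < len(self._times)]
        return self._states[min(candidates, key=lambda j: abs(self._times[j] - t))]

# When a vision pose arrives for an image captured at t_capture:
# inertial = buffer.state_at(t_capture)
# fused = fuse(vision_pose, inertial)  # e.g., a filter update step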

The custom PIXHAWK hardware and software system design has been proved airworthy in several public demonstrations.5 The PIXHAWK team won the first-place award in indoor autonomous navigation at the 2009 European Micro Air Vehicle Conference and Flight Competition. The system was also successfully demonstrated at the 2010 European Conference on Computer Vision, one of the three major events in the field.

While our early work focused on a novel, tightly synchronized system design built on well-established visual-localization techniques, our research is now moving in a new direction. We are focusing on improved computer algorithms for vision-based localization and obstacle avoidance using passive sensors in addition to video images. We are confident that this work will lead to MAVs with even greater autonomous capabilities in disaster-zone environments.


Lorenz Meier, Friedrich Fraundorfer, Marc Pollefeys
Computer Vision and Geometry Laboratory
Department of Computer Science
ETH Zurich
Zurich, Switzerland

References:
2. M. Blösch, S. Weiss, D. Scaramuzza, R. Siegwart, Vision-based MAV navigation in unknown and unstructured environments, Proc. IEEE Int'l Conf. Robot. Automat. (ICRA), pp. 21-28, 2010. doi:10.1109/ROBOT.2010.5509920
3. G. Hoffmann, D. Rajnarayan, S. Waslander, D. Dostal, J. S. Jang, C. Tomlin, The Stanford testbed of autonomous rotorcraft for multi agent control (STARMAC), Proc. Dig. Avion. Syst. Conf. (DASC04), pp. 12.E.4-121.10, 2004.
4. F. Fraundorfer, P. Tanskanen, M. Pollefeys, A minimal case solution to the calibrated relative pose problem for the case of two known orientation angles, Comput. Vis. ECCV, Lect. Notes Comput. Sci. 6314, pp. 269-282, 2010.
5. L. Meier, P. Tanskanen, F. Fraundorfer, M. Pollefeys, PIXHAWK competition and demonstration videos. http://pixhawk.ethz.ch/videos/ Accessed 17 October 2010.