Bringing wireless video sensor networks into practice

Large-scale wireless video sensor networks could have important applications in health monitoring, environmental tracking, and surveillance.
04 April 2007
Zhihai He

Recent history shows a clear trend of pushing the research frontier of machine-based signal processing. The technology has evolved rapidly from handling one-dimensional data, such as speech and acoustic signals (the telephone), to 2D image processing (fax machines and cameras), to 3D video processing and computer vision (digital video and virtual reality). This progression reflects the fact that vision is the dominant channel through which people perceive the world. Visual information, coupled with intelligent vision processing, provides rich and important data for situational awareness, event understanding, and decision making.

Currently, much research effort is focused on developing ‘simple’ sensors, such as acoustic, temperature, and moisture detectors. My colleagues and I at the Video Processing and Networking Lab at the University of Missouri, Columbia (UMC), believe that during the coming years, the research and development community will continue to make significant strides in sensor design, wireless communication, and signal processing to develop large-scale, low-cost, and more sophisticated sensor networks for intelligent information gathering. Using such advances to bring about wireless video sensor networks (WVSNs) could have wide-ranging societal effects.


Figure 1. This system diagram illustrates the DeerNet wireless video sensor network.

A recent National Science Foundation (NSF) Workshop on Environmental Sensor Networks determined that wireless video sensors will enable major scientific progress in wildlife research. The US Department of Defense, the Defense Advanced Research Projects Agency (DARPA), and the US Air Force Research Laboratory envision rapidly deployable, self-assembling wireless sensor networks carried by cooperative teams of micro unmanned air vehicles (UAVs), offering unprecedented capability in battlefield intelligence and information dominance. The Department of Homeland Security also recognizes that large-scale, low-cost WVSNs will play a vital role in infrastructure security and emergency response. Two sample research projects demonstrate the potential of WVSNs.

Since November 2003, funded by the NSF, we have been collaborating with the Sinclair School of Nursing at UMC to develop a WVSN that helps elderly people live independently.1 The WVSN will collect visual information about their activities at home and use intelligent vision-processing algorithms to detect abnormal activities, such as falls and unusual behavior patterns. When such behavior is detected, an alert will automatically be sent to caregivers for immediate attention.
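
To make the idea concrete, the core detect-and-alert loop might look like the minimal Python sketch below. This is an illustration only, assuming a stand-in classifier and a hypothetical notify_caregivers hook; it is not the project's actual implementation.

```python
import time

# Labels the (hypothetical) vision classifier treats as abnormal events.
ABNORMAL = {"fall", "prolonged_inactivity"}

def notify_caregivers(event, timestamp):
    # Placeholder alert hook: a deployed system might page a nurse
    # station or send a message to family members.
    print(f"ALERT [{timestamp}] abnormal activity detected: {event}")

def monitor(frames, classify):
    """Classify each incoming frame and alert on abnormal activity.

    `frames` is any iterable of video frames; `classify` stands in for
    the trained activity-recognition model described in the article.
    """
    for frame in frames:
        label = classify(frame)
        if label in ABNORMAL:
            notify_caregivers(label, time.strftime("%H:%M:%S"))

# Toy run with simulated classifier output in place of real video:
simulated_labels = ["walking", "sitting", "fall", "sitting"]
monitor(simulated_labels, classify=lambda frame: frame)
```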

The activity-monitoring video recorded over an extended period of time will be analyzed using a statistical model and fused with other sensor data, such as information about motion and gait, to assess a person's functional status. This will essentially transform a video camera into a health-watch system. The activity-monitoring video will also be distilled into a compact database so that caregivers, nurses, or doctors can review hundreds of hours of activity-monitoring videos within a couple of hours to find indicators for potential nursing or medical intervention. This will significantly improve the efficiency of eldercare practice.
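
One simple way to distill long recordings into such a compact database is key-frame selection: keep a frame only when it differs substantially from the last frame kept. The sketch below illustrates the idea with a mean-absolute-difference test; the threshold and frame representation are assumptions for illustration, not the statistical models the project actually uses.

```python
import numpy as np

def summarize(frames, threshold=25.0):
    """Return indices of key frames via mean absolute pixel change."""
    keyframes = [0]                          # always keep the first frame
    last = np.asarray(frames[0], dtype=float)
    for i, frame in enumerate(frames[1:], start=1):
        cur = np.asarray(frame, dtype=float)
        if np.abs(cur - last).mean() > threshold:
            keyframes.append(i)              # scene changed: keep this frame
            last = cur
    return keyframes

# Toy run: a mostly static scene with a burst of activity at frames 4-5.
video = [np.zeros((8, 8)) for _ in range(10)]
video[4] = video[5] = np.full((8, 8), 200.0)
print(summarize(video))  # -> [0, 4, 6]: start, activity onset, return to rest
```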

Another example of a WVSN is our DeerNet project (see Figure 1).2,3 Supported by the NSF, we have been collaborating with the UMC Agriculture School and the Missouri Department of Conservation to develop a network of low-power multisensor systems. The goal is to collect important visual information about wildlife activities, understand the animals' fine-scale behavior, and monitor their close interactions for disease-propagation modeling. State-of-the-art wildlife-tracking technologies, including radio and GPS (global positioning system) tracking, can provide location information. With location alone, however, we do not know what the animals are doing, or how or why they are doing it. Using images and videos collected from animal-mounted cameras, we can apply advanced scene-classification and object-recognition algorithms and fuse the results with data from other sensors (e.g., GPS and motion) to extract important visual information. We can then develop statistical models of the animals' food selection, activity patterns, and close interactions.
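
As a deliberately simplified illustration of this kind of fusion, the sketch below combines a scene-classification label from the animal-mounted camera with speed derived from consecutive GPS fixes to infer a coarse behavior state. The labels, the 1.5m/s threshold, and the rules are hypothetical assumptions, not DeerNet's actual models.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    scene: str        # hypothetical scene-classifier output for one image
    speed_mps: float  # speed estimated from consecutive GPS fixes

def infer_behavior(s: Sample) -> str:
    """Rule-based fusion of vision and GPS cues into a behavior state."""
    if s.speed_mps > 1.5:          # assumed threshold for directed travel
        return "traveling"
    if s.scene == "vegetation":    # slow movement amid forage
        return "foraging"
    return "resting"

track = [
    Sample(scene="vegetation", speed_mps=0.2),
    Sample(scene="open_field", speed_mps=2.4),
    Sample(scene="open_field", speed_mps=0.1),
]
print([infer_behavior(s) for s in track])  # ['foraging', 'traveling', 'resting']
```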

Realizing large-scale WVSNs and exploiting their promise in health and security monitoring, environmental tracking, and battlefield surveillance requires carefully addressing a number of major research issues. These include low-power video-compression system design, intelligent wireless transmission scheduling and energy minimization, control and optimization of network quality of service, and visual-information aggregation, fusion, and summarization.


Zhihai He
University of Missouri
Columbia, MO

Zhihai He is an assistant professor of electrical engineering at the University of Missouri, Columbia. He earned his PhD degree from the University of California, Santa Barbara.

