Mobile sensor networks (MSNs) have wide applications in military target detection and tracking, and search and rescue following disasters, to give just two examples. They consist of a collection of sensor nodes that move in space over time. Each mobile node can be equipped with a variety of sensors, such as laser range finders, cameras, sonar, and heat detectors. Most of these devices operate on the basis of a scalar field, i.e., they associate a value to every point in the area that they cover. A key challenge is to conduct scalar field mapping over a large area of interest by getting several sensors to share data at the same time (known as multi-agent cooperative sensing). Solutions to scalar field mapping that rely on interactions between a single central and several ancillary sensor nodes may not suit large MSNs because the integrity of the entire network depends on the reliability of the central node. These approaches also have limited scalability because all the sensors need to send data back to the central node, which generates large amounts of data and results in delays in communication.
Figure 1. Framework of the distributed sensor fusion algorithm.
Figure 2. Snapshots of multiple mobile sensors flocking together and building a map of the scalar field. Each sensor observed the cells within its sensing range (blue circle) before exchanging these observations with its neighbors and obtaining the final estimate of the value at the observed cells through consensus filters. In these snapshots, only two mobile sensors (blue squares) have information about the virtual leader. The white line represents the trajectory of these two informed mobile sensors.
Cooperative sensing in MSNs has recently been studied by researchers in control engineering.1–4 Most existing work in this area focuses on target tracking,5 environmental sampling, modeling, and coverage,1, 4 and radiation mapping.2 Scalar field mapping based on multi-agent cooperative sensing remains an open research problem. Existing mapping work does not incorporate estimate confidence (i.e., weight) as part of its sensor quality control, which makes accuracy difficult to evaluate. By contrast, in our approach each sensor interacts only with its neighbors. It then uses local observations, each with its own confidence, to find the best estimate of the value at each location in the scalar field.
To build a scalar field map, we first developed a distributed sensor fusion algorithm—designed to integrate the measurements from each sensor and its neighbors (see Figure 1)—to estimate the value of the field as the sensors moved. We partitioned the field into grid cells and associated with each cell a constant scalar value, estimated by combining each sensor's own measurement with those of other nearby sensors.
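To illustrate the per-cell fusion step, the following is a minimal sketch of confidence-weighted averaging for a single grid cell. The function name, the use of inverse measurement variance as the confidence weight, and the numerical values are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: fuse one grid cell's readings from a sensor and its
# neighbors into a single estimate. Each reading's confidence (weight) is
# taken here as the inverse of its measurement noise variance.
def fuse_cell(measurements, variances):
    weights = [1.0 / v for v in variances]       # confidence of each reading
    total = sum(weights)                         # fused overall confidence
    estimate = sum(w * z for w, z in zip(weights, measurements)) / total
    return estimate, total

# Example: three sensors observe the same cell with different noise levels;
# the low-noise reading dominates the fused value.
value, conf = fuse_cell([2.1, 1.9, 2.4], [0.1, 0.2, 0.5])
```

Weighting by inverse variance means noisier sensors contribute less, which is the standard minimum-variance way to combine independent scalar measurements.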
We then created two so-called consensus filters. We designed filter 1 to estimate the value of the field at each time step, together with a confidence in that estimate. We used filter 2 to bring the confidences of neighboring sensors into agreement. This process is known as the spatial estimate phase. As each sensor node moved, it built multiple spatial estimates of each cell, each with an associated confidence. These estimates were subsequently fused iteratively through a weighted-average protocol in a 'temporal' estimate phase. To fully cover the field, we devised a path-planning strategy in which a virtual leader guides the MSN along a zig-zag path.
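The consensus idea behind both filters can be sketched with a standard discrete-time average-consensus update, in which each node repeatedly moves its state toward those of its graph neighbors until all nodes agree. The step size, ring topology, and initial values below are illustrative assumptions, not the paper's actual filter gains or network.

```python
# Hypothetical sketch of an average-consensus iteration: each node nudges its
# state toward its neighbors' states; on a connected graph all states converge
# to a common value (here, the average of the initial states).
def consensus_step(x, neighbors, eps=0.2):
    return [xi + eps * sum(x[j] - xi for j in neighbors[i])
            for i, xi in enumerate(x)]

# Four nodes on a ring, starting from different local estimates.
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [1.0, 2.0, 4.0, 3.0]
for _ in range(50):
    x = consensus_step(x, nbrs)
# The states approach the average (2.5) of the initial values.
```

Each node uses only its neighbors' states, which is what makes this kind of filter fully distributed: no sensor ever needs a global view of the network.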
Finally, we developed a distributed flocking control algorithm, which mimics the movement of a shoal of fish, to enable the MSN to track its virtual leader and move at a common speed without collisions. In this algorithm, the mobile sensor nodes form a lattice in which each sensor may have up to six equally spaced neighbors, a configuration achieved through forces of attraction and repulsion. Figure 2 shows our progress in mapping a scalar field with a seven-sensor network moving through a rectangular field. We found that the scalar field could be modeled by multiple Gaussian distribution functions.
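The attraction/repulsion mechanism behind such lattice flocking can be sketched as a pairwise rule: sensors closer than a desired spacing repel each other, and sensors farther apart attract, so neighbors settle at the desired distance. The linear spring form and the parameter values below are illustrative assumptions, not the paper's exact potential function.

```python
import math

# Hypothetical sketch of a pairwise flocking force on sensor i due to
# sensor j: negative magnitude pushes i away from j (repulsion, when the
# pair is closer than the desired spacing d), positive pulls i toward j
# (attraction, when the pair is farther apart than d).
def pairwise_force(p_i, p_j, d=1.0, k=0.5):
    dx, dy = p_j[0] - p_i[0], p_j[1] - p_i[1]
    dist = math.hypot(dx, dy)
    mag = k * (dist - d)                 # linear "spring" about spacing d
    return (mag * dx / dist, mag * dy / dist)

# Two sensors closer than the desired spacing push apart...
fx, fy = pairwise_force((0.0, 0.0), (0.5, 0.0))
# ...and two sensors farther apart pull together.
gx, gy = pairwise_force((0.0, 0.0), (2.0, 0.0))
```

Summing such forces over all neighbors, plus a term tracking the virtual leader, is the usual recipe by which a flock settles into the hexagonal (up-to-six-neighbor) lattice described above.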
In summary, we have described a cooperative sensing and control algorithm that enables an MSN to build a map of an unknown scalar field. The proposed distributed sensor fusion algorithm consists of two different filters that find consensus on the estimates and the confidences among sensor nodes. In the future, we plan to implement this algorithm on real sensor nodes. We will also investigate how formation of the network and motion planning affect sensing quality.
Weihua Sheng, Hung La
School of Electrical and Computer Engineering
Oklahoma State University
Weihua Sheng received his PhD in electrical and computer engineering from Michigan State University (2002). His current research interests lie in the general area of intelligent sensing, computation, control, and their applications. His research is supported by the National Science Foundation, the Department of Defense, the Department of Defense Experimental Program to Stimulate Competitive Research, and the Department of Transportation.
Hung La is a PhD student. His research interests include mobile sensor networks and embedded systems.
1. I. Hussein, A Kalman filter-based control strategy for dynamic coverage control, Proc. Am. Control Conf., pp. 3271-3276, 2007.
2. H. G. Tanner, R. A. Cortez, R. Lumia, Distributed robotic radiation mapping, Int'l Symp. Exp. Robotics, pp. 147-156, 2009.
3. J. Choi, S. Oh, R. Horowitz, Distributed learning and cooperative control for multi-agent systems, Automatica 45, no. 12, pp. 2802-2814, 2009.
4. F. Zhang, N. E. Leonard, Cooperative filters and control for cooperative exploration, IEEE Trans. Automat. Control 55, no. 3, pp. 650-663, 2010.
5. T. H. Chung, V. Gupta, J. W. Burdick, R. M. Murray, On a decentralized active sensing strategy using mobile sensor platforms in a network, 43rd IEEE Conf. Decision Control, pp. 1914-1919, 2004.