For a long time, seeing through walls remained the prerogative of science fiction heroes. Recent research on wireless systems, however, has moved this capability toward reality. These systems operate by emitting radio frequency (RF) signals that traverse walls and reflect off different objects in the environment (e.g., the human body). By capturing and analyzing these reflections, such systems allow us to sense the environment, even through walls, and thus provide access to new types of visual information.
Previous efforts on imaging objects behind occlusions fall into two broad categories. First, research in the graphics and vision communities has demonstrated that hidden shapes may be imaged by using light that bounces off corner reflectors in a scene.1–3 This imaging approach, however, requires the placement of corner reflectors. In addition, the hidden shape must remain fully static during the acquisition period. In the second category, radar-based systems are used to detect humans through walls.4–8 Unfortunately, these systems have very limited resolution. In particular, they can detect a person behind a wall as a point in space, but they cannot recover any of his or her body features.
To transcend these limitations, we developed RF-Capture,9 the first system that can capture a human figure (coarse skeleton) when the person is fully occluded (i.e., in the absence of any path for visible light). The development of this system is part of our ongoing research, in which we have already demonstrated that it is possible to extract and exploit RF reflections to enable powerful applications. These applications range from seeing through walls with WiFi4 to non-contact sensing of human heartbeats.10
The major challenge in the development of RF-Capture arises from the fact that not all body parts reflect RF signals back to a wireless device. In particular, RF signals that traverse walls have wavelengths of many centimeters, i.e., larger than the surface roughness of human body parts. At these frequencies, each body part therefore acts as a specular (mirror-like) reflector, and may deflect the signal away from the device rather than back to it. At every point in time, the RF device thus captures signal reflections from only a subset of the human body parts, and it lacks the semantics to determine which body part is reflecting the signal back at that instant. Furthermore, as a person moves, the reflecting limbs vary. For example, at one point, a person's left hand may reflect the signal back (but not his right hand or head). At other times, however, his head may reflect the signal back (but neither of his hands).
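The wavelength-versus-roughness argument above can be made concrete with the classical Rayleigh roughness criterion, which says a surface reflects specularly when its height variation is smaller than roughly an eighth of the wavelength. The sketch below is illustrative only: the 6 GHz carrier and the millimeter-scale skin roughness are assumed example values, not measurements from RF-Capture.

```python
import math

# Rayleigh roughness criterion: a surface reflects specularly (like a mirror)
# when its height variation h satisfies h < lambda / (8 * cos(theta)), where
# theta is the angle of incidence. Example values below are assumptions.

C = 3e8  # speed of light (m/s)

def is_specular(freq_hz, roughness_m, incidence_deg=0.0):
    """Return True if a surface of the given roughness reflects specularly."""
    wavelength = C / freq_hz
    threshold = wavelength / (8 * math.cos(math.radians(incidence_deg)))
    return roughness_m < threshold

# At ~6 GHz (a wall-penetrating band) the wavelength is ~5 cm, far larger than
# millimeter-scale body roughness, so the body behaves like a mirror.
print(is_specular(6e9, 1e-3))    # RF band: specular reflection
print(is_specular(600e12, 1e-3)) # visible light: diffuse scattering
```

This is why a camera sees the body as a diffuse object while an RF device sees it as a collection of small mirrors, only some of which happen to face the device at any instant.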
To overcome these challenges, we introduced two main algorithmic innovations for RF-Capture. The first is a coarse-to-fine algorithm that efficiently scans 3D space to look for RF reflections of various human limbs, and generates 3D snapshots of the recorded reflections. The second component exploits the fact that consecutive RF snapshots tend to expose different body parts and diverse perspectives of the same body part (because of human motion). This component therefore extracts features from RF snapshots across time to identify human body parts, and then stitches these parts together to recover the human figure.
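The coarse-to-fine idea can be sketched as follows: evaluate reflection power on a coarse 3D grid, then re-scan a finer grid around the strongest response. This is a minimal illustration, not RF-Capture's implementation; the `reflected_power` function is a synthetic stand-in for the power a real system would obtain via antenna-array beamforming, and the reflector location is made up.

```python
import numpy as np

# Coarse-to-fine 3D scan sketch. `reflected_power` is a synthetic stand-in:
# in a real system this value would come from beamforming the antenna array
# toward each candidate location.

TRUE_REFLECTOR = np.array([0.3, -0.2, 1.5])  # hypothetical limb location (m)

def reflected_power(points):
    # Synthetic power profile that peaks at the reflector location.
    return np.exp(-np.sum((points - TRUE_REFLECTOR) ** 2, axis=1) / 0.02)

def scan(center, half_width, step):
    """Evaluate power on a cubic grid and return the strongest location."""
    axes = [np.arange(c - half_width, c + half_width + 1e-9, step)
            for c in center]
    grid = np.stack(np.meshgrid(*axes, indexing="ij"), axis=-1).reshape(-1, 3)
    return grid[np.argmax(reflected_power(grid))]

# Coarse pass over a 2 m cube, then a fine pass around the coarse peak.
coarse = scan(center=[0.0, 0.0, 1.0], half_width=1.0, step=0.25)
fine = scan(center=coarse, half_width=0.25, step=0.02)
print(np.round(fine, 2))
```

The payoff is computational: two small grids locate the peak with centimeter resolution, whereas a single fine grid over the whole space would require orders of magnitude more power evaluations.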
The setup of RF-Capture is shown in Figure 1, along with the recovered figure of a person behind a wall. This output is shown as a heat map, where the background is navy blue, and the various body parts (reflectors) are red, orange, and yellow. We can leverage captured figures such as this to deliver novel capabilities (e.g., identifying users behind a wall). In particular, we can incorporate the captured figure into a classifier (principal component analysis and support vector machine) to identify different subjects from behind a wall. Our results demonstrate that the classification accuracy is more than 90% when we distinguish between 14 users.9 In another application, we used RF-Capture to trace human limbs. Specifically, we have shown that our system can be used to track the palm of a user to within a couple of centimeters. For instance, we can trace letters that a user ‘writes’ in the air, from behind a wall (see Figure 2).
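The identification step above (PCA followed by a support vector machine) can be sketched in miniature. In this simplified stand-in, the heat maps are synthetic 16x16 blobs rather than real RF snapshots, and a nearest-centroid rule in PCA space replaces the SVM so the sketch needs only NumPy; none of the sizes or thresholds come from the RF-Capture paper.

```python
import numpy as np

# Simplified sketch of figure-based identification: project heat-map
# "figures" onto principal components, then classify. RF-Capture uses
# PCA + SVM; a nearest-centroid rule stands in for the SVM here.
# All data below is synthetic, not real RF snapshots.

rng = np.random.default_rng(0)

def make_figures(n, offset):
    """Synthetic 16x16 heat maps: a per-user template plus noise."""
    template = np.zeros((16, 16))
    template[4:12, 4 + offset:8 + offset] = 1.0  # crude "body" blob
    return template.ravel() + 0.1 * rng.standard_normal((n, 256))

train = np.vstack([make_figures(20, 0), make_figures(20, 4)])
labels = np.array([0] * 20 + [1] * 20)

# PCA via SVD on mean-centered data; keep the top 10 components.
mean = train.mean(axis=0)
_, _, vt = np.linalg.svd(train - mean, full_matrices=False)
components = vt[:10]
proj = (train - mean) @ components.T

# One centroid per user in the low-dimensional PCA space.
centroids = np.array([proj[labels == c].mean(axis=0) for c in (0, 1)])

def identify(figure):
    """Return the index of the user whose centroid is nearest."""
    p = (figure - mean) @ components.T
    return int(np.argmin(np.linalg.norm(centroids - p, axis=1)))

print(identify(make_figures(1, 4)[0]))
```

The design point is the same as in the full system: PCA compresses each figure into a handful of coefficients that capture body shape, and the classifier then separates users in that compact space.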
Figure 1. Setup of the RF-Capture system for sensing a human figure behind a wall. The device is placed behind a wall, from where it emits low-power radio signals. These signals traverse the wall and reflect off different objects in the environment, including the human body. The device then captures and analyzes these reflections so that a coarse human figure can be recovered. The output of the analysis is a heat map (shown on the right), in which the background is shown in navy blue and the various reflectors (i.e., limbs) are red, orange, and yellow.
Figure 2. The output of RF-Capture (blue) and the Kinect sensor for Xbox (red) for two sample experiments. In these tests, a human subject traced the letters ‘S’ and ‘U’ in mid-air, from behind a wall.
In summary, we have developed a new radio frequency sensing system—RF-Capture—which is the first device capable of capturing a hidden human figure. Our system marks an important step toward motion capture through occlusions. We are in the process of commercializing this research for the elderly monitoring market. Our approach uses wireless signal reflections to trace the 3D motion of the elderly throughout their home (even through walls). This enables motion to be monitored, and falls to be automatically detected, without requiring seniors to hold or wear any device. Beyond elderly monitoring, our research has various applications in smart environments, gaming, and virtual reality.
Massachusetts Institute of Technology
Fadel Adib is a PhD candidate in electrical engineering and computer science. His research involves wireless networks and sensing systems. In 2014 he was included on Technology Review's list of the world's top 35 innovators under 35, and in 2015, on the Forbes list of 30 under 30.
1. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, R. Raskar, Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging, Nat. Commun. 3, p. 745, 2012. doi:10.1038/ncomms1747
2. A. Kirmani, T. Hutchison, J. Davis, R. Raskar, Looking around the corner using transient imaging, IEEE Int'l Conf. Comp. Vision 12, p. 159-166, 2009.
3. F. Heide, L. Xiao, W. Heidrich, M. B. Hullin, Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors, IEEE Conf. Comp. Vision Pattern Recognit., p. 3222-3229, 2014. doi:10.1109/CVPR.2014.418
4. F. Adib, D. Katabi, See through walls with wi-fi!, Proc. Assoc. Comp. Machin. Spec. Interest Group Data Commun. (SIGCOMM), 2013.
5. T. S. Ralston, G. L. Charvat, J. E. Peabody, Real-time through-wall imaging using an ultrawideband multiple-input multiple-output (MIMO) phased array radar system, IEEE Int'l Symp. Phased Array Syst. Technol., p. 551-558, 2010. doi:10.1109/ARRAY.2010.5613314
6. G. L. Charvat, L. C. Kempel, E. J. Rothwell, C. M. Coleman, E. Mokole, An ultrawideband (UWB) switched-antenna-array radar imaging system, IEEE Int'l Symp. Phased Array Syst. Technol., p. 543-550, 2010. doi:10.1109/ARRAY.2010.5613313
7. F. Adib, Z. Kabelac, D. Katabi, R. C. Miller, 3D tracking via body radio reflections, USENIX Symp. Network. Syst. Design Implem., 2014.
8. F. Adib, Z. Kabelac, D. Katabi, Multi-person localization via RF body reflections, USENIX Symp. Network. Syst. Design Implem., 2015.
9. F. Adib, C.-Y. Hsu, H. Mao, D. Katabi, F. Durand, Capturing the human figure through a wall, Proc. ACM SIGGRAPH Asia 34, p. 219, 2015. doi:10.1145/2816795.2818072
10. F. Adib, H. Mao, Z. Kabelac, D. Katabi, R. C. Miller, Smart homes that monitor breathing and heart rate, Proc. Annu. ACM Conf. Human Factors Comp. Syst. 33, 2015. doi:10.1145/2702123.2702200