
Defense & Security

Compressive urban sensing

Persistent surveillance in urban environments requires efficient sensing operations that can image sparse scenes for quick turnaround and reliable situational awareness.
27 September 2012, SPIE Newsroom. DOI: 10.1117/2.1201209.004413

The detection, localization, and tracking of targets in urban canyons and inside enclosed structures using radio-frequency sensors are pertinent to a variety of civil and military applications. The ultimate goal is to achieve situational awareness in a fast and reliable manner. One challenge to achieving that goal is the growing demand on radar systems to deliver high-resolution images and more accurate information. This demand in turn increases the number of data samples that must be recorded, stored, and subsequently processed.

The emerging field of compressive sensing (CS), which enables reconstruction of a sparse signal from far fewer non-adaptive measurements than conventional sampling requires, provides an alternative for data reduction in radar imaging without compromising image quality. For persistent surveillance in urban environments, these techniques offer efficient sensing operations that culminate in quick turnaround and reliable, actionable intelligence. We have devised CS-based approaches that streamline data acquisition with fewer space-time samples. Our methods also provide high-resolution imaging for detection and localization of both stationary and moving targets in challenging urban environments.
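To illustrate the core CS idea, the following sketch recovers a sparse scene vector from far fewer random measurements than unknowns. Orthogonal matching pursuit (OMP) serves here as a simple stand-in for the sparse solver; the measurement matrix, dimensions, and solver choice are all illustrative assumptions, not the specific algorithm used in our work.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # select the column most correlated with the current residual
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the columns chosen so far
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(1)
N, M, K = 64, 32, 3                       # scene size, measurements, sparsity
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.uniform(1.0, 2.0, K)  # sparse scene
A = rng.normal(size=(M, N)) / np.sqrt(M)  # random measurement matrix, M << N
y = A @ x                                 # compressive measurements
x_hat = omp(A, y, K)                      # x_hat matches x once the support is found
```

With half as many measurements as unknowns, the sparse scene is recovered essentially exactly; the data reduction is what makes compression in acquisition possible.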

Figure 1. Change-detected images of a person making a slight upward movement of the head. Top: Image obtained using a full data set and conventional image-formation scheme. Bottom: Image obtained using 5% of the data volume and sparsity-driven imaging.

For moving target indication, we combine sparsity-driven radar imaging with change detection.1,2 We first apply change detection to subtract data frames acquired by the imaging radar over successive probes of the scene. This step mitigates the heavy clutter caused by strong reflections from exterior and interior walls and removes stationary objects inside the enclosed structure, rendering a densely populated scene sparse. We then apply a sparsity-driven image reconstruction scheme to recover the scene image. Together, change detection and sparsity-driven imaging allow us to exploit compression in both data collection and processing. We have evaluated the performance of this scheme using real data for a variety of human motions, ranging from translational motions, such as walking, to sudden short movements of the limbs, head, and torso. In every case, the result is a 'clean,' enhanced image with successful localization of the moving targets and a significant reduction in data volume.
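The two-step scheme can be sketched as follows. The dense clutter, the target positions, and the use of OMP as the sparsity-driven solver are illustrative assumptions, not our exact experimental setup; the point is that differencing two probes of a dense scene leaves a sparse residual that a few compressive measurements can reconstruct.

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse solver, standing in for sparsity-driven image reconstruction."""
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(2)
N, M = 100, 30                        # image pixels, compressive measurements
A = rng.normal(size=(M, N)) / np.sqrt(M)

clutter = rng.normal(size=N)          # walls + stationary objects (dense)
scene1 = clutter.copy()
scene1[40] += 3.0                     # target before a slight movement
scene2 = clutter.copy()
scene2[41] += 3.0                     # target after the movement
y1, y2 = A @ scene1, A @ scene2       # two successive probes of the scene

dy = y2 - y1                          # change detection on the raw data frames
# the differenced scene is only 2-sparse, so few measurements now suffice
diff_hat = omp(A, dy, 2)              # localizes the movement at pixels 40 and 41
```

Neither frame alone is sparse enough for CS at this measurement budget; it is the differencing step that makes the reduced data volume adequate.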

Figure 1 provides change-detected images of a human making a slight head movement while sitting in a populated room. We obtained the top image using full data volume and a conventional image-formation algorithm, whereas for the bottom image we used only 5% of the data volume and a sparsity-driven scene reconstruction.

Figure 2. Images of a single target inside a room. The front wall (indicated by the lower red dashed rectangle in each image) is made of 20cm-thick solid concrete blocks, whereas the back wall (dashed rectangle at top) is a 30cm-thick reinforced concrete wall. The solid rectangle indicates the true target position. Left: Image obtained using the full data set and conventional image-formation scheme without wall clutter mitigation. Right: Image made using 5% of the data volume together with joint wall clutter mitigation and compressive sensing (CS) scene reconstruction.

Figure 3. Reconstructions of a scene consisting of a single stationary target behind a 15cm-thick solid concrete block wall using 20% of the data volume. Top: Image obtained using classical CS. Bottom: Image shows the partial-sparsity-based reconstruction of the sparse part of the scene.

For imaging stationary targets behind walls, access to a background scene without the targets of interest present is generally not feasible. Consequently, the clutter caused by wall reflections must be suppressed by means other than change detection. Existing wall mitigation approaches assume access to the full data volume, which contradicts the underlying premise of CS. We have combined wall clutter mitigation with CS using a reduced set of data measurements.3 Specifically, we have shown that direct application of wall clutter mitigation techniques remains effective, provided that the same reduced set of frequencies or time samples is used at each antenna position.

However, having the same frequency observations or time samples may not always be possible owing to spectrum occupancy by competing wireless services or intentional interference. For such cases, we apply CS individually at each antenna, using a reduced set of randomly selected frequencies, to reconstruct the corresponding range profile. The Fourier transform of the range profile for each antenna then provides the clutter mitigation methods with the scene response at the same set of frequencies for all antennas. Figure 2 shows images of a single target inside a room. We obtained the left image using the full data set and conventional image formation without wall clutter mitigation. For the right-hand image, we used 5% of the data volume together with combined wall mitigation and CS scene reconstruction.
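A minimal sketch of this per-antenna recovery follows, assuming a partial-DFT measurement model and OMP as the CS solver (both illustrative choices). Each antenna observes a different random subset of frequencies, recovers its sparse range profile, and then evaluates the full frequency response, so all antennas end up with the scene response on the same frequency grid.

```python
import numpy as np

def omp(A, y, k):
    """Complex-valued OMP used as the per-antenna CS solver."""
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1], dtype=complex)
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(3)
N, M, K = 64, 24, 2                             # range bins, usable freqs, targets
n = np.arange(N)
F = np.exp(-2j * np.pi * np.outer(n, n) / N)    # full DFT (frequency) matrix

r = np.zeros(N, dtype=complex)
r[[10, 30]] = [1.0, 0.8]                        # sparse range profile

full_band = {}
for label in ("ant0", "ant1"):
    # each antenna misses different frequencies (e.g. due to interference)
    freqs = rng.choice(N, M, replace=False)
    y = F[freqs] @ r                            # reduced-frequency measurements
    r_hat = omp(F[freqs], y, K)                 # CS range-profile recovery
    full_band[label] = F @ r_hat                # same full frequency grid for all
```

Once both antennas share the same full-band response, the wall clutter mitigation methods can be applied exactly as they would be with complete data.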

Instead of using wall clutter mitigation as a preprocessing step to image formation, we have also applied the idea of partial sparsity to through-the-wall imaging under reduced data volume.4 Partially sparse reconstruction considers the case where it is known beforehand that the scene being imaged consists of a sparse part and a dense part. For through-the-wall imaging, the approach translates to scene reconstruction involving a few stationary targets of interest when the building layout is assumed known. This information (thickness, extent, and location of the walls) may be available either through building blueprints or from prior surveillance operations. In our case, we treated the dense part of the image corresponding to the exterior and interior walls as known.
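One way to exploit partial sparsity can be sketched as below: the known dense (wall) support is projected out of the measurements, the sparse target part is recovered by CS, and the wall amplitudes are then refit. The matrices, dimensions, and OMP solver are illustrative assumptions, not our exact formulation.

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse solver for the target (sparse) part of the scene."""
    residual = y.copy()
    support = []
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(4)
M, Nd, Ns, K = 40, 8, 64, 2        # measurements, wall pixels, scene pixels, targets
Ad = rng.normal(size=(M, Nd))      # columns for the known dense (wall) part
As = rng.normal(size=(M, Ns))      # columns for the sparse (target) part

x_d = rng.uniform(1.0, 2.0, Nd)    # strong wall returns (dense, support known)
x_s = np.zeros(Ns)
x_s[[12, 40]] = [1.0, 0.7]         # two stationary targets (sparse)
y = Ad @ x_d + As @ x_s

# project out the known dense subspace, then run CS on what remains
P = np.eye(M) - Ad @ np.linalg.pinv(Ad)
x_s_hat = omp(P @ As, P @ y, K)
# refit the wall amplitudes once the targets are explained
x_d_hat, *_ = np.linalg.lstsq(Ad, y - As @ x_s_hat, rcond=None)
```

Because the wall pixels are confined to a known low-dimensional subspace, the projection removes their contribution without any background subtraction, leaving a standard sparse recovery problem for the targets.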

Figure 3 shows the results of partially sparse scene reconstruction using real data measurements. We obtained the top image by classical CS, which failed to detect the target due to the strong wall clutter. The bottom image shows the partial-sparsity-based reconstruction of the sparse part of the scene, which successfully localized the target.

In summary, providing reliable, real-time situational awareness is of primary importance in urban sensing applications. The massive amounts of data collected and processed by high-resolution imaging systems pose numerous challenges to achieving this objective. We have developed methods and algorithms that bring us a step closer to persistent, real-time radar surveillance in urban environments. We are currently pursuing CS-based approaches for efficient urban sensing operations that determine building layouts and achieve enhanced target detection and localization through multipath exploitation.

Fauzia Ahmad, Moeness Amin
Villanova University
Villanova, PA

Fauzia Ahmad received her MS and PhD in electrical engineering in 1996 and 1997, respectively, both from the University of Pennsylvania. She is currently a research associate professor and director of the Radar Imaging Lab at the Center for Advanced Communications at Villanova University. She is a senior member of SPIE. She also chairs the SPIE Compressive Sensing conference and serves on the technical program committee of the SPIE Radar Sensor Technology conference.

Moeness Amin received his PhD in electrical engineering from the University of Colorado (1984). He has been on the faculty of the Department of Electrical and Computer Engineering at Villanova University since 1985. In 2002, he became director of the Center for Advanced Communications. He is a Fellow of SPIE and serves as a member of the technical program committee for the SPIE Compressive Sensing conference and the Wireless Sensing, Localization, and Processing conference.

1. F. Ahmad, M. G. Amin, Sparsity-based change detection of short human motion for urban sensing, IEEE SAM'12, pp. 421-424, 2012.
2. M. G. Amin, F. Ahmad, W. Zhang, A compressive sensing approach to moving target indication for urban sensing, IEEE RadarCon'11, pp. 509-512, 2011.
3. E. Lagunas, M. Amin, F. Ahmad, M. Najar, Wall mitigation techniques for indoor sensing within the CS framework, IEEE SAM'12, pp. 213-216, 2012.
4. F. Ahmad, M. G. Amin, Partially sparse reconstruction of behind-the-wall scenes, Proc. SPIE 8365, p. 83650W, 2012. doi:10.1117/12.919527