
Proceedings Paper

Immersive remote monitoring of urban sites
Author(s): Rakesh Kumar; Harpreet S. Sawhney; Aydin Arpa; Supun Samarasekera; Manoj Aggrawal; Stephen Hsu; D. Nister; Keith Hanna

Paper Abstract

In a typical security and monitoring system, a large number of networked cameras are installed at fixed positions around a site under surveillance. There is generally no global view or map that shows the guard how the views of different cameras relate to one another. Individual cameras may be equipped with pan, tilt, and zoom capabilities, and the guard may be able to follow an intruder with one camera and then pick him up with another. But such tracking can be difficult, and handoff between cameras disorienting. The guard does not have the ability to continually shift his viewpoint. Moreover, current systems do not scale with the number of cameras: the system becomes more unwieldy as cameras are added. In this paper, we present the system and key algorithms for remote immersive monitoring of an urban site using a blanket of video cameras. The guard monitors the world through a live 3D model, which is constantly updated from different directions using the multiple video streams. The world can be monitored remotely from any virtual viewpoint: the observer can view the entire scene from afar to get a bird's eye view, or fly/zoom in to see activity of interest up close. A 3D site model of the urban area is constructed and used as the glue for combining the multiple video streams. Moreover, each video camera has smart image processing associated with it, which detects moving and new objects in the scene and recovers their 3D geometry, as well as the pose of the camera with respect to the world model. Each video stream is overlaid on top of the 3D model using the recovered pose. Virtual views of the scene are generated by combining the various video streams, the background 3D model, and the recovered 3D geometry of foreground objects. The moving objects are highlighted on the 3D model and serve as cues the operator uses to direct his viewpoint.
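To make the geometric step concrete, below is a minimal sketch (not from the paper) of the projection the abstract relies on: once a camera's pose with respect to the world model is recovered, 3D site-model vertices can be projected into the live video frame, so the frame can be draped onto the model and rendered from any virtual viewpoint. A standard pinhole camera model is assumed; the function name and all numbers are illustrative only.

```python
# Sketch of projective texturing: map live video onto a 3D site model
# using a recovered camera pose. Assumes a pinhole camera with
# intrinsics K and world-to-camera pose (R, t); values are made up.
import numpy as np

def project_to_image(points_w, K, R, t):
    """Project Nx3 world points into Nx2 pixel coordinates.

    points_w : Nx3 array of 3D model vertices in world coordinates.
    K        : 3x3 camera intrinsic matrix.
    R, t     : world-to-camera rotation (3x3) and translation (3,).
    """
    points_c = points_w @ R.T + t   # transform world -> camera frame
    uv = points_c @ K.T             # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective divide

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                       # camera aligned with world axes
t = np.array([0.0, 0.0, 10.0])      # camera 10 m in front of the model
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
print(project_to_image(vertices, K, R, t))
# Each resulting (u, v) indexes the live video frame; the frame is
# draped onto the model triangle by triangle, and a virtual view is
# then rendered from the textured model at any chosen viewpoint.
```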

Paper Details

Date Published: 6 August 2002
PDF: 11 pages
Proc. SPIE 4741, Battlespace Digitization and Network-Centric Warfare II, (6 August 2002); doi: 10.1117/12.478715
Author Affiliations:
Rakesh Kumar, Sarnoff Corp. (United States)
Harpreet S. Sawhney, Sarnoff Corp. (United States)
Aydin Arpa, Sarnoff Corp. (United States)
Supun Samarasekera, Sarnoff Corp. (United States)
Manoj Aggrawal, Sarnoff Corp. (United States)
Stephen Hsu, Sarnoff Corp. (United States)
D. Nister, Sarnoff Corp. (United States)
Keith Hanna, Sarnoff Corp. (United States)


Published in SPIE Proceedings Vol. 4741:
Battlespace Digitization and Network-Centric Warfare II
Raja Suresh; William E. Roper, Editor(s)

© SPIE.