
Proceedings Paper

Video coding based on pre-attentive processing
Author(s): Cagatay Dikici; H. Isil Bozma

Paper Abstract

Attentive robots have visual systems with fovea-periphery distinction and saccadic motion capability. Previous work has shown that the spatial and temporal redundancy thus present can be exploited in video coding/streaming algorithms, yielding considerable bandwidth efficiency. In this paper, we present a complete framework for real-time video coding with integrated pre-attentive processing and show that the areas of greatest interest are ensured of being processed in greater detail. The first step is pre-attention, where the goal is to fixate on the most interesting parts of the incoming scene using a measure of saliency. The construction of the pre-attention function can vary depending on the set of visual primitives used. Here, we use Cartesian and non-Cartesian filters and build a pre-attention function for a specific problem -- namely, video coding in applications such as robot-human tracking or video-conferencing. Using the most salient and distinguishing filter responses as input, the parameters of a neural network are trained with the resilient back-propagation (Rprop) algorithm under supervised learning. These parameters are then used to construct the pre-attentive function. Comparative results indicate that even with a very limited amount of learning, robust performance can be achieved.
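The training step the abstract describes — filter responses in, a learned saliency score out, weights updated by resilient back-propagation — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the data here is synthetic (the real inputs would be Cartesian/non-Cartesian filter-bank responses), the model is a single-layer logistic scorer rather than the paper's network, and the update rule is the iRprop- variant of resilient back-propagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for filter responses: each row is a vector of
# filter outputs for one image patch; the label is 1 for "salient"
# patches, 0 otherwise.
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

# Rprop state: per-weight step sizes adapted from gradient-sign history.
w = np.zeros(8)
step = np.full(8, 0.1)
prev_grad = np.zeros(8)
ETA_PLUS, ETA_MINUS = 1.2, 0.5     # standard Rprop growth/shrink factors
STEP_MAX, STEP_MIN = 1.0, 1e-6

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid saliency scores
    return X.T @ (p - y) / len(y)        # gradient of the log-loss

for _ in range(100):
    g = grad(w)
    same = g * prev_grad                 # >0: same sign, <0: sign flip
    step = np.where(same > 0, np.minimum(step * ETA_PLUS, STEP_MAX), step)
    step = np.where(same < 0, np.maximum(step * ETA_MINUS, STEP_MIN), step)
    g = np.where(same < 0, 0.0, g)       # iRprop-: skip update on a flip
    w -= np.sign(g) * step               # update uses only the gradient sign
    prev_grad = g

accuracy = np.mean((1.0 / (1.0 + np.exp(-(X @ w))) > 0.5) == (y > 0.5))
```

Because Rprop uses only the sign of each partial derivative, it is insensitive to gradient magnitude, which is one reason the paper's claim of robustness under very limited learning is plausible even for a small training set.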

Paper Details

Date Published: 25 February 2005
PDF: 9 pages
Proc. SPIE 5671, Real-Time Imaging IX, (25 February 2005); doi: 10.1117/12.602122
Author Affiliations:
Cagatay Dikici, Bogaziçi Univ. (Turkey)
INSA de Lyon (France)
H. Isil Bozma, Bogaziçi Univ. (Turkey)


Published in SPIE Proceedings Vol. 5671:
Real-Time Imaging IX
Nasser Kehtarnavaz; Phillip A. Laplante, Editor(s)

© SPIE.