
Proceedings Paper

Fast FFT-based distortion-invariant kernel filters for general object recognition
Author(s): Rohit Patnaik; David Casasent

Paper Abstract

General object recognition involves recognizing an object in a scene in the presence of several distortions and when its location is not known. Since the location of the test object in the scene is unknown, a classifier must be applied at different locations of the object over the test input. In this scenario, distortion-invariant filters (DIFs) are attractive, since they can be applied efficiently at all shifts using the fast Fourier transform (FFT). A single DIF handles different object distortions (e.g., all aspect views and some range of scale and depression angle). In this paper, we present a new approach that combines DIFs with the kernel technique (to form "kernel DIFs"), addresses the need for fast on-line filter shifts, and improves performance. We consider polynomial and Gaussian kernels (polynomial results are emphasized here). We consider kernel versions of the synthetic discriminant function (SDF) filter and of DIFs that minimize an energy function, such as the minimum average correlation energy (MACE) filter. We provide insight into, and compare, several different formulations of kernel DIFs. We emphasize proper formulations of kernel DIFs and provide data in many cases to show that they perform better. Kernel SDF filters are the most computationally efficient, and we thus emphasize them. We use the performance of the minimum noise and correlation energy (MINACE) filter as the baseline against which we compare kernel SDF filter results. We consider the classification of two true-class objects and the rejection of unseen clutter and unseen confuser-class objects, with full 360° aspect-view distortions and a range of scale distortions present (shifts of all test images are addressed for the first time for kernel DIFs); we use CAD (computer-aided design) infrared (IR) data to synthesize objects with the necessary distortions, and we use only problematic (blob) real IR clutter data.
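The FFT-based shift evaluation the abstract relies on can be illustrated with a minimal sketch (not the authors' code; the scene, filter, and sizes below are hypothetical): applying a correlation filter at every shift of the input reduces to a pointwise product in the frequency domain followed by an inverse FFT, rather than an explicit sliding-window search.

```python
import numpy as np

def correlate_fft(scene, filt):
    """Apply a filter at every shift of the scene via FFT-based
    cross-correlation (illustrative sketch only)."""
    S = np.fft.fft2(scene)
    # Zero-pad the filter to the scene size before transforming.
    H = np.fft.fft2(filt, s=scene.shape)
    # Circular cross-correlation = IFFT of S times conj(H).
    return np.real(np.fft.ifft2(S * np.conj(H)))

# Toy example: an 8x8 "object" embedded in a 64x64 scene,
# detected with a trivial matched template.
scene = np.zeros((64, 64))
scene[20:28, 30:38] = 1.0
filt = np.ones((8, 8))
plane = correlate_fft(scene, filt)
peak = np.unravel_index(np.argmax(plane), plane.shape)  # (20, 30)
```

A real DIF would replace the trivial template with a filter synthesized from many distorted training views, but the per-shift application cost is identical: two forward FFTs, one pointwise product, and one inverse FFT, independent of the number of shifts tested.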

Paper Details

Date Published: 19 January 2009
PDF: 18 pages
Proc. SPIE 7252, Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques, 725202 (19 January 2009); doi: 10.1117/12.805411
Author Affiliations:
Rohit Patnaik, Carnegie Mellon Univ. (United States)
David Casasent, Carnegie Mellon Univ. (United States)


Published in SPIE Proceedings Vol. 7252:
Intelligent Robots and Computer Vision XXVI: Algorithms and Techniques
David P. Casasent; Ernest L. Hall; Juha Röning, Editor(s)

© SPIE.