
Proceedings Paper

Point feature matching adopting Walsh transform
Author(s): El-Sayed H. El-Konyaly; Sabry Fouad Saraya; Wael Wageeh Abd Almageed Al-Khazragy

Paper Abstract

This paper introduces a new method for matching corresponding feature points across two images containing a moving rigid object. Two successive time-varying images are used. Edge points are first extracted using a 3 by 3 Laplacian mask. The Walsh transform is then applied to the feature points in both images. The Walsh transform is chosen over other orthogonal transforms because of its computational simplicity and because its coefficients have a direct interpretation in terms of the information contained in the spatial domain. Two premises are applied as matching rules: the first constrains candidate matches by the speed of the object and the imaging system, while the second selects the best match from the set of candidate matches. Unlike other matching techniques, the computational complexity of the proposed technique does not grow with the number of detected feature points in either image. This characteristic gives the technique great flexibility. Experimental results are given and assessed in terms of both accuracy and computational complexity.
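The sketch below is a minimal Python illustration of the kind of pipeline the abstract describes: 3 by 3 Laplacian edge extraction, a Walsh-Hadamard signature computed on a patch around each feature point, a speed-limited search window (the first premise), and best-candidate selection by minimum signature distance (the second premise). The patch size, edge threshold, search radius, and distance measure are illustrative assumptions, not values from the paper, and the brute-force candidate search here does not reproduce the paper's claimed complexity property.

```python
# Hypothetical sketch of the matching pipeline outlined in the abstract.
# Patch size (8), threshold (40), and search radius (10 px) are assumptions.
import numpy as np

def laplacian_edges(img, thresh=40.0):
    """Return (row, col) coordinates whose 3x3 Laplacian response exceeds thresh."""
    k = np.array([[0,  1, 0],
                  [1, -4, 1],
                  [0,  1, 0]], dtype=float)
    padded = np.pad(img.astype(float), 1, mode='edge')
    resp = np.zeros(img.shape, dtype=float)
    for dr in range(3):                      # direct 3x3 convolution via shifts
        for dc in range(3):
            resp += k[dr, dc] * padded[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return np.argwhere(np.abs(resp) > thresh)

def walsh_signature(img, r, c, n=8):
    """2-D Walsh-Hadamard coefficients of an n x n patch centred on (r, c)."""
    h = np.array([[1.0]])
    while h.shape[0] < n:                    # build an n x n Hadamard matrix
        h = np.block([[h, h], [h, -h]])
    half = n // 2
    patch = img[r - half:r + half, c - half:c + half].astype(float)
    return (h @ patch @ h.T).ravel() / n     # flattened coefficient vector

def match_points(img1, img2, max_disp=10, n=8):
    """Match each feature in img1 to the closest-signature feature in img2
    lying within max_disp pixels (speed premise), keeping the best candidate."""
    half = n // 2
    def in_bounds(pts, shape):
        return [tuple(p) for p in pts
                if half <= p[0] < shape[0] - half and half <= p[1] < shape[1] - half]
    pts1 = in_bounds(laplacian_edges(img1), img1.shape)
    pts2 = in_bounds(laplacian_edges(img2), img2.shape)
    sig2 = {p: walsh_signature(img2, *p, n=n) for p in pts2}
    matches = []
    for p in pts1:
        s1 = walsh_signature(img1, *p, n=n)
        cands = [q for q in pts2
                 if abs(q[0] - p[0]) <= max_disp and abs(q[1] - p[1]) <= max_disp]
        if cands:                            # best-match premise: minimum distance
            best = min(cands, key=lambda q: np.linalg.norm(s1 - sig2[q]))
            matches.append((p, best))
    return matches
```

The Hadamard matrix built by repeated block doubling gives the (unordered) Walsh basis, which keeps the signature computation to additions and subtractions only; that cheapness is the property the abstract cites as the reason for preferring the Walsh transform over other orthogonal transforms.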

Paper Details

Date Published: 26 September 1997
PDF: 11 pages
Proc. SPIE 3208, Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling, (26 September 1997); doi: 10.1117/12.290330
Author Affiliations:
El-Sayed H. El-Konyaly, Mansoura Univ. (Egypt)
Sabry Fouad Saraya, Mansoura Univ. (Egypt)
Wael Wageeh Abd Almageed Al-Khazragy, Mansoura Univ. (Egypt)


Published in SPIE Proceedings Vol. 3208:
Intelligent Robots and Computer Vision XVI: Algorithms, Techniques, Active Vision, and Materials Handling
David P. Casasent, Editor(s)

© SPIE.