
Proceedings Paper

Experimental approach for the evaluation of neural network classifier algorithms

Paper Abstract

The purpose of this paper is to demonstrate a new benchmark for comparing the rate of convergence of neural network classification algorithms. The benchmark produces datasets with controllable complexity that can be used to test an algorithm. The dataset generator uses random numbers and linear normalization to generate the data. In the case of a one-layer perceptron, the output datasets are sensitive to the weight and bias of the perceptron. A Matlab-implemented algorithm analyzed the sample datasets and the benchmark results. The results demonstrate that the convergence time varies with selected specifications of the generated dataset. This benchmark and the generated datasets can be used by researchers who work on neural network algorithms and are looking for a straightforward, flexible dataset with which to examine and evaluate the efficiency of neural network classification algorithms.
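
As a rough illustration of the kind of benchmark the abstract describes, the Matlab sketch below generates a random two-dimensional dataset, applies a linear (min-max) normalization, labels the points with a chosen perceptron weight and bias, and counts the epochs a one-layer perceptron needs to converge. The dimensionality, boundary parameters, sample count, and learning rate are assumptions for illustration only; the paper's actual generator specification is not given here.

% Hypothetical sketch of a benchmark dataset generator and a one-layer
% perceptron convergence test. All parameters below are assumptions,
% not the paper's actual specification.
rng(0);                          % reproducible random numbers
n = 500;                         % number of samples (assumed)
X = rand(n, 2);                  % random 2-D points

% Linear (min-max) normalization of each feature to [0, 1]
X = (X - min(X)) ./ (max(X) - min(X));

% Labels from a chosen separating line: w'*x + b > 0 -> class +1
w_true = [1; -1];                % perceptron weight (assumed)
b_true = 0.1;                    % perceptron bias (assumed)
y = sign(X * w_true + b_true);
y(y == 0) = 1;

% Train a one-layer perceptron and count epochs until convergence
w = zeros(2, 1); b = 0; eta = 0.1;
for epoch = 1:1000
    errors = 0;
    for i = 1:n
        yhat = sign(X(i, :) * w + b);
        if yhat == 0, yhat = 1; end
        if yhat ~= y(i)
            w = w + eta * y(i) * X(i, :)';   % perceptron update rule
            b = b + eta * y(i);
            errors = errors + 1;
        end
    end
    if errors == 0, break; end
end
fprintf('Converged after %d epochs\n', epoch);

Varying the assumed weight and bias (and hence how close the boundary sits to the data) changes the number of epochs reported, which mirrors the abstract's point that convergence time depends on the specifications of the generated dataset.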

Paper Details

Date Published: 30 September 2003
PDF: 7 pages
Proc. SPIE 5267, Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision, (30 September 2003); doi: 10.1117/12.515826
Author Affiliations:
Masoud Ghaffari, Univ. of Cincinnati (United States)
Ernest L. Hall, Univ. of Cincinnati (United States)


Published in SPIE Proceedings Vol. 5267:
Intelligent Robots and Computer Vision XXI: Algorithms, Techniques, and Active Vision
David P. Casasent; Ernest L. Hall; Juha Roning, Editor(s)

© SPIE