
Proceedings Paper

Filtered kernel probabilistic neural network
Author(s): George W. Rogers; Carey E. Priebe; Jeffrey L. Solka

Paper Abstract

Probabilistic neural networks (PNNs) build internal density representations based on the kernel (Parzen) estimator and use Bayesian decision theory to form arbitrarily complex decision boundaries. As in the classical kernel estimator, training is performed in a single pass over the data and asymptotic convergence is guaranteed. One important factor affecting convergence is the kernel width. Theory provides an optimal width only for normally distributed data, and the problem becomes acute in multivariate cases. In this paper we present an asymptotically optimal method of setting kernel widths for multivariate Gaussian kernels based on the theory of filtered kernel estimators, and we show how this can be realized as a filtered kernel PNN architecture. Performance comparisons are made with competing methods.
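To make the abstract's starting point concrete, below is a minimal sketch of the classical Parzen-window PNN classifier that the paper builds on (not the authors' filtered-kernel method): each training point contributes an isotropic Gaussian kernel, and a test point is assigned to the class with the largest average kernel response, i.e. the Bayes rule under equal priors. The function name and the fixed scalar width `sigma` are illustrative; choosing that width well is exactly the problem the paper addresses.

```python
import numpy as np

def pnn_classify(X_train, y_train, X_test, sigma=0.5):
    """Classical Parzen-window PNN: estimate each class-conditional
    density as an average of isotropic Gaussian kernels centered on
    that class's training points, then pick the class with the
    highest density (Bayes decision, equal priors)."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        # squared Euclidean distance from x to every training point
        d2 = np.sum((X_train - x) ** 2, axis=1)
        # Gaussian kernel response (normalization cancels in argmax)
        k = np.exp(-d2 / (2.0 * sigma ** 2))
        # average response per class ~ class-conditional density at x
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```

Note that a single shared `sigma` is the simplest choice; the filtered kernel estimator of the paper instead derives asymptotically optimal widths for multivariate Gaussian kernels.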

Paper Details

Date Published: 1 September 1993
PDF: 11 pages
Proc. SPIE 1962, Adaptive and Learning Systems II, (1 September 1993); doi: 10.1117/12.150592
Author Affiliations:
George W. Rogers, Naval Surface Warfare Ctr. (United States)
Carey E. Priebe, Naval Surface Warfare Ctr. (United States)
Jeffrey L. Solka, Naval Surface Warfare Ctr. (United States)

Published in SPIE Proceedings Vol. 1962:
Adaptive and Learning Systems II
Firooz A. Sadjadi, Editor(s)

© SPIE.