
Proceedings Paper

Probabilistic neural network with reflected kernels
Author(s): George W. Rogers; Carey E. Priebe; Jeffrey L. Solka

Paper Abstract

Probabilistic neural networks (PNNs) build internal density representations based on the kernel, or Parzen, estimator and use Bayesian decision theory to form arbitrarily complex decision boundaries. As with the classical kernel estimator, training is performed in a single pass over the data and asymptotic convergence is guaranteed. Asymptotic convergence, while necessary, says little about finite-sample estimation errors, which can be quite large. One problem arises with either the kernel estimator or the PNN when one or more of the densities being estimated has a discontinuity. This commonly leads to an expected L∞ error in the pdf on the order of the size of the discontinuity, which can in turn lead to significant classification errors. By using the method of reflected kernels, we have developed a PNN model that does not suffer from this problem. The theory of reflected kernel PNNs, along with their relation to reflected kernel Parzen estimators, is presented, together with finite sample examples.
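The reflected kernel idea can be illustrated with a short sketch. The Python code below is not from the paper; it is a minimal illustration assuming a Gaussian kernel and a single known boundary (at zero by default), with illustrative function names (`reflected_parzen`, `pnn_classify`) chosen here for clarity.

```python
import numpy as np

def reflected_parzen(x, samples, h, boundary=0.0):
    """Parzen (kernel) density estimate on [boundary, inf) using a
    Gaussian kernel reflected about the boundary.  The reflection term
    puts back the probability mass an ordinary kernel estimate leaks
    past the boundary, removing the O(f(boundary)) bias there."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    u = (x[:, None] - samples[None, :]) / h                # ordinary kernel
    v = (x[:, None] + samples[None, :] - 2 * boundary) / h # reflected kernel
    k = np.exp(-0.5 * u**2) + np.exp(-0.5 * v**2)
    dens = k.sum(axis=1) / (len(samples) * h * np.sqrt(2 * np.pi))
    dens[x < boundary] = 0.0   # estimate is zero outside the support
    return dens

def pnn_classify(x, class_samples, h, priors=None):
    """One-pass PNN decision rule: choose the class whose prior-weighted
    reflected-kernel density estimate is largest at x (Bayes rule)."""
    if priors is None:
        priors = [1.0 / len(class_samples)] * len(class_samples)
    scores = [p * reflected_parzen(x, s, h)
              for p, s in zip(priors, class_samples)]
    return np.argmax(np.stack(scores, axis=0), axis=0)

# Usage sketch: exponential densities are discontinuous at zero, the
# case where an unreflected estimator incurs the boundary error.
rng = np.random.default_rng(0)
class0 = rng.exponential(1.0, 200)
class1 = rng.exponential(3.0, 200)
labels = pnn_classify([0.1, 2.0, 5.0], [class0, class1], h=0.3)
```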

Paper Details

Date Published: 1 September 1993
PDF: 11 pages
Proc. SPIE 1962, Adaptive and Learning Systems II, (1 September 1993); doi: 10.1117/12.150591
Author Affiliations:
George W. Rogers, Naval Surface Warfare Ctr. (United States)
Carey E. Priebe, Naval Surface Warfare Ctr. (United States)
Jeffrey L. Solka, Naval Surface Warfare Ctr. (United States)

Published in SPIE Proceedings Vol. 1962:
Adaptive and Learning Systems II
Firooz A. Sadjadi, Editor

© SPIE.