
Proceedings Paper

Embedding domain information in backpropagation
Author(s): George M. Georgiou; Cris Koutsougeras

Paper Abstract

The search space for backpropagation (BP) is usually of high dimensionality, which slows convergence. Moreover, local minima abound, and thus the danger of falling into a shallow one is great. To limit the search space of BP in a sensible way, we incorporate domain knowledge into the training process. A two-phase backpropagation algorithm is presented. In the first phase, the weight vectors of the first (and possibly only) hidden layer are constrained to point in fixed directions, for example those of linear discriminants or principal components; the directions are chosen based on the problem at hand. In the second phase, the constraints are removed and the standard backpropagation algorithm takes over to further minimize the error function. The first phase swiftly places the weight vectors in a good position (relatively low error), which serves as the initialization for standard backpropagation. Other speed-up techniques can be used in both phases. The generality of its application, its simplicity, and the shorter training time it requires make this approach attractive.
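As a rough illustration of the scheme the abstract describes, the sketch below trains a single-hidden-layer network in two phases: in phase one the hidden weight vectors are held to fixed directions (here, principal components of the data) and only their magnitudes and the remaining free parameters are updated; in phase two the constraint is dropped and ordinary backpropagation continues from that starting point. This is not the authors' code; the sigmoid activations, mean-squared error, plain gradient descent, and names such as `two_phase_bp` are assumptions made for the example.

```python
# Minimal sketch of two-phase backpropagation with direction-constrained
# hidden weights. Assumptions: one hidden layer, sigmoid units, MSE loss,
# PCA directions as the domain-derived constraint.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def two_phase_bp(X, Y, n_hidden, lr=0.1, phase1_epochs=200, phase2_epochs=800, seed=0):
    rng = np.random.default_rng(seed)
    n_out = Y.shape[1]

    # Domain knowledge: fix the hidden weight *directions* to the leading
    # principal components of the (centered) inputs. Requires n_hidden <= rank(X).
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    D = Vt[:n_hidden].T                       # (n_in, n_hidden), unit-norm directions
    s = rng.normal(scale=0.1, size=n_hidden)  # trainable magnitudes along those directions

    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
    b2 = np.zeros(n_out)

    def forward(W1):
        H = sigmoid(X @ W1 + b1)
        O = sigmoid(H @ W2 + b2)
        return H, O

    # Phase 1: only the scales s (plus biases and output weights) are updated,
    # so each hidden weight vector keeps its fixed direction d_j.
    for _ in range(phase1_epochs):
        W1 = D * s                            # column j = s_j * d_j
        H, O = forward(W1)
        dO = (O - Y) * O * (1 - O)            # MSE with sigmoid outputs
        dH = (dO @ W2.T) * H * (1 - H)
        gW1 = X.T @ dH                        # full gradient w.r.t. W1 ...
        gs = np.sum(gW1 * D, axis=0)          # ... projected onto the fixed directions
        s -= lr * gs / len(X)
        W2 -= lr * (H.T @ dO) / len(X)
        b2 -= lr * dO.mean(axis=0)
        b1 -= lr * dH.mean(axis=0)

    # Phase 2: remove the constraint; standard backpropagation on all weights,
    # starting from the (relatively low-error) phase-1 configuration.
    W1 = D * s
    for _ in range(phase2_epochs):
        H, O = forward(W1)
        dO = (O - Y) * O * (1 - O)
        dH = (dO @ W2.T) * H * (1 - H)
        W1 -= lr * (X.T @ dH) / len(X)
        W2 -= lr * (H.T @ dO) / len(X)
        b2 -= lr * dO.mean(axis=0)
        b1 -= lr * dH.mean(axis=0)

    return W1, b1, W2, b2
```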

Paper Details

Date Published: 20 August 1992
PDF: 7 pages
Proc. SPIE 1706, Adaptive and Learning Systems, (20 August 1992); doi: 10.1117/12.139948
Author Affiliations:
George M. Georgiou, Tulane Univ. (United States)
Cris Koutsougeras, Tulane Univ. (United States)


Published in SPIE Proceedings Vol. 1706:
Adaptive and Learning Systems
Firooz A. Sadjadi, Editor(s)
