
Proceedings Paper

Temporal coding in neural networks
Author(s): William E. Faller; Scott J. Schreck; M. W. Luttges

Paper Abstract

The 'rules' by which information is temporally encoded within biological and artificial neural networks remain poorly understood. To better understand the 'problem', we tested the temporal encoding and extrapolation (generalization) of artificial neural networks on the spiking patterns of 55 simultaneously recorded neurons. The results indicated that, to optimize performance, a 4-layer network works best for both feedforward and recurrent architectures. Further, the results indicate that both hidden layers should have roughly 2n units and that the learning rate between the input and first hidden layer should be roughly an order of magnitude larger than the learning rates for the subsequent layers. Interestingly, the number of hidden units in the final network is consistent with the known histology and function of the neural tissue from which the recordings were obtained. The results suggest that the fan-out architectures typically found in this neural tissue may be crucial to the ability of this organism to function well under novel conditions. These results also indicate that temporal data may be encoded best in a highly distributed fashion across large numbers of hidden units, with weighted interconnections that are largest, and more deterministic, in the early layers and smallest, and more statistical, in the final layers of the network. Further, the same architectural guidelines developed herein are shown to apply directly to other temporal problems in which the information is encoded in a spatially distributed fashion.
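
The architectural guidelines summarized above can be illustrated with a brief sketch. The following is a minimal illustration, not the authors' code: a 4-layer feedforward network with n inputs, two hidden layers of roughly 2n units each, and layer-wise learning rates in which the input-to-first-hidden rate is about an order of magnitude larger than the rates of the later layers. The specific layer sizes, rate values, and sigmoid/backpropagation details are assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the paper's implementation) of the
# architectural guidelines: 4 layers, two hidden layers of ~2n units,
# and a first-layer learning rate ~10x larger than the later layers.
import numpy as np

rng = np.random.default_rng(0)

n_in = 55                      # e.g. one input per recorded neuron (assumption)
n_hidden = 2 * n_in            # "roughly 2n units" per hidden layer
n_out = 1                      # single output unit (assumption)

# Layer-wise learning rates: first layer roughly an order of magnitude larger.
learning_rates = [0.5, 0.05, 0.05]

sizes = [(n_in, n_hidden), (n_hidden, n_hidden), (n_hidden, n_out)]
weights = [rng.normal(0.0, 0.1, size=s) for s in sizes]
biases = [np.zeros(s[1]) for s in sizes]

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    """Return the activations of every layer for one input batch."""
    activations = [x]
    for W, b in zip(weights, biases):
        activations.append(sigmoid(activations[-1] @ W + b))
    return activations

def train_step(x, target):
    """One backpropagation step using a different learning rate per layer."""
    acts = forward(x)
    # Output-layer error term (squared-error loss, sigmoid units).
    delta = (acts[-1] - target) * acts[-1] * (1.0 - acts[-1])
    for layer in reversed(range(len(weights))):
        grad_W = acts[layer].T @ delta
        grad_b = delta.sum(axis=0)
        if layer > 0:
            # Propagate the error term back through the pre-update weights.
            delta = (delta @ weights[layer].T) * acts[layer] * (1.0 - acts[layer])
        weights[layer] -= learning_rates[layer] * grad_W
        biases[layer] -= learning_rates[layer] * grad_b

# Toy usage: one batch of binary spike patterns mapped to scalar targets.
x = rng.integers(0, 2, size=(8, n_in)).astype(float)
y = rng.random((8, 1))
train_step(x, y)
```

Giving the input-to-first-hidden connections the largest rate lets the early, more deterministic weights adapt quickly while the later, more statistical layers change slowly, mirroring the weight distribution described in the abstract.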

Paper Details

Date Published: 19 August 1993
PDF: 11 pages
Proc. SPIE 1966, Science of Artificial Neural Networks II, (19 August 1993); doi: 10.1117/12.152620
Author Affiliations
William E. Faller, Univ. of Colorado/Boulder and Frank J. Seiler Research Lab./Air Force Academy (United States)
Scott J. Schreck, Frank J. Seiler Research Lab./Air Force Academy (United States)
M. W. Luttges, Univ. of Colorado/Boulder (United States)


Published in SPIE Proceedings Vol. 1966:
Science of Artificial Neural Networks II
Dennis W. Ruck, Editor
