
Proceedings Paper

Photonic Neurocomputers And Learning Machines
Author(s): Nabil H. Farhat

Paper Abstract

The study of complex multidimensional nonlinear dynamical systems, and the modeling and emulation of cognitive, brain-like processing of sensory information (neural network research), including the study of chaos and its role in such systems, would benefit immensely from the development of a new generation of programmable analog computers capable of carrying out collective, nonlinear, and iterative computations at very high speed. The massive interconnectivity and nonlinearity needed in such analog computing structures indicate that a mix of optics and electronics, mediated by a judicious choice of device physics, offers benefits for realizing networks with the following desirable properties: (a) large-scale nets, i.e., nets with a large number of decision-making elements (neurons); (b) modifiable structure, i.e., the ability to partition the net into any desired number of layers of prescribed size (number of neurons per layer) with any prescribed pattern of communication between them (e.g., feed-forward or feedback (recurrent)); (c) programmable and/or adaptive connectivity weights between the neurons for self-organization and learning; (d) support for both synchronous and asynchronous update rules; (e) high-speed update, i.e., neurons with microsecond response times to enable rapid iteration and convergence; (f) suitability for the study and evaluation of a variety of adaptive learning algorithms; (g) suitability for the rapid solution, by fast simulated annealing, of complex optimization problems of the kind encountered in adaptive learning, pattern recognition, and image processing. The aim of this paper is to describe recent efforts and progress made toward achieving these desirable attributes in analog photonic (optoelectronic and/or electron-optical) hardware that utilizes primarily incoherent light.
A specific example, the hardware implementation of a stochastic Boltzmann learning machine, is used as a vehicle for identifying generic issues and clarifying research and development areas for further advancement of the field, in particular the development of architectures and methodologies for learning in self-organizing networks that employ a new type of quasi-nonvolatile storage medium: electron-trapping material.
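The stochastic update and annealing process at the heart of a Boltzmann machine can be illustrated in software. The following is a minimal sketch, not the paper's optoelectronic implementation: binary neurons are updated one at a time, each turning on with a sigmoidal probability of its net input scaled by a temperature that is gradually lowered (simulated annealing), so the network settles toward low-energy states. The function names, cooling schedule, and parameter values are illustrative assumptions, not drawn from the paper.

```python
import numpy as np


def boltzmann_anneal(W, b, T0=10.0, T_min=0.1, cooling=0.9,
                     steps_per_T=50, seed=0):
    """Anneal a network of stochastic binary neurons (illustrative sketch).

    W : symmetric weight matrix with zero diagonal
    b : bias (threshold) vector
    The energy being minimized is E(s) = -0.5 * s.W.s - b.s.
    """
    rng = np.random.default_rng(seed)
    n = len(b)
    s = rng.integers(0, 2, n)          # random initial binary state
    T = T0
    while T > T_min:
        for _ in range(steps_per_T):
            i = rng.integers(n)        # pick a neuron at random (async update)
            u = W[i] @ s + b[i]        # net input from the other neurons
            # Stochastic rule: P(s_i = 1) = 1 / (1 + exp(-u / T)).
            # At high T the neuron is noisy; at low T it acts as a
            # near-deterministic threshold element.
            s[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-u / T)) else 0
        T *= cooling                   # geometric cooling schedule
    return s


def energy(W, b, s):
    """Energy of a binary state under the quadratic network energy function."""
    return -0.5 * s @ W @ s - b @ s
```

For a toy two-neuron net with an excitatory coupling and positive biases (W01 = 2, b = [1, 1]), the unique minimum-energy state is both neurons on, and the annealed network settles there.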

Paper Details

Date Published: 22 May 1990
PDF: 20 pages
Proc. SPIE 1150, Spatial Light Modulators and Applications III, (22 May 1990); doi: 10.1117/12.962192
Author Affiliations:
Nabil H. Farhat, University of Pennsylvania (United States)

Published in SPIE Proceedings Vol. 1150:
Spatial Light Modulators and Applications III
Uzi Efron, Editor(s)

© SPIE.