A new biologically inspired optical neural network may provide engineers with a powerful new tool for processing information. Instead of using relatively simple neurons and then relying on their number to provide complexity, Nabil Farhat of the University of Pennsylvania has developed a neuron that itself behaves in a complex way. This "bifurcation neuron" can switch between different modes of operation, including chaotic modes, with just small changes in the incoming signal. This introduces an element of unpredictability that allows the system to find unlearned solutions to input problems. According to Farhat, it is also possible that interactions between these neurons could provide a mechanism for the so-called higher brain functions, such as cognition, complex motor control, and perhaps consciousness.
The bifurcation neuron is similar to its more conventional counterparts in several ways. Like the neurons in many other artificial neural networks, it imitates its biological counterparts by using sequences, or trains, of electronic spikes to represent data. Ordinarily, the information is encoded not in the "height" of the spikes but in their frequency, so the neuron's output is determined by integrating the incoming signal over time. If the integral exceeds a certain value, the neuron fires.
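This conventional scheme can be sketched in a few lines of code. The leaky integrate-and-fire model below uses arbitrary illustrative parameters (the leak rate, input weight, and threshold are this sketch's assumptions, not values from Farhat's work):

```python
def integrate_and_fire(spike_times, dt=0.001, t_end=1.0,
                       leak=5.0, weight=0.3, threshold=1.0):
    """Return output spike times for a train of input spike times (in s)."""
    v, t, i, out = 0.0, 0.0, 0, []
    inputs = sorted(spike_times)
    while t < t_end:
        v -= leak * v * dt              # leaky decay of the integrated value
        while i < len(inputs) and inputs[i] <= t:
            v += weight                 # each arriving spike bumps it up
            i += 1
        if v >= threshold:              # integral exceeds threshold: fire
            out.append(t)
            v = 0.0                     # reset after firing
        t += dt
    return out

# A faster input train makes the neuron fire more often: the output rate
# encodes the input rate, but the individual arrival times are lost.
slow = integrate_and_fire([k * 0.05 for k in range(20)])
fast = integrate_and_fire([k * 0.01 for k in range(100)])
```

Note that only the rate of the input survives; two trains with the same rate but different internal timing would produce the same output, which is exactly the limitation discussed next.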
The problem with this conventional approach to neural networks is that important information is thrown away: the neuron knows how many spikes reached it in a given period, but not when each one arrived. This timing information can be crucially important, especially if many signals from a particular source arrive at the neuron at the same time. The fact that these spike trains are correlated, arriving in a particular pattern, can provide a valuable clue to their meaning. An analogy can be made between spike timing and optical phase. In photography, only the combined intensity of the light arriving at the film is recorded; the phase information is thrown away, and the result is a flat image. In holographic imaging, the phases of the incoming rays are allowed to interact through interference, and the result of that interaction is recorded. This is what encodes the image's third dimension, depth, and it dramatically increases the amount of information the medium can handle.
Though this may seem like an engineering approach, Farhat sees biological justification. In particular, the bifurcation neuron behaves much like the excitable membrane of the squid's axon, as modeled by Hodgkin and Huxley in 1952 and by the FitzHugh-Nagumo model. These show that the axon membrane becomes increasingly depolarized by the incoming signals (carried in the form of ions) until it reaches a threshold potential. At this point the potential abruptly inverts, producing the neuronal action potential and thus the output signal. Afterward the membrane undergoes a refractory period of no response, followed by another slow buildup. Clearly, a pulse arriving immediately after firing will have a completely different effect on the neuron's output than a pulse arriving immediately before.
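The threshold-and-recovery behavior these models describe can be illustrated with the standard FitzHugh-Nagumo equations. The parameter values below are common textbook choices, not figures taken from Farhat's work:

```python
def fhn_response(kick, steps=3000, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Perturb the resting membrane by `kick` and return the peak voltage."""
    v, w = -1.1994 + kick, -0.6243      # resting point of the model
    peak = v
    for _ in range(steps):
        dv = v - v**3 / 3.0 - w         # fast (membrane) variable
        dw = eps * (v + a - b * w)      # slow (recovery) variable
        v, w = v + dv * dt, w + dw * dt
        peak = max(peak, v)
    return peak

# Sub-threshold kicks relax quietly back to rest;
# supra-threshold kicks trigger the full action-potential excursion.
quiet = fhn_response(0.2)
spike = fhn_response(1.0)
```

The slow recovery variable is what produces the refractory period: immediately after a spike it holds the membrane down, so an identical input pulse has a completely different effect depending on when it arrives.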
According to Farhat, the real complexity comes from combining this behavior with the processing carried out by the neuron's receptors: the dendritic tree. Correlations, caused by synchronicity in the incoming signals, produce a periodic modulation at the excitable membrane. This gives rise both to complex ordered patterns of firing that are phase-locked to the periodic modulation and to disordered (chaotic) firing, depending on the amplitude and frequency of the modulation. In this way the bifurcation neuron detects coherence, or meaning, in arriving spike trains and encodes this information in its own output.
Initially, simple bifurcation neurons were implemented1 using a charging capacitor in parallel with a neon glow-discharge lamp (to provide nonlinearity), a resistor, and a light-emitting diode (see Fig. 1). Even without an input or stimulus, the circuit had its own oscillatory behavior. The capacitor would charge up until the voltage across the lamp reached the breakdown threshold and the gas ionized. This allowed current to flow through the lamp's branch of the circuit, lighting up the LED and discharging the capacitor; once the voltage fell below the extinction threshold, the lamp went out and the process began again. To represent the output of the dendritic tree, a periodic signal in the glow-lamp arm of the circuit is used to modulate the lamp's breakdown or extinction voltage. This signal has to interact with an already complex situation.
Figure 1. This simple neural circuit can produce extremely complex outputs if a periodic signal is used to modulate the glow lamp's extinction or breakdown voltages.
The effect of taking timing into account in this way is dramatic, as Fig. 2 demonstrates. The horizontal axis shows the frequency of the incoming periodic signal; the vertical axis shows the phase of the output spikes relative to that signal. Small changes in the input clearly produce huge shifts in the output behavior. For instance, frequencies below about 550 Hz produce periodic firing with two spikes per cycle. Just above this frequency, the output rapidly changes to chaotic firing, followed by further quasi-periodic and chaotic regions. Clean lines show where the output signal is cleanly phase-locked with the input; the fuzzy areas show where order and chaos mix.
Figure 2. Diagonal lines show how the neural output tends to phase-lock with the incoming dendritic oscillation. The fuzzy areas are regions of chaos and ordered chaos, where the neural output is effectively unpredictable.
In fact, the artificial bifurcation neuron that produced these results is considerably more sophisticated than the simple glow-lamp circuit: it was fabricated in analog VLSI using 11 transistors and one capacitor, as a precursor to a full neuronal array. Once this array has been fabricated, Farhat and his colleagues intend to interconnect the neurons optically using electron-trapping materials (ETMs). Farhat is perhaps best known for his collaboration with Demetri Psaltis, which produced the first optical neural network in the mid-1980s.
Farhat has already designed a network architecture that takes advantage of the unusual dynamics of ETMs. These are materials that contain two sets of impurities: one with an electron that is easily liberated (Eu2+) and another that provides a trap for it (Sm3+). On illumination with blue light, electrons are excited and either fall back to the Eu2+, producing orange fluorescence, or become trapped. On illumination with infrared, trapped electrons tunnel back to the Eu ions and fall into the ground state, again producing orange light. Electron beams can also be used in place of the blue wavelength, increasing the complexity of the interaction. Normally these materials are used in a read-write sequence, with the IR reading out what the blue records. When IR and blue light illuminate the ETM simultaneously, however, the dynamics become very complex, especially if one or both beams are changing with time.
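The write-then-read cycle can be caricatured with a toy rate equation for the trapped-electron population. The rate constants here are schematic, not measured material parameters, and the model ignores the orange fluorescence branch during writing:

```python
def etm_step(n_trapped, blue, ir, dt=0.01, k_trap=1.0, k_release=1.0):
    """Advance the trapped-electron fraction one step; return (n, orange)."""
    trap = k_trap * blue * (1.0 - n_trapped) * dt    # blue light: write
    release = k_release * ir * n_trapped * dt        # IR: read out / erase
    return n_trapped + trap - release, release       # each release emits orange

# Write with blue only, then read out with IR only.
n = 0.0
for _ in range(500):
    n, _ = etm_step(n, blue=1.0, ir=0.0)
written = n                              # traps nearly full after writing
orange_total = 0.0
for _ in range(500):
    n, orange = etm_step(n, blue=0.0, ir=1.0)
    orange_total += orange               # readout recovers the stored signal
```

With both beams on at once, the trap population is driven toward a balance that shifts as either beam changes, which hints at why simultaneous, time-varying illumination makes the real material's dynamics so rich.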
Because of this, Farhat determined that the ETMs could provide input and output (dendritic and synaptic) weights, nonlinearly coupling bifurcation neurons together. Ideally the neurons would be arranged monolithically on a chip, with input photodiode arrays and output LED or other modulator arrays integrated with them (see Fig. 3). The basic idea is that the ETM device in the top left arm of the system preprocesses inputs coming from the computer (left) and then feeds them into the neuron array (bottom right). The network's output (bottom left) is fed back into the network via another ETM device that provides the synaptic and dendritic responses.
Figure 3. Architecture of a large-scale optoelectronic pulsating neural network using two electron trapping material (ETM) image intensifiers. The neurons shown here are programmable unijunction transistors (PUT) that have photodiode array inputs (PDA) and light emitting diode (LED) outputs.
In a paper to be published in the Journal of Intelligent and Robotic Systems later this year, Farhat reveals that, in simulation at least, bifurcation neurons seem to network in very useful ways. In particular, under external input they seem to form nonlinearly interacting, phase-locked "netlets," or neuronal assemblies, with the chaotic periods serving as noise that helps the neurons find the right "answer." This behavior, says Farhat, is very robust at the netlet level, even if the responses of particular neurons in the netlet are imprecise.
This behavior seems to mimic that of cortical neurons and could be harnessed to equip machines with abilities that we take for granted. For instance, initial experiments with similar networks have shown that they can accurately discriminate between objects. In the future, more sophisticated systems might enable computers to fuse complex sensor information, understand what it is "seeing," and react appropriately.
1. N. H. Farhat and M. Eldafrawy, The bifurcating neuron: characterization and dynamics, SPIE Proc. 1773, pp. 22-24, 1992.
2. N. H. Farhat and Z. Wen, Large-scale photonic neural networks with biology-like processing elements: the role of electron trapping materials, SPIE Proc. 2565, 1995.
3. N. H. Farhat and E. del Moral Hernandez, Recurrent networks with recursive processing elements: paradigm for dynamical computing, SPIE Proc. 2824, 1996.
Sunny Bains is a technical journalist based in Edinburgh, UK.