Neural networks are part processor and part memory. They take multiple inputs and filter them through layers of neurons (which add incoming signals together) and interconnection weights (which multiply the signals passing between neurons) to produce various outputs. They are particularly important to intelligent computing because they allow machine learning.
Though they are finding more and more applications, the field of neural networks is still not mature, largely for lack of good hardware on which to implement large networks. Digital neurons have the disadvantage of being very big, which means you can get relatively few on a chip. They also suffer from poor interconnectivity: all connections between neurons must go through the small number of pins available on a chip package. And though you can get many times more analog neurons on a single chip (making each chip more functional), this actually makes the interconnection problem worse: the more neurons you have, the more pins you need for good interconnectivity and for many chips to operate efficiently as a single network. For this reason, optical neural networks have been a continuing area of investigation, and one that is likely to become increasingly important.
A recent paper from researchers at Philips Research Laboratories (Eindhoven, The Netherlands) demonstrates both the potential of using optics in neural nets and the possible pitfalls.1 Their novel system exploits two properties of lasers that are usually considered disadvantages: their sensitivity to feedback and the fact that they generally contain many different modes. How the researchers manipulate these properties to form a neural network is shown in figure 1.
Figure 1. Schematic of a four-neuron network based on manipulating the external feedback of an injection laser. Figure ©1996 IEEE.
The injection laser diode is configured to have an external cavity: everything between it and the mirror. The various wavelengths of light it produces (in this case, four) are split up by the grating and then collimated by the optics, so that they form four vertical stripes as they travel through a series of intensity masks. The first mask, the input, affects each of the laser-mode stripes identically, while the weight masks that follow attenuate each color differently. Exactly what these weights should be for a particular application can either be calculated or found through an iterative learning process.
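Conceptually, each mode's round-trip feedback is set by the input mask's transmission multiplied by that mode's weight-mask transmission, summed along the stripe. A minimal sketch of that arithmetic, with array sizes and values chosen purely for illustration (not taken from the paper):

```python
import numpy as np

# Illustrative only: 4 laser modes, one shared input mask,
# one weight mask per mode. Transmissions lie in [0, 1].
rng = np.random.default_rng(0)
n_modes, n_pixels = 4, 8

input_mask = rng.uniform(0, 1, n_pixels)               # shared by every mode stripe
weight_masks = rng.uniform(0, 1, (n_modes, n_pixels))  # one row per mode

# Light passing through both masks multiplies their transmissions;
# the light along each stripe sums on its way back to the laser.
feedback = weight_masks @ input_mask                   # shape (4,): one value per mode
print(feedback)
```

This is just the familiar weighted-sum stage of a neuron layer, computed in parallel by passive optics; the nonlinearity comes later, inside the laser.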
The multimode beam fed back into the laser from the external cavity has a very different ratio of spectral components than the one initially emitted. Because the laser is run near its threshold condition, its longitudinal modes must compete for the available energy. Strong external feedback for a particular mode enhances its ability to take that energy, at the expense of the modes with less feedback. It is this nonlinearity, caused by modal competition, that allows the system to act as a neural network.
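A crude way to picture this competition: the modes split a roughly fixed energy budget, and a small feedback advantage wins a disproportionate share. Treating the steady-state intensities as a sharply peaked softmax over feedback is my own stand-in for the real laser dynamics, not the authors' model, but it captures the winner-take-most character:

```python
import numpy as np

def mode_intensities(feedback, sharpness=30.0):
    """Toy model of modal competition near threshold: modes share a
    fixed energy budget, and the mode with the strongest external
    feedback captures a disproportionate fraction of it."""
    f = np.asarray(feedback, dtype=float)
    w = np.exp(sharpness * (f - f.max()))  # subtract max for numerical stability
    return w / w.sum()                     # normalized intensities, sum to 1

# A small feedback advantage for mode 2 translates into clear dominance.
print(mode_intensities([0.50, 0.52, 0.60, 0.48]))
```

The `sharpness` parameter is the knob standing in for how close to threshold the laser runs; the harder the modes compete, the more nonlinear the mapping from feedback to output.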
An example of the kind of results that a simple four-neuron network can produce is shown in figure 2. Four two-bit inputs were used as input masks in the external cavity of a GaAlAs injection laser. Each input has a positive bias (the lowest of the three bars is always "on") to allow pseudo-negative feedback. The result consists of three single-peak outputs: one corresponding functionally to an exclusive OR, another to an AND, and the third to a NOR.
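One way to see how a winner-take-most output can realize these functions, including the not-linearly-separable XOR: assign each logic function to a mode, and let the mode whose weighted feedback is largest for a given input "fire." The weights below are hand-picked for illustration (they are not the masks used in the experiment); the always-on bias bar is the first channel, which is how purely positive transmissions produce pseudo-negative behavior:

```python
import numpy as np

# Channels: [bias (always on), x1, x2]. Weights are transmissions >= 0;
# "negative" influence on a mode comes only from rivals' bias terms.
weights = {
    "NOR": np.array([0.6, 0.0,  0.0]),
    "XOR": np.array([0.2, 0.45, 0.45]),
    "AND": np.array([0.0, 0.6,  0.6]),
}

def winning_mode(x1, x2):
    inp = np.array([1.0, x1, x2])  # bias bar is always "on"
    feedback = {name: w @ inp for name, w in weights.items()}
    return max(feedback, key=feedback.get)  # modal competition: strongest wins

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", winning_mode(x1, x2))
# 0 0 -> NOR
# 0 1 -> XOR
# 1 0 -> XOR
# 1 1 -> AND
```

Because the output is a maximum over several linear responses rather than a single linear threshold, XOR becomes computable; that is precisely the work the modal-competition nonlinearity is doing.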
Figure 2. The experimental output of a GaAlAs laser-based network showing different output wavelengths representing various logical functions. Figure ©1996 IEEE.
In their paper, the Philips researchers are very optimistic about the speed of their system. Their existing network allows them to perform 10 giga-connections per second (GCPS), they say, and theory suggests that up to 100 tera-connections per second may be possible: an improvement of more than three orders of magnitude over electronic neural network performance. They also speculate about integrating both the laser and the external cavity on a single chip, about using an optically addressed spatial light modulator to allow the output of one laser network to serve as the input to another, and about the possibility of manipulating a hundred laser modes in a single laser.
But though they have demonstrated an interesting device, one that may well have some of the potential they claim for it, they have failed to answer some major questions about how these neurons will work as parts of systems. For instance, will the speed of the spatial light modulators (used as the input and weight masks) have to match the speed of the laser interaction to reach the speeds they project? How do they propose to integrate a laser, optics, and two SLMs on a single chip? And how will the output from one laser gain "access" to the spatial light modulator of another, given the external-cavity configuration?
That they haven't solved these problems says nothing bad about the quality of their work so far. It's just that there's more to designing a really good neural network than simply designing a really good neuron.
Figures courtesy IEEE from the reference below.
1. S. B. Colak, J. J. H. B. Schleipen, and C. T. H. Liedenbaum, "Neural network using longitudinal modes of an injection laser with external feedback," IEEE Transactions on Neural Networks, Vol. 7, No. 6, November 1996.
Sunny Bains is a technical journalist based in Edinburgh, UK.