Optoelectronics & Communications

Light Constructions - Better performance without wires: optical interconnects in computers

From OE Reports Number 158, February 1997
February 1997, SPIE Newsroom. DOI: 10.1117/2.6199702.0002

Using optics to replace wires within computers and other electronic systems is an idea whose time seems to have come: we have reached the stage where our effective computing power is limited not so much by the processors we use as by the links between them. The term "optical interconnects," however, can mean many different things. Implementations include both evolutionary and revolutionary systems using both freespace and waveguide optics (and everything in between). On one hand, fibers are starting to creep into essentially conventional electronic systems, replacing wires that were causing data bottlenecks. On the other, researchers are devising entirely new optical connection architectures, networks, and backplanes for future generations of machines. What all these workers seem to have in common is a requirement to think cheap, resistance from electronic engineers, and a certainty that optical interconnection is inevitable.

Wire to fiber

The reason why optical interconnects are becoming necessary in the first place is that two of the basic properties of wires, resistance and capacitance (RC), are proportional to their length. Resistance is the wire's opposition to having a current flow through it: the longer the wire, the more power has to be expended overcoming resistance. This is one of the reasons that fiber optics has completely taken over from copper cables for telecommunications transmission lines. Capacitance, on the other hand, is the ability to store charge. In order to transmit energy down a wire, this capacity has to be filled first. The wire can be compared to the water pipe under a sink: water is poured in from the top and, initially, nothing comes out at the other end because it's all trapped in the u-bend. Eventually, however, the u-bend fills up and the water starts to flow; after this point, all the water you pour in pushes an equal amount of water out.

The net result of all this is to slow the system down. It takes a fixed amount of time, known as the RC time constant, to overcome the resistance and capacitance of a given wire. Since there's no point in sending a second bit down the line before the first one has gotten through this barrier, the time constant places a fundamental limit on the bit rate that can be transmitted. Even worse, the maximum bit rate for a given system is determined by its worst, usually longest, wire. Beyond a certain point, therefore, there's no advantage to having increasingly fast devices: once the RC-constrained bit rate is reached, there's no way to get data in or out fast enough.
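The scaling argument above can be sketched in a few lines. Since both resistance and capacitance grow with length, the RC time constant grows with length squared, and the usable bit rate falls accordingly. The per-meter figures below are illustrative assumptions, not values from the article:

```python
# Sketch of how the RC time constant limits a wire's bit rate.
# r_per_m and c_per_m are assumed, representative values only.

def rc_time_constant(length_m, r_per_m=5.0, c_per_m=100e-12):
    """RC time constant (seconds) for a wire of the given length.

    Resistance and capacitance are each proportional to length,
    so their product grows as length squared.
    """
    return (r_per_m * length_m) * (c_per_m * length_m)

def max_bit_rate(length_m):
    """Rough upper bound: one bit per RC time constant."""
    return 1.0 / rc_time_constant(length_m)

# Doubling the wire length quadruples RC, cutting the usable bit
# rate by 4x -- which is why the worst (longest) wire sets the limit.
for length in (0.1, 0.2, 0.4):
    print(f"{length} m -> {max_bit_rate(length) / 1e9:.1f} Gbit/s")
```

The quadratic penalty is the point: a system's longest wire, not its fastest transistor, caps the achievable data rate.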

At the evolutionary end of optical interconnects, therefore, much effort is going into straightforwardly replacing wires with high-bandwidth optical links. For instance, at Honeywell Technology Center (Minneapolis, MN), work has concentrated on using conventional fibers and the optical waveguide version of the ribbon cable to transfer computer data.1 Though not necessarily very exciting, this approach has the very real advantage of fitting in with the way electrical and electronic engineers think: rather than designing a system around optics, they can simply use optics to prevent bottlenecks. Motorola has been working along these lines too, developing an optical link trademarked as the Optobus:2 a 10-channel bidirectional link, based on multimode fiber ribbons, that can transfer 1.5 Gbits per second, per link, per direction.

Optoelectronic solutions

Lucent Technologies (Holmdel, NJ) came into the field through communications. Electronically rerouting telephone signals involves detecting the light that travels down a fiber, converting it into an electrical signal that can be routed by the switch, and then changing it back into light to continue through the network. Over the last 10 to 15 years, Lucent (formerly Bell Labs) researchers have come up with a series of self electro-optic effect devices (SEEDs) that can both switch and detect incoming optical signals. Fabricated using a gallium arsenide (GaAs) process, the optical part of the device consists basically of a multiple-quantum-well modulator and a photodiode. But the real advantage of the Lucent approach is that the optical devices can be integrated with standard electronics, so the silicon can do the "thinking" while the GaAs handles communication.3 Though much heralded in the '80s, the technology is still maturing. For the last few years, Lucent has been acting as a foundry, making "optical chips" for researchers across the United States and Canada. Though this has been successful in research terms, the yield on these components is still very low.

One of the interesting areas in interconnects is how the communication is actually organized. SEEDs, for instance, are often used in freespace: a laser sends a read beam to a chip's modulator, which the chip switches on and off to transmit its own signal. The photodiode of another chip then detects this signal and turns it into an electronic one for further processing. A particularly neat way of implementing this kind of freespace system is being developed at the University of Texas at Austin. Instead of traveling through air, light is trapped by total internal reflection inside a sheet of glass. "Doors" into and out of this planar waveguide are provided by thin holograms recorded on the surface, and switching can be performed using, for instance, SEEDs index-matched to the sides. (Similar systems have been developed at Trinity College (Dublin, Ireland), the University of Arizona (Tucson, AZ), and what used to be Bell Labs.)
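The trapping works because any ray striking the glass surface at more than the critical angle is totally internally reflected. The article gives no refractive index, so the figure of 1.5 below is an assumed, typical value for glass:

```python
import math

# Total internal reflection keeps light bouncing inside the planar
# glass waveguide: rays hitting the surface beyond the critical
# angle (measured from the normal) cannot escape into the air.

def critical_angle_deg(n_core, n_outside=1.0):
    """Critical angle in degrees for a core of index n_core
    against an outside medium of index n_outside (air by default)."""
    return math.degrees(math.asin(n_outside / n_core))

# For typical glass (n ~ 1.5, an assumption) against air:
print(f"{critical_angle_deg(1.5):.1f} degrees")  # ~41.8
```

Any ray launched more steeply than about 42 degrees from the normal therefore stays in the sheet until a surface hologram couples it out.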


Figure 1. Schematic of the University of Texas' optical bidirectional backplane bus for a microprocessor system.

Using this planar waveguide technology, UTexas researchers have demonstrated a scheme for an optical backplane that they claim is compatible with IEEE standardized buses (see figure 1).4 Multiprocessor boards are built with lasers and receivers at the bottom of each column (vertical line) of chips. The laser beam is coupled into the glass through a hologram on the backplane surface. The light then travels, bouncing off the walls of the glass, until it is received by all the other boards. A message header is used to "address" the signal, so that each board knows whether a signal is meant for it or not. The same holograms "couple" the light in and then out again. This means that their position isn't critical and they can be relatively big and, so, easy to align.
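The header-addressing scheme described above can be modeled simply: every board physically receives every broadcast, and the header decides who keeps it. The class and message names below are my own illustration, not anything from the UTexas design:

```python
# Hypothetical model of broadcast-with-header addressing on an
# optical backplane bus. Names and message format are assumptions.

from dataclasses import dataclass

@dataclass
class Message:
    dest_board: int   # header: which board should accept the payload
    payload: bytes

class Board:
    def __init__(self, board_id):
        self.board_id = board_id
        self.inbox = []

    def receive(self, msg):
        # Every board sees the broadcast; only the addressed board
        # keeps the payload, the rest ignore it.
        if msg.dest_board == self.board_id:
            self.inbox.append(msg.payload)

def broadcast(boards, msg):
    """Model the optical bus: the light reaches all boards at once."""
    for board in boards:
        board.receive(msg)

boards = [Board(i) for i in range(4)]
broadcast(boards, Message(dest_board=2, payload=b"data"))
print([len(b.inbox) for b in boards])  # -> [0, 0, 1, 0]
```

Because selection happens electronically at the receiver, the optics only have to deliver light everywhere, which is what makes the loose hologram alignment tolerable.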

Another system for chip-to-chip and board-to-board communication is being developed at the NEC Research Institute (Princeton, NJ).5 This system is based on VCSELs (vertical-cavity surface-emitting lasers), but uses passive routing through air instead of glass. It is built around a novel computer architecture designed to be reconfigurable: it can be made to operate according to the same architecture as, for example, a Cray, a Connection Machine, or almost any other parallel computer. The first optical prototype was a freespace system connecting four processor boards, each with 16 processors. Each processor is attached to four lasers, one for each of the four boards; lenses, mirrors, and beamsplitters broadcast each laser's signal to the corresponding processor on the target board (see figure 2). For instance, if processor 14 on board 1 turns on laser number 3, the signal goes to processor 14 on board 3. If necessary, a message header can then route the data electronically to another board 3 processor.
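The optical routing in this prototype is fixed and can be written as a one-line mapping: laser k on any processor reaches the same-numbered processor on board k, with any further hop handled electronically. This sketch uses my own naming, not NEC's:

```python
# Sketch of the NEC prototype's fixed optical routing: laser k
# delivers light to the same-numbered processor on board k.
# Function and parameter names are my own, for illustration.

def optical_route(src_board, processor, laser):
    """Return (board, processor) that receives this laser's light.

    The source board doesn't affect the optical destination; the
    passive lens/mirror/beamsplitter train does the same mapping
    for every board.
    """
    return (laser, processor)

# The article's example: processor 14 on board 1 turns on laser 3,
# and the signal arrives at processor 14 on board 3.
print(optical_route(src_board=1, processor=14, laser=3))  # -> (3, 14)
```

Because the mapping is passive and identical for every source, reconfiguring the machine's architecture is purely a matter of which lasers each processor chooses to fire.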


Figure 2. NEC's optical system for connecting 64 processors spread out over four boards.

A more radical solution, using both fibers and freespace, has emerged from the University of Delft in the Netherlands.6 Abandoning the conventional idea of message-passing networks, this system has every processor talking to every other processor. Each processor sends out its information through an array of light-emitting diodes, one for each bit. This light is then captured by a polymer fiber-optic array and carried to a central node known as the Kaleidoscope. The fibers are organized so that their relative positions at the output are the same as at the input, effectively turning the data into a 2D image. In the Kaleidoscope, the fiber bundles are tiled to make one large 2D image, whose size depends on the number of bits output by each processor and the number of processors in the array. Using a faceted mirror and lens system, the light from the entire image is broadcast onto separate locations for each processor (see figure 3), coupled into a second fiber bundle, and then received so that the processor can access the data electronically.


Figure 3. University of Delft's Kaleidoscope prototype for imaging combined fiber inputs from all parallel processors for broadcast.
Assessing potential

Each of these systems has its advantages and disadvantages. The wire replacement strategy may turn out to be a transition technology, for instance, allowing engineers to get their "feet wet" with optics before they take the real plunge with something more daring. It is, however, more likely to fulfill real needs in the short term. SEEDs are likely to have some place in technology whatever happens, and have been extremely helpful in the development of the whole field, but they are still not practical for everyday systems. Of the various architectures available, the planar waveguide structure seems to have advantages of sturdiness that freespace doesn't, but how the waveguide is used is as important as the fact that it's used. Finally, the fully connected system has cost (the number of fibers is equal to the square of the number of processors times the number of bits per word) and alignment problems that will only be worth fixing if fully interconnected processors really are necessary. We shall see.
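The cost objection to the fully connected scheme follows directly from the formula quoted above (fibers = processors squared times bits per word), and a quick calculation shows how fast it bites. The word size of 32 bits is an assumption for illustration:

```python
# Fiber-count scaling for a fully connected system, using the
# article's formula: fibers = processors^2 * bits_per_word.

def fiber_count(processors, bits_per_word):
    """Fibers needed for every processor to see every processor."""
    return processors ** 2 * bits_per_word

# The quadratic term dominates quickly: doubling the processor
# count quadruples the fiber count. 32-bit words are assumed here.
for n in (16, 64, 256):
    print(f"{n:4d} processors -> {fiber_count(n, 32):,} fibers")
```

At 256 processors the scheme already needs over two million fibers, which is why the article concludes the approach only pays off if full interconnection is genuinely required.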

References

Please note that the following are just examples of papers in the area of optical interconnects that the groups mentioned have produced. There are many, often more recent, papers available.

1. J. Bristow, "Intra-computer optical interconnects: progress and challenges," Optical Interconnects and Packaging, SPIE Vol. CR62.

2. D. B. Schwartz et al., "A low-cost high-performance optical interconnect," IEEE Trans. on Components, Packaging, and Manufacturing Technology, Part B, Vol. 19, No. 3, August 1996.

3. A. L. Lentine et al., IEEE Photon. Tech. Lett., Vol. 8, February 1996.

4. S. Natarajan et al., "Bi-directional optical backplane bus for general purpose multi-processor board-to-board optoelectronic interconnects," J. Lightwave Tech., Vol. 13, No. 6, June 1995.

5. S. Araki et al., "Experimental free-space optical network for massively parallel computers," Applied Optics, Vol. 35, No. 8, 10 March 1996.

6. E. E. E. Frietman et al., "Parallel optical interconnects: implementation of optoelectronics in multiprocessor architectures," Applied Optics, Vol. 29, No. 8, 10 March 1990.


Sunny Bains
Sunny Bains is a technical journalist based in Edinburgh, UK.