
Optoelectronics & Communications

All-optical networks may one day form national backbone

An interview with Rajiv Ramaswami, Tellabs (Lisle, IL) and Chunming Qiao, SUNY (Buffalo, NY)

From OE Reports Number 188 - August 1999
31 August 1999, SPIE Newsroom. DOI: 10.1117/2.6199908.0002
Ramaswami and Qiao were interviewed for OE Reports by Frederic Su.

Figure 1. (a) Wavelength division multiplexing showing wavelengths being diffracted into a fiber and then diffracted out the other end. (b) A simple source can be the spectral slicing of a low-cost LED. The photo shows a partial cross-section of a single-mode fiber array and the graph shows the emission spectra of the LED and the passbands in the slices.

Ramaswami: Fundamentally, two things have improved. You're getting a combination of more wavelengths on the same fiber, which is WDM (wavelength division multiplexing, Figure 1), and also higher speeds or higher bit rates per wavelength.

In terms of WDM, in 1987 we could buy a bulk optic grating that could multiplex 20 channels onto a single fiber spaced 1 nm apart. Today, we can get 128 channels, each spaced 0.4 nm apart, through a 1550-nm window of a glass fiber.
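As a rough check on these numbers, the frequency spacing corresponding to a wavelength spacing near 1550 nm is c·Δλ/λ². A minimal Python sketch (the helper name is illustrative, not from the interview):

```python
# Convert a DWDM channel spacing given in nanometres to the equivalent
# frequency spacing in GHz, using df = c * dl / l^2.
C = 299_792_458.0  # speed of light, m/s

def spacing_ghz(spacing_nm, center_nm=1550.0):
    """Frequency spacing (GHz) for a wavelength spacing (nm) near center_nm."""
    return C * (spacing_nm * 1e-9) / (center_nm * 1e-9) ** 2 / 1e9

print(round(spacing_ghz(0.4)))   # 50  -- today's 0.4 nm spacing is ~50 GHz
print(round(spacing_ghz(1.0)))   # 125 -- the 1987-era 1 nm spacing
```

The 0.4 nm figure thus corresponds to roughly a 50 GHz grid near 1550 nm.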


Figure 2. (a) An arrayed waveguide grating multiplexer made using waveguides etched in silica on silicon. Light from the input waveguides is spread out by the input coupler into the arrayed waveguides and then coupled back to the output waveguides. Each waveguide in the middle section has a different pathlength. The wavelength of the light signal determines the interference between the signals coming out of the arrayed waveguides in the output coupler, and thus determines the output port that the signal exits on.

Figure 2. (b) A wavelength multiplexer/demultiplexer using multilayer dielectric thin-film filters.

Figure 2. (c) Optical add/drop elements based on fiber Bragg gratings. The circulator routes all wavelengths from left to right. (i) A drop element. λ2, carrying data, is reflected by the Bragg grating and is dropped. (ii) A combined add/drop element. In this case, in addition to what has occurred in (i), λ2 is added back to the network by a coupler. The added λ2 represents new data carried over the same physical wavelength.

For multiplexing and demultiplexing, there are really three technologies being used. One is called arrayed waveguide gratings (Figure 2a). These are waveguides etched in silica on silicon substrates, and we can typically get 30 to 64 channels. To get to 128 or an even larger number, you would combine such devices.

The second technology is a dielectric thin film filter (Figure 2b). You have a piece of glass where you put on multiple layers of thin films, which act as complex Fabry-Perot filters.

The third one is fiber Bragg gratings (Figure 2c). So, what you do here is take a photosensitive fiber and create a refractive index variation in the fiber by exposing it to interfering beams of light. That then works as an in-band, in-fiber grating to give you channel selectivity.

In terms of speed, when you talked to Charlie Kao in 1987, most of the transmission was probably running at 620 megabits per second to about a gigabit per second. Today, you're seeing transmissions run at 2.5 to 10 gigabits per second.

What's improved, fundamentally, is the commercial availability of electronics and optics to handle 10 gigabits per second, with 40 gigabits per second in the research lab. The bottleneck is not the laser but the modulator, that is, how you turn the light on and off. First, you can do it by electronically modulating the current into the laser, which is fine at moderate bit rates, up to 2 gigabits per second, or over short distances, up to a couple of hundred kilometers.

For several hundred kilometers to a thousand kilometers and/or if you want to transmit at 10 gigabits per second, then you have to use an external modulator to turn the light on and off, which is a separate device that sits in front of the laser.

One way is to use a lithium niobate modulator. By applying a varying voltage, you change the refractive index of the lithium niobate crystal. Your data comes in as ones and zeros, and you use that to drive the voltage between two states, high and low. The applied voltage causes a refractive index change, which in turn causes the input light either to be coupled or not coupled to the output by a directional coupler. The modulation of the voltage is done by electronics.
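The on-off keying just described can be sketched in a few lines. The voltage levels and the mapping are purely illustrative, not a real device API:

```python
# Minimal sketch of external on-off keying: data bits drive the modulator
# voltage between two states, and the light is either coupled to the
# output ("on") or not ("off").
V_HIGH, V_LOW = 5.0, 0.0  # assumed drive voltages, illustrative only

def modulate(bits):
    """Map each data bit to a drive voltage and the resulting light level."""
    drive = [V_HIGH if b else V_LOW for b in bits]
    light = [1 if v == V_HIGH else 0 for v in drive]  # coupled vs. not coupled
    return drive, light

drive, light = modulate([1, 0, 1, 1])
print(light)  # [1, 0, 1, 1] -- the bit pattern appears as light pulses
```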

You said there were two methods of modulation.

Ramaswami: Yes, lithium niobate is probably the best way to do this, but it's also the most expensive because it is a separate device that cannot be packaged together with the laser. People are already using electro-absorption (EA) modulators, which are available at 2.5 gigabits per second and are now just beginning to come out at 10 Gbits/sec. The advantage is that the fabrication techniques for semiconductor EA modulators are fairly similar to those for semiconductor lasers and, therefore, the modulator can be integrated with the laser in the same package, so you don't need a separate device.

EA modulators rely on a particular absorption effect in semiconductor material where an applied voltage (or electric field) causes incoming light energy to be absorbed.

We hear a lot about switches and routers for the Internet. But what exactly do they do?

Ramaswami: An optical switch is the basic element that goes into an optical crossconnect or an optical router. The simplest switch is an on-off device, but larger N×N switches can be built that switch a signal from any input port to any output port.

At a network element level, I like to think of these things as either a router or a crossconnect. The big difference between the two is that a cross-connect is really a circuit switch device. You don't switch packets around; you're switching connections or lightpaths.

A router, on the other hand, switches data packets instead of circuits. What's the difference between circuit switching and packet switching? A good example of a circuit-switched network is the telephone network. Once you make a call, you get a dedicated circuit (running at 64 kbits/sec) that is yours to use until you hang up. The Internet, on the other hand, is a packet-switched network, where you don't get dedicated bandwidth allocated to you. The data you send is broken up into small "packets" that are routed separately through the network and reassembled at the other end in the correct order. This makes more efficient use of available bandwidth. Because data traffic tends to be bursty (i.e., not always present), more traffic can be carried over that bandwidth. The router operates on these small chunks of data, switching them to different ports.
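The packetize-route-reassemble cycle described here can be sketched as follows (function names and the chunk size are illustrative):

```python
# Sketch of packet switching: a message is broken into packets carrying
# sequence numbers, the packets may arrive out of order, and the receiver
# uses the sequence numbers to reassemble the original message.
def packetize(message, size):
    """Split a message into (sequence_number, chunk) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def reassemble(packets):
    """Sort by sequence number and rejoin the chunks."""
    return "".join(data for _, data in sorted(packets))

pkts = packetize("all-optical networks", 6)
pkts.reverse()                      # simulate out-of-order arrival
print(reassemble(pkts))             # all-optical networks
```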

Now, all of these devices switch data from one port to another. It is really the time scale at which this switching occurs that distinguishes the devices. A router has other elements inside it -- for example, a buffer. A buffer is needed when, at the same time, on two inputs, you have packets destined for the same output. Packets on ports 1 and 2 both want to go to port 4. So, one of those packets will have to wait while the other one is switched to port 4. On the other hand, for a crossconnect, you don't need buffering because the switch was thrown at the time the connection was set up and will remain in that state until the connection is taken down. You don't have data coming in on multiple ports wanting to go out to the same port afterwards.

So, at present, telecoms are not using all-optical?

Ramaswami: No, they switch the optical signal to an electronic one and then back to an optical one. The optical signal is received by a photodetector, which translates the photons to electrons in proportion to the optical power. Then the electronic signal is acted on -- amplified, for example -- and then converted back to an optical signal by driving a laser.

What kind of laser? And how far can an optical signal go before it has to be amplified?

Ramaswami: It's typically a 10-mW semiconductor laser. As for how far a signal can go, it very much depends on how you design your system. A typical number is 80 kilometers with a loss of around 20-24 dB.
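Those numbers are consistent with the typical fiber attenuation of the era. A back-of-the-envelope span budget, assuming roughly 0.25 dB/km at 1550 nm plus a couple of dB of connector/splice margin (both assumed values):

```python
# Simple span-loss budget: fiber attenuation times length plus a fixed
# margin for connectors and splices. The defaults are assumptions.
def span_loss_db(length_km, atten_db_per_km=0.25, margin_db=2.0):
    """Total loss in dB over a fiber span of the given length."""
    return length_km * atten_db_per_km + margin_db

print(span_loss_db(80))   # 22.0 -- inside the 20-24 dB range quoted
```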


Figure 3. Principle of operation of an erbium-doped fiber amplifier. There are three energy levels, E1, E2, and E3, in erbium-doped silica glass. A 980-nm semiconductor laser (typically around 100 mW) pumps the erbium atoms to E3, from which the electrons quickly drop down to E2. An incoming photon in a light signal stimulates an electron to drop from the E2 to the E1 state, emitting another photon in the process; these photons in turn cause more photons to be emitted as the signal travels down the erbium-doped fiber, thus amplifying it.

Then you amplify it. Is this done electronically or optically?

Ramaswami: It used to be done electronically. But probably for the last five years, it's been done optically with an erbium-doped fiber amplifier. It's a piece of fiber that is doped with the rare-earth element, erbium. When you pump it using another semiconductor laser at the right wavelength, the atoms inside this fiber go to an excited state. Now, when you get light coming in from your line that needs to be amplified, these atoms drop back to their ground state, emitting more photons (Figure 3).

At the same frequency as the frequencies that have to be amplified?

Ramaswami: Exactly, that's the magic. When you pump this thing, the erbium atoms go to an excited state, remain there, and when they transition back, that energy gap between the ground state and excited state happens to line up nicely with the frequencies of the signals you have to amplify.

It's just a pure coincidence, one of the amazing discoveries in nature that we could find an element, erbium, that you could dope the fiber with, that had a transition that corresponded with 1550 nm, the window in the fiber with the lowest loss.

The great thing about the erbium amplifier is that it is fairly broadband. There are really two bands being used today. There's the C band erbium amplifier, which amplifies from 1530 to 1560 nm, and the L band amplifier that amplifies from 1560 to 1610 nm.

Now, erbium doesn't distinguish among the channels. It amplifies over the entire band. You have to worry about distinguishing the channels only at the end when you separate them with a demultiplexer.

There is more than just device speed to get the high speed, correct? There are architecture and protocol.

Qiao: Right. Most, if not all, of the applications over the Internet are based on TCP/IP. TCP stands for Transmission Control Protocol. It's basically a layer 4 or Transport Layer protocol for sending a message from one end host to another end host and doing it reliably. IP stands for Internet Protocol. It is basically a Networking Layer protocol whose main function is routing. Together, TCP on top of IP forms a core protocol suite.


Figure 4. An example of a simple TCP/IP equivalent network protocol architecture consisting of 5 layers. Each layer uses the services provided by the one below and provides services to the one above through its interfaces.

Perhaps you had better explain the layering of these different types of protocols and layering in general.

Qiao: A network protocol stack or architecture consists of multiple layers (Figure 4), and each layer is just a logical entity, containing a protocol (or protocols) that performs certain functions. The software is partitioned into layers for modular purposes. Only the physical medium (i.e., the fiber, a radio link) is real.

Each layer utilizes the services provided by the one below and provides services to the one above it through its interfaces. For example, the Application Layer (layer 5) has a message (file) to send. The Transport Layer (layer 4) breaks it into multiple units, adds sequence numbers (IDs), among other things, for retransmission/reassembly purposes, and gives them to the Network Layer (layer 3), which adds its own headers (such as network addresses) and decides the next hop to take. So the Network Layer routes the packet via source/destination addresses. Then, the Data Link Layer (layer 2) adds its own header (for error detection/correction) and transmits the packet to the next hop. It takes care of point-to-point (link) transmission or, in the case of a shared medium (such as a bus/cable), decides who gets to transmit next (this is called medium access control or MAC). The Physical Layer (layer 1) consists of the physical method of transmission and includes specifying the bit-encoding format.

In the Transport Layer, people usually only talk about TCP. In fact, the Internet uses two transport protocols. There is TCP, which is a reliable transmission protocol. In other words, if you send something, you will wait for acknowledgment. If you don't get acknowledgment within a set period of time, you will retransmit. So you can be sure that the other end receives your message.
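The encapsulation that happens on the way down the stack can be sketched as nested headers. The header fields here are illustrative placeholders, not real protocol formats:

```python
# Sketch of layered encapsulation: each layer wraps the unit it receives
# from the layer above with its own header.
def transport(msg, seq):            # layer 4: adds a sequence number
    return f"[T seq={seq}]{msg}"

def network(segment, dst):          # layer 3: adds a destination address
    return f"[N dst={dst}]{segment}"

def link(packet, crc="ok"):         # layer 2: adds an error-check header
    return f"[L crc={crc}]{packet}"

frame = link(network(transport("hello", 1), "10.0.0.2"))
print(frame)  # [L crc=ok][N dst=10.0.0.2][T seq=1]hello
```

On the receiving side each layer strips its own header and passes the rest up.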

There's another transport layer protocol called UDP, or user datagram protocol, which is a protocol for unreliable transmissions, meaning if you send some data, you don't wait for acknowledgment. So if the data is lost, you don't do a retransmission.

E-mail uses UDP. That's why, sometimes, you may lose e-mails. UDP is useful for these types of applications where you don't need confirmation or where confirmation doesn't make sense. For example, it doesn't make sense to retransmit a live video frame or audio packet because the delays would be too long. By the time the retransmitted data arrives at the other end, it no longer makes sense.
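The contrast between the two transport styles can be caricatured in a few lines; the channel and names below are toy constructions, not real socket APIs:

```python
# Toy contrast: a "TCP-like" sender retransmits until acknowledged,
# a "UDP-like" sender transmits once and never waits.
def tcp_like_send(packet, channel, max_tries=3):
    """Retransmit until the (simulated) channel returns an ACK."""
    for attempt in range(1, max_tries + 1):
        if channel(packet):          # True means an acknowledgment came back
            return attempt
    raise TimeoutError("no acknowledgment")

def udp_like_send(packet, channel):
    channel(packet)                  # fire and forget, no retransmission
    return 1

drops = iter([False, False, True])   # channel loses the first two tries
print(tcp_like_send("data", lambda p: next(drops)))  # 3
```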

How does your work fit in here?

Qiao: My work is in all-optical network architecture, protocol, and control management issues. What we've been doing, for example, is looking at a wavelength division multiplexed ring or mesh, and for a given set of connections or paths that a customer requires, how do you assign wavelengths so that the cost is at a minimum? This cost could be in terms of bandwidth usage, delay, equipment or devices involved, so on and so forth.
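A first-fit sketch of the wavelength-assignment problem just described, simplified to a linear chain of links: two lightpaths that share a link must use different wavelengths, and we greedily reuse the lowest-numbered wavelength that is free. (This is an illustration of the problem, not Qiao's algorithm.)

```python
# Greedy first-fit wavelength assignment on a chain of fiber links.
def assign_wavelengths(paths):
    """paths: list of (start_link, end_link) link ranges, end exclusive.
    Returns one wavelength index per path."""
    used = {}                         # wavelength -> list of occupied ranges
    assignment = []
    for start, end in paths:
        w = 0
        while any(start < e and s < end for s, e in used.get(w, [])):
            w += 1                    # overlap on this wavelength: try next
        used.setdefault(w, []).append((start, end))
        assignment.append(w)
    return assignment

# Three lightpaths; the first two share link 2, the third is disjoint
# from the first, so wavelength 0 can be reused.
print(assign_wavelengths([(0, 3), (2, 5), (3, 6)]))  # [0, 1, 0]
```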

Another area, which is related, is looking at the evolution of an all-optical network from the current, mainly SONET-based, network that simply uses fiber links for transmission.


Figure 5. (a) SONET Add/Drop Multiplexers are where data enter, leave, or are routed through the network. They are usually arranged in a ring configuration. Data can enter through an IP router, ATM switch, digital cross-connect system (DCS), or from another SONET ring. An OC-48 ring transmits at 2.5 Gbits/sec and can connect to comparable or lower speed OC rings via DCSs. The OC rings are made of optical fibers, but the S-ADMs do optical/electronic/optical conversions. Data are routed to specific S-ADMs via the use of SONET frames.

Figure 5. (b) SONET has a specific way of framing the data sent between S-ADMs. The basic SONET frame has 9x90 bytes, which is sent in 125 µs (this is the OC-1 rate, which is 51.84 Mbps). The first 3 columns contain the section overhead (a section connects a multiplexer and a repeater) and the line overhead (a line connects two multiplexers). The next 87 columns may contain the Synchronous Payload Envelope or SPE. The SPE (9x87 bytes) can begin anywhere (and hence may spill over into another frame), with a pointer to its first byte contained in the line overhead. The first column of the SPE is the path overhead (a path connects the source and destination multiplexers). The other 86 columns are for user data. All three overheads contain control signals for operations, administration, and maintenance, including bytes used to synchronize the beginning of the frame and to do parity checking.
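A 9x90-byte frame sent every 125 microseconds means 8000 frames per second, which is where the OC-1 line rate comes from. The arithmetic, done with exact integers:

```python
# SONET frame arithmetic: one 9x90-byte frame every 125 us = 8000 frames/s.
FRAMES_PER_SECOND = 8000

oc1_bps = 9 * 90 * 8 * FRAMES_PER_SECOND        # full line rate, bits/s
payload_bps = 9 * 86 * 8 * FRAMES_PER_SECOND    # 86 user-data columns

print(oc1_bps)      # 51840000  -> 51.84 Mbps, the OC-1 rate
print(payload_bps)  # 49536000  -> ~49.5 Mbps usable payload
```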

What is a SONET network?

Qiao: SONET stands for Synchronous Optical NETwork. It's an industrial standard for carrying mainly voice traffic and, hence, is like circuit switching. It's basically a protocol that runs at the physical layer and uses a specific framing structure to provide timing information and other management functionality such as fiber-cut protection and restoration. SONET switches or add/drop multiplexers (ADMs) are often arranged as a ring (Figure 5), but you can have rings of rings, or meshes, too. SONET uses digital switches (just like ordinary switches), but what's unique is its frame format (Figure 5b), a structure that has control information plus a specific way of framing data.

A typical high bit rate for SONET is OC-48, where OC stands for optical carrier, and is basically 2.5 gigabits per second. You can time-division multiplex 16 OC-3 streams to get an OC-48 stream. Here, OC-3 is 155 megabits per second, which is a standard rate for ATM or asynchronous transfer mode. ATM is similar to packet switching, except that (1) it uses small (53-byte), fixed-length packets called "cells" to facilitate hardware implementation and (2) it determines the route (called a virtual circuit) before it sends out the cells. The difference between this and circuit switching is that no bandwidth on the ATM route is reserved.
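The OC hierarchy is a simple multiple of the OC-1 base rate, so 16 OC-3 streams exactly fill one OC-48. Working in kbit/s keeps the arithmetic exact:

```python
# OC-n rates are n times the OC-1 base rate of 51.84 Mbps (51,840 kbps).
OC1_KBPS = 51_840

def oc_rate_kbps(n):
    """Line rate of an OC-n stream in kbit/s."""
    return n * OC1_KBPS

print(oc_rate_kbps(3))    # 155520  -> the "155 megabits" ATM rate
print(oc_rate_kbps(48))   # 2488320 -> roughly 2.5 Gbit/s
```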


Figure 6. Normally, data packets carry only source/destination (IP) addresses, without any label. So, they all have to be processed by the IP routing software to determine the next hop toward their final destination. In multi-protocol label switching (MPLS), (a) the first few packets travel from switch to router back to switch (solid line) to reach their destination. The destination then sends a signal upstream to the source (dashed lines) to set up the labels so that all subsequent data packets can bypass the routers (b). MPLS can be done either electronically or optically.

So, most of the new technology is based on some form of packet switching?

Qiao: Certainly, packet switching can increase bandwidth utilization via statistical multiplexing if the traffic is bursty, but it is difficult to bound the delay and jitter experienced by the packets, especially in all-optical networks, where you don't have an optical buffer equivalent to the random access memory (RAM) used in an electronic packet switch or router. ATM, the recently proposed multi-protocol label switching (MPLS), and optical burst switching (OBS) are variations on these two concepts, i.e., circuit and packet switching.

In MPLS (Figure 6), one possible scenario involves routing the first few packets (belonging to the same file, for example) by the Network Layer routing protocol (most likely IP), which establishes a switching table of (incoming port/label, outgoing port/label) entries at each node (usually defined in the backward direction, i.e., from the destination back to the source). Subsequent packets (belonging to the same file) carry a label that then determines the switching at each node.
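Once those table entries exist, forwarding is just a lookup rather than a routing decision. A minimal sketch (the ports and label values are made up for illustration):

```python
# Label-switching table: (in_port, in_label) -> (out_port, out_label).
# Entries like these would be installed by the control plane.
table = {
    (1, 17): (3, 42),
    (2, 42): (4, 99),
}

def switch(in_port, in_label):
    """Forward a labeled packet with a single table lookup, swapping labels."""
    out_port, out_label = table[(in_port, in_label)]
    return out_port, out_label

print(switch(1, 17))   # (3, 42)
```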


Figure 7. In optical burst switching, the source will send a control packet and then a data burst. The control packet will go through electronic routing (upper dashed lines) to set up the underlying WDM switches for the all-optical burst (solid line).

The switching label can also be set up by the network prior to (or without) using the first few packets. In OBS (Figure 7), a control packet is sent first and is processed electronically to determine the next hop. It sets up the optical switches so that the following data burst (e.g., several packets) can go through the switches while remaining in the optical domain, without having to go through an optical-to-electronic-to-optical conversion.
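The key timing idea is that the burst is launched after an offset large enough that it never catches up with its control packet. A sketch of that constraint, with illustrative numbers:

```python
# Offset time in optical burst switching: the control packet needs
# electronic processing at every hop, so the burst must trail it by at
# least the accumulated processing time plus the switch setup time.
def min_offset(hops, per_hop_processing, switch_setup):
    """Smallest offset (same time unit as the inputs) that keeps the
    burst behind its control packet all the way to the destination."""
    return hops * per_hop_processing + switch_setup

# e.g. 5 hops, 10 us of electronic processing each, 20 us switch setup
print(min_offset(5, 10, 20))   # 70 (microseconds)
```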

When you talk about all-optical networking, are you talking about point-to-point, house-to-house? Or are you just going to the curb for most people?

Qiao: Actually, all-optical networking is mainly for the metro area and national backbones. You have access networks such as DSL (digital subscriber line -- broadband over conventional phone lines) and cable that connect home users to a backbone network that goes, say, city to city. There's a lot of research on all-optical access networks, but I guess, in the near future, the cost factor will prevent fibers from going into the homes. Either hybrid fiber coax or fiber to the curb will be more economical.

In an optical network, once an optical signal reaches a node, it will be converted to an electronic one, which may be buffered, processed, and then converted to an optical signal again and finally transmitted to the next node. But an all-optical network will keep the data in the optical form throughout from San Francisco to New York, even if it goes through some intermediate nodes. So, optical switches and routers have to be used instead of electronic ones. That presents many challenges, not just in the device/component/transmission subsystems design, but also in terms of network architecture and protocol design.

I should distinguish between the terms all-optical networks and photonic networks. A photonic network means the data will stay in the optical domain but some of the control will be done by electronics. If you say all-optical networks it may create an impression that even the control is all optics. That's not my definition. Even though we say that it's an all-optical network, what we really mean is that the data will remain in optics throughout, but the control will be electronic. So, in that sense, it's a photonic network.

What do you see for the future?

Qiao: An all-optical network in the backbone certainly makes sense because it is transparent to bit rate, coding format, and upper layer protocol. Therefore, it is kind of future proof. I think it's coming, given the continued advances in WDM, switching, and optical amplifier technologies.

In the access network, we will probably see some kind of all-optical solution for metropolitan area networks or maybe even local area networks, but cable modems, DSL, and other technologies will probably connect the last mile to the home.



Rajiv Ramaswami earned a doctorate in Electrical Engineering and Computer Science from the University of California at Berkeley. He has published articles about several areas of optical networking and, from 1989 to 1996, led a group at IBM Research that developed an early commercial multiwavelength optical fiber transmission system. He now leads a group at Tellabs developing optical networking products. He is also an adjunct faculty member at Columbia University. Dr. Ramaswami currently serves as an editor for the IEEE Journal on Selected Areas in Communications, and as an editor-at-large for optical communication topics for the IEEE Communications Society. In the past he has served as an editor for the IEEE/ACM Transactions on Networking. He was the technical program co-chair for the 1998 Optical Fiber Communication conference and will be the general co-chair for 2000. He is a recipient of the W. R. G. Baker and W. R. Bennett prize paper awards from the IEEE and was awarded an outstanding innovation award from IBM.

Chunming Qiao earned an Andrew-Mellon Fellowship for distinguished doctoral candidates and then received his PhD in 1993. Dr. Qiao's research on optical networks has been funded by the U.S. National Science Foundation (NSF) and Telcordia Technology Inc. (formerly Bellcore). He has published many technical papers and co-chaired the All-optical Networking conference sponsored by SPIE since 1997. He is also an editor of the Journal on High-Speed Networks (JHSN) and a new magazine on optical networking co-published by SPIE and Baltzer Science.

Frederic Su is a writer based in Bellingham, WA.