
Optoelectronics & Communications

100 Gb/s gets ready for prime time

Leveraging phase modulation and polarization multiplexing allows networks to send data at 100 Gb/s while individual components operate at much lower rates.
1 June 2011, SPIE Newsroom. DOI: 10.1117/2.2201105.01

We live in an age of information. Downloading has become the preferred method of accessing entertainment, Twitter and Facebook provide the communications infrastructure for revolutions, and worldwide almost one in three people has Internet access. It seems only a few years ago that the workhorse data rate was 10 Gb/s, with a few carriers moving to 40 Gb/s. Today, in response to unrelenting bandwidth demand, 100 Gb/s testbeds are already in operation in multiple markets.

At 10 Gb/s and below, systems leveraged binary amplitude modulation, or on-off keying (OOK), which uses a shutter on the transmit side and a simple photodiode at the receiver to generate and detect a pulsed data stream. As data rates increase, however, everything gets more difficult. Chromatic dispersion and polarization-mode dispersion introduce errors and shorten the distance a signal can travel without reconditioning. Jumping to 40 Gb/s, let alone 100 Gb/s, by simply doing everything faster isn't feasible. Instead, the industry turned to phase modulation, encoding data in the phase of the optical carrier. Although more complex, the approach transmits data more effectively while allowing the individual components to operate at a lower overall rate.

The move to 40 Gb/s played out as something of a free-for-all, with a chaotic and rapid proliferation of modulation schemes that ultimately stymied development (see sidebar). Having learned its lesson, the industry came to a consensus on 100 Gb/s modulation early in the process, settling on dual-polarization quadrature phase-shift keying (DP-QPSK). In DP-QPSK, independent data transmission takes place on two orthogonal polarizations, so each polarization only needs to carry 50 Gb/s. The reduced data rate narrows the optical bandwidth required to send the signal, allowing more optical channels in the same spectral band.

Quadrature phase-shift keying involves representing the data in the complex plane, so that each symbol consists of a real (in-phase) and an imaginary (quadrature-phase) part, as shown in the Euler identities:

    e^(jθ) = cos θ + j sin θ

and

    e^(−jθ) = cos θ − j sin θ

As a result, each symbol in complex space, shown here in the signal-phase diagram, or constellation diagram, can represent two bits of data (see figure 1):

Figure 1. In DP-QPSK, each symbol in complex space consists of a real (in phase) portion and an imaginary (quadrature) portion.

That means that the baud rate, or number of symbols per second, can be half the bit rate. Add in polarization multiplexing, and a system can achieve a bit rate four times its baud rate.
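
To make the mapping concrete, here is a minimal sketch in Python. It is an illustration rather than vendor code, and the Gray-coded bit assignment is an assumption; the article does not specify a particular bit-to-symbol map.

    import numpy as np

    # Gray-coded QPSK map: each bit pair selects one of four phase states,
    # and adjacent constellation points differ by only one bit
    QPSK_MAP = {
        (0, 0): (1 + 1j) / np.sqrt(2),    # 45 degrees
        (0, 1): (-1 + 1j) / np.sqrt(2),   # 135 degrees
        (1, 1): (-1 - 1j) / np.sqrt(2),   # 225 degrees
        (1, 0): (1 - 1j) / np.sqrt(2),    # 315 degrees
    }

    def qpsk_modulate(bits):
        """Map a flat bit sequence to complex QPSK symbols, two bits per symbol."""
        pairs = zip(bits[0::2], bits[1::2])
        return np.array([QPSK_MAP[tuple(int(b) for b in p)] for p in pairs])

    bits = np.random.randint(0, 2, 40)
    x_pol = qpsk_modulate(bits[:20])   # 10 symbols on one polarization
    y_pol = qpsk_modulate(bits[20:])   # 10 symbols on the orthogonal polarization
    # 40 bits fit into 10 symbol periods: four bits per symbol with dual
    # polarization, so the bit rate is four times the baud rate.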

For practical purposes, the process involves modulating two optical signals in parallel: one representing the in-phase portion of the symbol, and one representing the quadrature-phase portion. After detection, the recovered voltages are reassigned to their respective components -- in-phase or quadrature -- so that the data can be reconstructed.

Of course, it's not quite as simple as that. Amplitude modulation only requires detection of the signal amplitude, provided by standard photodetectors. DP-QPSK is based on detection of the optical phase of the signal, which necessitates the use of coherent detection (see figure 2). In coherent detection, the output of a local oscillator operating at a similar wavelength is mixed with the incoming signal at the receiving end, providing the ability to recover the optical phase of the signal using an array of four photodetectors. More important, digital postprocessing techniques compensate for distortion and error introduced by the transmission process.


Figure 2. In coherent detection, a beamsplitter separates the polarization multiplexed signals (left), which are mixed with output from a local oscillator (center), and processed to yield the final signal (right).
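
The following toy numerical sketch in Python illustrates the idea. It is an idealized picture, assuming perfect polarization demultiplexing, a noiseless channel, and a known local-oscillator phase offset; a real receiver estimates all of these quantities adaptively in its DSP.

    import numpy as np

    rng = np.random.default_rng(1)
    data = rng.integers(0, 4, 8)                 # two bits per symbol, as integers 0-3
    tx_phase = np.pi / 4 + data * np.pi / 2      # the four QPSK phase states
    signal = np.exp(1j * tx_phase)               # transmitted optical field (baseband)

    lo = np.exp(1j * 0.2)                        # local oscillator with a small phase offset
    mixed = signal * np.conj(lo)                 # idealized 90-degree hybrid plus balanced
                                                 # photodetectors: yields I + jQ directly
    i_arm, q_arm = mixed.real, mixed.imag        # the four-photodetector outputs, in effect

    # digital post-processing removes the LO offset, then slices to the nearest symbol
    recovered = np.round((np.angle(mixed * lo) - np.pi / 4) / (np.pi / 2)) % 4
    assert np.array_equal(recovered, data)

Because the full complex field is digitized, the same post-processing stage can also undo impairments such as chromatic dispersion numerically rather than optically.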

Signal processing challenges

For the photonic community, the approach represents something of a paradigm shift. In previous generations, the focus had been on preserving signal strength and minimizing bit-error rate. With DP-QPSK, impairment-free propagation assumes much less importance because distortion of the signal can be removed during post-processing. "Previously, we had to carefully design the transmission system to avoid impairments and distortion," says Brandon Collings, chief technology officer for communications and commercial optical products at JDSU (San Jose, CA). "Now we can be much more cavalier and reverse a lot of these errors numerically after the receiver because we're able to digitize the complete signal coming in."

It's a powerful approach but also computationally intensive. In recent years, field-programmable gate arrays (FPGAs) have emerged as efficient, economical alternatives to application-specific integrated circuits (ASICs) as dedicated microprocessors for small-volume applications. FPGAs can't keep up with the processing required by DP-QPSK, however. The obvious solution is for vendors to go with an ASIC, but here, too, challenges emerge.

For years, the optical communications industry was able to leverage older CMOS process tools to fabricate its microprocessors. That brought the benefits of working with a well-established, well-understood technology combined with the economies of using process tools no longer in demand. With the shift to DP-QPSK, that model no longer holds. The ASICs used for DP-QPSK require cutting-edge CMOS processes. As a result, manufacturers find themselves competing for foundry time and paying more for the chips.

"It's just a huge upfront cost and a long process to develop them and get them tested in multiple iterations," says Collings. "It's not something that a smaller outfit is going to take on. One of the biggest challenges to the industry is just to figure out how we can have this ecosystem that allows differentiation but which will be good for volumes and diluting upfront development costs." Low standardization requires more individual investment but enables more differentiation. Heavy standardization reduces differentiation but decreases upfront investment. "I think it's a technical challenge but more importantly it may be just an economic and commercial challenge," he adds. "What is the right ecosystem situation to have for that particular part?"

Component challenges

DP-QPSK brings great benefits, but it also adds components at both the transmit and receive ends of the network, increasing loss. Gone are the days when a beam just needed to pass through a binary shutter. Now, the output of the laser passes through a beamsplitter, then each subsidiary beam travels through a modulator, which impresses the data stream on it by adding phase delays. Next, one beam passes through a polarization rotator, then the two beams are recombined and launched into the transmission fiber.

The industry is working to offset those losses, however. Modulators are more integrated than ever before, with the entire process taking place on a single chip, and similar integration has been done for the receiver. "You have more optical losses, but because you have more functions built into the chips, you don't have as many couplings in and out of components as previously," says Per Hansen, vice president of product marketing for modules at Oclaro (San Jose, CA). "This helps greatly to keep the loss in check." In addition, the change in modulation schemes has itself brought benefits. "With coherent technology, you are actually lowering the optical power required in the system," he continues. "The optimum launch power for a coherent signal is generally lower than for your conventional intensity-modulated signal. Operating at lower powers -- in a more linear regime -- has advantages for electronic compensation of propagation impairments. Also, it allows amplifiers with a given total output power to support a higher number of channels."

When all else fails, add muscle. "You compensate for loss by requiring the tunable laser to go to a higher power," says Robert Blum, Oclaro's product marketing director for components. "Typically, you would see a tunable laser in a 10 Gb/s system operate at 13 dBm output power. Now, people are asking for maybe 16 dBm."
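
For a sense of scale, dBm is decibels referenced to 1 mW, so the jump from 13 to 16 dBm doubles the required output power. A quick back-of-envelope check, not a vendor specification:

    def dbm_to_mw(p_dbm: float) -> float:
        """Convert optical power from dBm to milliwatts."""
        return 10 ** (p_dbm / 10)

    print(dbm_to_mw(13))   # ~20 mW, typical for a 10 Gb/s tunable laser
    print(dbm_to_mw(16))   # ~40 mW: 3 dB more is double the power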

Most transmission links utilize a higher data rate than the base data (or information) rate. The additional speed, known as overhead, provides bandwidth for forward error correction, which offsets signal transmission limitations, enabling greater link distances. In general, the greater the overhead, the greater the error-correcting capability, and thus the better the link reach. "Applying more overhead gets difficult," says Collings. "When you have to go faster, you necessarily are going to see more errors, so somewhere in there is a diminishing return between applying more overhead and getting value out of it."

100 Gb/s requires a symbol rate of 28 Gbaud. With QPSK and polarization multiplexing, the network can send four bits per symbol, which adds up to 112 Gb/s of transmission, providing a 12% overhead that can be used for forward error correction.
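
The arithmetic is easy to verify (values taken from the figures above):

    baud = 28e9                          # symbols per second
    bits_per_symbol = 4                  # QPSK (2 bits) x 2 polarizations
    line_rate = baud * bits_per_symbol   # rate actually sent on the fiber
    payload = 100e9                      # client data rate
    overhead = (line_rate - payload) / payload
    print(f"{line_rate / 1e9:.0f} Gb/s line rate, {overhead:.0%} FEC overhead")
    # -> 112 Gb/s line rate, 12% FEC overhead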

Despite forward error correction, noise remains an issue. That has driven system integrators to turn to alternative amplification technologies such as Raman amplification, which boosts the signal with less noise.

Future focus

Currently, vendors are delivering individual components -- modulators, tunable lasers, receivers. As the industry matures, they will begin to sell integrated transponders. Opnext, for example, has announced plans to sample transponders by year's end, with commercial product slated for release in 2012.

Even 100 Gb/s transmission is unlikely to satisfy bandwidth demands for long. Previously, data rates went up by a factor of four at a time, primarily because architectures were dominated by the voice-centric synchronous optical networking (SONET) protocol. In the last decade, as data traffic has taken over, the network has converted to packet switching and Ethernet, distinguished by data-rate jumps of an order of magnitude at a time.

Of course, making the jump from Gigabit Ethernet to 10-Gigabit Ethernet is one thing. Vaulting from 100 Gb/s to Terabit Ethernet is quite another. "It's getting harder and harder to just keep going faster and faster," says Collings. "We'll probably be at 100 Gb/s for quite a while, because the next jump is either 400 Gb/s, which is the conventional factor of four, or 1 Tb/s, which is a factor of 10, so 100 Gb/s is probably going to be around for a long while."

Kristin Lewotsky is a technology writer based in Merrimack, NH.


The lessons of 40 Gb/s

In the mid-1990s, 2.5 Gb/s technology dominated the market. Then, in 1999 (roughly), Nortel Networks caught the industry flat-footed when it released a suite of 10 Gb/s components, gobbling up market share as carriers rushed to upgrade their capacity to address bandwidth demands from this newfangled thing called the World Wide Web. Suddenly, the race was on for vendors to field their own 10 Gb/s products. Meanwhile, afraid of getting caught out twice, many barged ahead into the 40 Gb/s space, despite the fact that market demand was weak, at best. Indeed, activity at 40 Gb/s quieted down until around 2006, when demand caught up with the technology.

Unfortunately, 40 Gb/s technology had a hard time catching up with itself. Vendors rushed to differentiate their systems, which led to a proliferation of modulation schemes. Return-to-zero/non-return-to-zero schemes were supplanted by duobinary, which was overtaken by coherent phase-shift keying, then differential quadrature phase-shift keying, followed by DP-QPSK. As a result, hardware developed for one scheme wasn't compatible with another and performance improvements from one scheme to the next quickly made older hardware obsolete, challenging commercial business cases. "Everybody had their own format and that left a lot of people simply choosing to sit out because there was so much fragmentation," says Collings. "If you built something today, it was probably phased out tomorrow or applicable only to a handful of vendors. 40 Gb/s was a real mess."

Global demand for bandwidth continued unabated, eventually exceeding installed capacity. When the call for a faster protocol began to sound, the Optical Internetworking Forum (OIF), an industry association, took action. Rather than suffer through the same scattered development at 100 Gb/s that happened at 40 Gb/s, the OIF pushed for consensus. As a result, the industry consolidated around DP-QPSK as the path to 100 Gb/s. Vendors may have their own hardware and software solutions to achieve that goal, but everybody agrees on the modulation scheme and form factor. "There was a nice convergence between the economic realization that we needed to do something collectively to get it done, and the technology -- coherent DP-QPSK is really the natural choice," says Collings. Whether that level of focus will remain at the next data-rate jump is less clear. "At 400 Gb/s, there are more choices that make sense for different applications, so we may see less standardization, more attempts to differentiate," he notes. The prospect of a repeat of the 40 Gb/s scenario is already raising concerns. "There's certainly potential for more confusion out there. People are already starting to say, 'Let's not repeat 40. Let's figure it out today.'"