
Micro/Nano Lithography

For Manufacturability, Optimize Everything

Design, process, and technology need to be optimized together.

1 March 2019, SPIE Newsroom. DOI:

Advanced Lithography 2019

For semiconductors, the key to future success and continuing progress is to optimize everything - design, process, and technology - at the same time. That, presenters said, gets around the challenges of building ever-smaller features while simultaneously cutting power consumption and boosting performance.

On Wednesday, 27 February, Peter Weckx of IMEC presented a paper about the difficulties facing devices at the 3-nm technology node and smaller. Today's most advanced commercial devices are at the 7-nm technology node, with considerably smaller features than the 14-nm node that was the state of the art a few years ago.

But not everything in these devices is shrinking at the same rate. That leads to a performance bottleneck, which together with manufacturing issues and other challenges will cause real problems at about the 5-nm node and below. One solution is to go vertical, as that allows the lithography of different layers to be more independent of each other.

The question then becomes how best to do that, said Weckx. "What kind of devices can exploit the third dimension?" he asked.

The answer, researchers at IMEC contend, is to change the design, process, and technology -- allowing all three to be optimized together. For instance, chips often have a memory array to store data that must be accessed frequently or quickly, logic components to perform operations on that data, and still other elements that get the data between storage and logic. One solution is to put the memory array on a top level and all logic elements below on another level.

This, Weckx said, makes the chip smaller, faster, and more energy efficient -- cutting energy consumption by as much as 10%. The size reduction happens because the different sections are stacked atop one another instead of sitting side by side. As for the performance improvements, shorter metal runs mean data travels smaller distances, saving both time and power.

Speaking of power, one consequence of this scheme is that the conduits distributing electricity around the chip must be buried. So, the IMEC researchers took this concept to its logical conclusion. The wafer could be flipped over, ground down, and then power buses, metal traces, and circuitry fabricated on what was previously unused real estate, Weckx said.

"Once we open up the back side, we can find a lot of 3D integration," he said.

A different type of integration and co-optimization was the focus of a presentation from Hsiang-Lan Lung of Macronix, who spoke on Thursday, 28 February. In this talk, a keynote on the last day of the symposium, Lung discussed artificial neural networks and deep learning, tying these to in-memory computing. All three are hot topics that have attracted considerable attention, Lung said.

Artificial neural networks, like the natural ones found in all people and animals, are good at recognizing things and learning how to do so. That makes artificial neural networks powerful tools that can be used to spot defects, detect problems in traffic, diagnose disease, and more.

Implementing an artificial neural network places certain demands on devices. "It involves a lot of memory access," Lung said.

There are also many multiplication and addition operations, he added. This combination makes some chips better suited to neural-net tasks than others. A CPU, for example, is not as good as a more specialized GPU, whose design lets it perform the math operations in parallel. Still more specialized chip types under investigation should be better yet.

The best solution is an in-memory computation chip, Lung said. In these devices, the memory array both stores the data and mathematically manipulates it. This is possible because the math operations are simple. The result of doing everything in a single chip will be neural nets that learn and classify rapidly while being as small as possible and consuming as little energy as possible.
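The multiply-and-add pattern behind this is simple enough to sketch. In a hypothetical in-memory scheme (an illustration of the general idea, not a description of Macronix's design; all names and values here are assumptions), the weights stored in the memory array multiply the incoming signals, and the products sum in place along each column -- which is just a matrix-vector multiply-accumulate:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example values: a small neural-net layer.
weights = rng.uniform(-1, 1, size=(4, 3))  # values held in the memory array
inputs = rng.uniform(0, 1, size=4)         # signals driven onto the array's rows

# Conventional chip: fetch each weight from memory, then multiply
# and accumulate in separate logic circuitry.
outputs_logic = np.zeros(3)
for col in range(3):
    for row in range(4):
        outputs_logic[col] += weights[row, col] * inputs[row]

# In-memory computation: the array produces the same column sums
# directly, with no weight movement between memory and logic.
outputs_in_memory = inputs @ weights

assert np.allclose(outputs_logic, outputs_in_memory)
```

Both paths compute identical results; the appeal of in-memory computing is that the second one avoids shuttling every weight across the memory-to-logic boundary, which is where much of the time and energy goes.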

The ideal in-memory computing chip cannot be built today. It may be possible to do so in the future with further advances, Lung said. These could well involve simultaneous optimization of design, process, and technology.

Hank Hogan is a science writer based in Reno, Nevada.

See more news and highlights from SPIE Advanced Lithography.