Extreme UV for system-on-chip: Does it really help?

Considering chip design challenges is critical in accurately quantifying the potential benefit of extreme UV.
16 December 2015
Greg Yeric

As we get closer to an actual extreme ultraviolet (EUV) insertion point, it is timely to move beyond the basic metrics of pitch and wafers per hour to consider the ultimate system-on-chip (SoC) metrics:  PPAC (power, performance, area, and cost), and how lithography mixes with other concerns to produce the final results.  In so doing, we find pluses and minuses that might nudge the value proposition one way or the other.

This is not as straightforward as one might imagine, as we are well into the era where these product scaling metrics are critically challenged by further pitch reduction. One example is the increasing influence of parasitic resistance and capacitance, both in the transistor and in the wires. Along these lines, consider a case using data from a paper by Frederick,1 in which larger gate pitches can result in smaller implemented chips (see Figure 1). The key point is that at the left side of the graph (in the low-performance regime) the block area tracks the four gate pitches, as you would expect. However, as frequency targets increase, superior transistor performance begins to matter. In this case, above 1.75 GHz the largest gate pitch produces the smallest chips! The overall chip area is a combination of transistor characteristics and wiring capabilities, and the two should be considered together when defining a process technology.


Figure 1. Implemented core area as a function of performance, for four gate pitch (P) options.1
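
To make the shape of that tradeoff concrete, here is a deliberately simplified, hypothetical model (all parameters are invented; this is not the data from Frederick's paper): block area is cell-limited at low frequency targets, but as the target approaches what a given pitch's transistors can deliver, upsizing and buffering inflate the area fastest for the tightest pitches, producing a crossover like the one in Figure 1.

```python
# Hypothetical, illustrative model of the trend in Figure 1 (all parameters
# invented; not the data from Frederick's paper). Base area tracks gate
# pitch, but hitting a higher frequency target forces upsizing and buffering,
# and that penalty grows fastest for the tightest pitches.

def block_area(pitch_nm, freq_ghz):
    base = (pitch_nm / 90.0) ** 2            # relative cell-limited area
    fmax = 2.5 * (pitch_nm / 90.0) ** 0.5    # assumed: larger pitch -> faster devices
    headroom = max(1e-3, 1.0 - freq_ghz / fmax)
    return base / headroom                   # upsizing penalty explodes near fmax

for f in (1.0, 1.5, 2.0, 2.2):
    areas = {p: block_area(p, f) for p in (78, 84, 90, 96)}
    best = min(areas, key=areas.get)
    print(f"{f:3.1f} GHz target -> smallest block at the {best} nm gate pitch")
```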

Another key scaling challenge is variability. If designers are presented with larger variability, design margins increase, and those larger margins translate into larger, more power-hungry chips. The circuits most affected by local variability are the memories. Scaling the minimum-area SRAM (static random access memory) bitcells (typically the ones bandied about in public marketing materials) from the 28nm node to 20nm to 16/14nm to 10nm can take them from relatively normal operation to non-functionality. This does not mean that we cannot make embedded memory anymore, just that designers need to choose one of three actions: use a bitcell that has larger transistors, use a bitcell that contains more transistors (eight transistors instead of six is a popular option), or wrap ‘assist’ circuitry around the memory arrays, which helps them overcome their native variability limitations. Any of these options will add area and/or power, and they might end up limiting the speed of the chip as well. The first two options are depicted in Figure 2. With FinFETs, the minimum-area bitcell will have one ‘fin’ (green) per transistor and is referred to as a ‘111’ bitcell. A more stable bitcell might selectively increase the drive in some of the transistors (a ‘122’ cell is shown). On the right, a 6T bitcell (top) might be unstable, whereas adding two more transistors may solve the problem; in this case, it means splitting the word line ‘wl’ into separate read and write word lines (rwl and wwl, respectively). These choices are common in today's technologies.


Figure 2. Examples of SRAM (static random access memory) bitcell options. Increased transistor drive (number of fins) on the left. Increased number of transistors (T)—eight vs. six—on the right. The 6T bitcell contains (R)ight and (L)eft versions of pass gate (PG), pull up (PU), and pull down (PD) transistors, which are connected to the word lines (WL) and bit lines (BL), and negative bit lines (NBL). 8T bitcells add two more transistors to distinguish read bit lines (RBL) from not-read bit lines (NRBL), to reduce the possibility of cell disturbs.
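
A quick back-of-the-envelope calculation shows why the minimum-area bitcell runs out of margin as arrays grow. The sketch below assumes independent cells and a simple Gaussian failure model (the array sizes and yield target are illustrative, not from any specific product) and computes the per-cell sigma margin a designer would need before resorting to larger cells, 8T cells, or assist circuits.

```python
# Back-of-the-envelope, illustrative only: with independent bitcells and a
# Gaussian failure model, the per-cell margin needed for a working array
# grows with array size. Array sizes and yield target are assumed.
from statistics import NormalDist

def required_sigma(n_bits, array_yield=0.99):
    """Per-cell sigma margin so that all n_bits cells work at the given yield."""
    p_cell_fail = 1.0 - array_yield ** (1.0 / n_bits)
    return NormalDist().inv_cdf(1.0 - p_cell_fail)

for mbits in (1, 8, 64, 256):
    n = mbits * 1024 * 1024
    print(f"{mbits:3d} Mb array -> ~{required_sigma(n):.1f} sigma per bitcell")
```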

Historically, the two largest contributors to random variability in transistors have been dopant fluctuations and line edge roughness (LER) of the gate. FinFET implementations have reduced the dopant component of random fluctuations, and with FinFETs adding fin edge roughness on top of gate etch roughness, reducing LER in general will become more important. This will be a key issue to monitor in the development of the EUV ecosystem, as source power and resist improvements will be needed to improve LER.
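
Because independent random-variation sources combine in quadrature, removing most of the dopant term shifts the burden onto the roughness terms. The magnitudes in the sketch below are assumed purely for illustration, but they show how gate LER plus fin-edge roughness can end up dominating the total, which is why resist and source-power progress on LER matters so much for EUV.

```python
# Illustrative only (magnitudes assumed): independent variation sources add
# in quadrature, so shrinking the dopant term helps, but gate LER plus
# fin-edge roughness can still dominate the total -- hence the focus on LER.
import math

def sigma_vt(components_mv):
    """Total sigma(Vt) from independent components, root-sum-square, in mV."""
    return math.sqrt(sum(c * c for c in components_mv))

planar_like = sigma_vt([30, 20])       # assumed: dopant-dominated + gate LER
finfet_like = sigma_vt([10, 20, 18])   # assumed: small dopant + gate LER + fin LER
ler_share = (20**2 + 18**2) / (10**2 + 20**2 + 18**2)
print(f"planar-like: {planar_like:.1f} mV, finfet-like: {finfet_like:.1f} mV")
print(f"LER-related share of the FinFET-like variance: {ler_share:.0%}")
```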

There are many issues that will dilute the final product metrics, independent of the pitch scaling we enable. Transistors, their contacts, and the vias and wires will all need fundamental improvements in order to fully take advantage of the pitch scaling that EUV may offer. A pitch reduction of x% no longer implies that chip area will be reduced by x%,2 and the SoC implementation realities discussed here will factor into the realizable benefit.
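
A toy calculation (numbers assumed) illustrates the gap between ideal and realized shrink: if some fraction of a block, say memory with assist circuits, analog content, or margin-driven upsizing, does not track the new pitch, the realized area ratio lands well short of the ideal pitch-squared scaling.

```python
# Toy arithmetic, numbers assumed: if part of a block does not track the new
# pitch, the realized area ratio falls short of ideal pitch-squared scaling.
def area_ratio(pitch_scale, non_scaling_fraction):
    ideal = pitch_scale ** 2
    return (1.0 - non_scaling_fraction) * ideal + non_scaling_fraction

ideal = area_ratio(0.7, 0.0)   # classic 0.7x pitch in both directions -> ~0.49x area
real = area_ratio(0.7, 0.3)    # assume 30% of the block does not scale
print(f"ideal area ratio: {ideal:.2f}, with 30% non-scaling content: {real:.2f}")
```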

Transitioning to more specific lithography topics, it is instructive to compare the ‘with-EUV’ option to the ‘without-EUV’ option (multiple patterning). Either of the two main without-EUV options—litho-etch-litho-etch (LELE) multiple patterning or self-aligned double patterning (SADP)—will cause implemented chip area to increase over an ideal (simpler) lithography process. I have described elsewhere some of the wire variability and wire routing aspects involved.3
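
One way to see where the area penalty comes from: LELE decomposition amounts to two-coloring a conflict graph of features that sit closer than the single-exposure pitch, and any odd cycle of conflicts has no legal two-mask assignment, so the layout has to be spread out or redrawn. The sketch below (hypothetical feature indices, standard breadth-first two-coloring) shows the check.

```python
# Minimal sketch: LELE decomposition as two-coloring of a conflict graph.
# Features closer than the single-exposure pitch are "in conflict"; an odd
# cycle of conflicts has no legal two-mask assignment, so the layout must
# change. Feature indices here are hypothetical.
from collections import deque

def two_color(n_features, conflicts):
    adj = [[] for _ in range(n_features)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)
    color = [None] * n_features
    for start in range(n_features):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]
                    queue.append(v)
                elif color[v] == color[u]:
                    return None        # odd cycle: not decomposable as drawn
    return color

print(two_color(4, [(0, 1), (1, 2), (2, 3)]))   # chain -> [0, 1, 0, 1]
print(two_color(3, [(0, 1), (1, 2), (2, 0)]))   # triangle -> None
```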

EUV should help with some of the key constructs used to create low power (i.e., small) standard cell logic. A key area is in the local interconnect that is used to wire transistors under the M1 (first metal layer). This leads to an interesting point, quantified in a later paper by Lars Liebmann of IBM,2 that low-power designs will likely benefit more from EUV than higher-performance designs, as the higher-performance designs can use larger standard cells that would not put as much pressure on local patterning. Another possible benefit ties back to the parasitics that I discussed as key scaling limiters above: we added the local interconnect to make up for patterning limitations of 193i (shorthand for deep-UV steppers using immersion technology) in these small standard cells, and it is plausible that EUV might allow us to simplify the local interconnect, saving wafer cost but perhaps also importantly reducing wiring parasitics. This would reduce the area and power of chip implementations, all other things being equal.
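
To give a feel for why simplifying the local interconnect could pay off in parasitics, here is a first-order wire RC estimate. The dimensions, effective resistivity, and sidewall-only capacitance model are all assumptions chosen for illustration; real parasitic extraction is far more detailed, but the trend of resistance climbing steeply as cross-sections shrink is the point.

```python
# First-order wire RC estimate with assumed dimensions, an assumed effective
# resistivity, and a sidewall-only capacitance model -- real extraction is
# far more detailed, but the trend as cross-sections shrink is the point.
RHO_EFF = 1.9e-8        # ohm*m, assumed effective copper resistivity
EPS = 8.85e-12 * 3.0    # F/m, assumed low-k dielectric permittivity

def wire_rc(length_nm, width_nm, height_nm, spacing_nm):
    l, w, h, s = (x * 1e-9 for x in (length_nm, width_nm, height_nm, spacing_nm))
    r = RHO_EFF * l / (w * h)        # ohms
    c = 2.0 * EPS * (h * l) / s      # farads, coupling to both neighbors only
    return r, c

for w in (20, 14, 10):               # nm, assumed local-interconnect widths
    r, c = wire_rc(length_nm=500, width_nm=w, height_nm=2 * w, spacing_nm=w)
    print(f"w={w:2d} nm: R={r:5.1f} ohm, C={c*1e15:5.3f} fF, RC={r*c*1e15:4.2f} fs")
```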

That may seem like a lot of emphasis on the wires, but many designs, especially low-power designs, are limited by the wiring, and increasingly the wiring density limitations come not from simple line/space pitch but from variability and from line-end and associated via rules. Either way we choose to do multiple patterning, we will limit the ability to scale the power, performance, and/or area (PPA) of our designs, and this is another issue that should be understood when comparing EUV to non-EUV options. An interesting potential benefit of EUV is that simpler design rules could translate into faster place-and-route cycles in what is an inherently iterative process.

No one would dispute the benefit that an ideal EUV capability would bring to the industry. But as EUV closes in on its pitch and throughput targets and approaches viability, we must consider the practical design aspects in order to accurately quantify the potential benefit of EUV. These considerations tie EUV to the overall process. Unfortunately, these comingled design-technology scaling questions are not easy to answer. Ideally they require a full process design kit (PDK), with transistor models, parasitic extraction models, and wiring design rules, and fully considered implementations of mock designs in order to benchmark the PPA results. Furthermore, there won't be one right answer—recall the example of Figure 1—low-power and high-performance designs will likely arrive at different value assessments for EUV, which will add complexity to the choices the foundries will have to make regarding the timing and specific process layers for EUV insertion.


Greg Yeric
ARM
Austin, TX

Greg Yeric is a senior principal research engineer at ARM Holdings in their Austin, Texas, office. His research focuses on future design-technology interactions and co-optimization.


References:
1. M. Frederick Jr., Poly pitch and standard cell co-optimization below 28nm, Int'l Electron Devices Mtg. (IEDM), pp. 12.7.1-12.7.4, 2014.
2. L. Liebmann, The daunting complexity of scaling to 7NM without EUV: pushing DTCO to the extreme, Proc. SPIE 9427, p. 942702, 2015. doi:10.1117/12.2175509
3. G. Yeric, SOC for EUV: Does it really help? Presented at SPIE Advanced Lithography 2016.