- Front Matter: Volume 7640
- Invited Session
- FreeForm and SMO
- Double Patterning I
- Double Patterning II
- Computational Lithography
- Polarization
- Beyond 22 nm
- Tools and Process Resolution Extensions I
- Tools and Process Resolution Extensions II
- Mask, Layout, and OPC
- Modeling
- Source-Mask Optimization
- Tools
- Poster Session: Computational Lithography
- Poster Session: Double Patterning
- Poster Session: FreeForm and SMO
- Poster Session: Laser
- Poster Session: Lithography Optimization
- Poster Session: Mask Layout and OPC
- Poster Session: Materials
- Poster Session: Modeling
- Poster Session: Tools and Process Control
Front Matter: Volume 7640
Front Matter: Volume 7640
This PDF file contains the front matter associated with SPIE
Proceedings Volume 7640, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Invited Session
Shaping the future of nanoelectronics beyond the Si roadmap with new materials and devices
The use of high mobility channel materials such as Ge and III/V compounds for CMOS applications is being explored.
The introduction of these new materials also opens the path towards the introduction of novel device structures which
can be used to lower the supply voltage and reduce the power consumption. The results illustrate the possibilities that are
created by the combination of new materials and devices to allow scaling of nanoelectronics beyond the Si roadmap.
FreeForm and SMO
Generation of arbitrary freeform source shapes using advanced illumination systems in high-NA immersion scanners
The application of customized and freeform illumination source shapes is a key enabler for continued shrink using
193 nm water based immersion lithography at the maximum possible NA of 1.35. In this paper we present the
capabilities of the DOE based Aerial XP illuminator and the new programmable FlexRay illuminator. Both of these
advanced illumination systems support the generation of such arbitrarily shaped illumination sources. We explain how
the different parts of the optical column interact in forming the source shape with which the reticle is illuminated.
Practical constraints of the systems do not prevent exploiting the benefits of freeform source shapes over classic
pupil shapes. Despite the different pupil-forming mechanisms in the two illuminator types, the resulting pupils are
compatible in terms of lithographic imaging performance, so that processes can be transferred between the two illuminator
types. Measured freeform sources can be characterized by applying a parametric fit model, to extract information for
optimum pupil setup, and by importing the measured source bitmap into an imaging simulator to directly evaluate its
impact on CD and overlay. We compare measured freeform sources from both illuminator types and demonstrate the
good matching between measured FlexRay and DOE based freeform source shapes.
Demonstrating the benefits of source-mask optimization and enabling technologies through experiment and simulations
In recent years the potential of Source-Mask Optimization (SMO) as an enabling technology for 22nm-and-beyond lithography
has been explored and documented in the literature.1-5 It has been shown that intensive optimization of the fundamental
degrees of freedom in the optical system allows for the creation of non-intuitive solutions in both the mask and the
source, which leads to improved lithographic performance. These efforts have driven the need for improved controllability
in illumination5-7 and have pushed the required optimization performance of mask design.8, 9 This paper will present recent
experimental evidence of the performance advantage gained by intensive optimization, and enabling technologies like pixelated
illumination. Controllable pixelated illumination opens up new regimes in control of proximity effects,1, 6, 7 and we
will show corresponding examples of improved through-pitch performance in 22nm Resolution Enhancement Technique
(RET). Simulation results will back-up the experimental results and detail the ability of SMO to drive exposure-count reduction,
as well as a reduction in process variation due to critical factors such as Line Edge Roughness (LER), Mask Error
Enhancement Factor (MEEF), and the Electromagnetic Field (EMF) effect. The benefits of running intensive optimization
with both source and mask variables jointly have been previously discussed.1-3 This paper will build on these results by
demonstrating large-scale jointly-optimized source/mask solutions and their impact on design-rule enumerated designs.
Tolerancing analysis of customized illumination for practical applications of source and mask optimization
Due to the extremely small process window in the 32nm feature generation and beyond, it is necessary to implement
active techniques that can expand the process window and robustness of the imaging against various kinds of imaging
parameters. Source & Mask Optimization (SMO) 1 is a promising candidate for such techniques.
Although many applications of SMO are expected, tolerancing and specifications for aggressively customized
illuminators have not yet been discussed. In this paper we study the tolerancing of a freeform pupilgram
obtained as an SMO solution. We propose a Zernike intensity/distortion modulation method to express pupilgram errors.
This method may be effective for tolerancing analysis and for defining specifications for freeform illumination.
Furthermore, it can be applied to OPE matching of freeform illumination sources.
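As a rough illustration of the intensity-modulation half of such a scheme (a sketch of ours, not the authors' implementation; the annular pupil, grid size, and modulation amplitude are all hypothetical), a nominal pupilgram can be perturbed by a low-order Zernike term:

```python
import numpy as np

def zernike_z4(rho):
    """Z4 (defocus) radial polynomial on the unit pupil, one low-order term of
    the kind that could parameterize pupilgram intensity errors."""
    return np.sqrt(3) * (2 * rho**2 - 1)

# Nominal annular pupilgram sampled on a grid (hypothetical sigma settings).
n = 201
y, x = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
rho = np.hypot(x, y)
pupil = ((rho > 0.7) & (rho <= 0.9)).astype(float)

# Intensity modulation: scale the pupilgram by (1 + eps * Z4).
eps = 0.05  # modulation amplitude (hypothetical)
perturbed = pupil * (1 + eps * zernike_z4(rho))
```

A tolerancing study would then sweep `eps` over several Zernike orders and track the resulting CD/overlay response in an imaging simulator.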
Freeform illumination sources: an experimental study of source-mask optimization for 22-nm SRAM cells
The use of customized illumination modes is part of the pursuit to stretch the applicability of immersion ArF lithography.
Indeed, a specific illumination source shape that is optimized for a particular design leads to enhanced imaging results.
Recently, freeform illumination has become available through pixelated DOEs or through FlexRay™, ASML's
programmable illuminator system, allowing for virtually unconstrained intensity distribution within the source pupil.
In this paper, the benefit of freeform over traditional illumination is evaluated, by applying source mask co-optimization
(SMO) for an aggressive use case, and wafer-based verification. For a 22 nm node SRAM of 0.099 μm² and 0.078 μm²
bit cell area, the patterning of the full contact and metal layer into a hard mask is demonstrated with the application of
SMO and freeform illumination. In this work, both pixelated DOEs and FlexRay are applied. Additionally, the match
between the latter two is confirmed on wafer, in terms of CD and process window.
Double Patterning I
Evaluation of double-patterning techniques for advanced logic nodes
The development of Double-Patterning (DP) techniques continues to push forward aiming to extend the immersion
based lithography below 36 nm half pitch. There are widespread efforts to make DP viable for further scaling of
semiconductor devices. We have developed Develop/Etch/Develop/Etch (DE2) and Double-Expose-Track-Optimized
(DETO) techniques for producing pitch-split patterns capable of supporting semiconductor devices for the 16 nm and 11
nm nodes. The IBM Alliance has established a DETO baseline, in collaboration with ASML, TEL, CNSE, and KLA-Tencor,
to evaluate the manufacturability of DETO by using commercially available resist systems. Presented in this
paper are the long-term performance results of these systems relevant to defectivity, overlay, and CD uniformity.
Actual performance data analysis of overlay, focus, and dose control of an immersion scanner for double patterning
Double patterning requires extremely high accuracy in overlay and high uniformity in CD control. For the 32 nm half
pitch, the error budget requires less than 2 nm overlay and less than 2 nm CD uniformity from the exposure tool. To meet
these requirements, Nikon has developed the NSR-S620D. It includes a new encoder metrology system for precise stage
position measurement. The encoder system provides better repeatability by using a short range optical path. For CD
uniformity control, various factors such as focus control, stage control, and dose control affect the results. Focus
uniformity is evaluated using the phase shift focus monitoring method. The function of "CDU Master" provides dose and
focus correction across the exposure slit, along the scan direction, and across the wafer. Stage synchronization variability
will also influence CD control. In this paper, we will show the actual results and analysis of the overall performance of
the S620D, including exposure results for pitch-splitting double patterning. The S620D has sufficient performance for the 32 nm
half pitch double patterning generation and shows potential for double patterning at the 22 nm half pitch node.
Modeling of double patterning interactions in litho-cure-litho-etch (LCLE) processes
This paper uses advanced modeling techniques to explore interactions between the two lithography processes in a litho-cure-litho-etch (LCLE)
process and to quantify their impact on the final resist profiles and process performance. Specifically, wafer
topography effects due to different optical properties of involved photoresist materials, linewidth variations in the second
lithography step due to partial deprotection of imperfectly cured resist, and acid/quencher diffusion effects between
resist materials are investigated. The paper highlights the results of the simulation work package of the European MD3
project.
Litho and patterning challenges for memory and logic applications at the 22-nm node
In this paper we look into the litho and patterning challenges at the 22nm node. These challenges are different for
memory and logic applications driven by the difference in device layout. In the case of memory, very small pitches and
CDs have to be printed, close to the optical diffraction limit (k1) and resist resolution capability. For random logic
applications, e.g. the printing of SRAM, real pitch-splitting techniques have to be applied for the first time at the 22nm
node, due to the extremely small and compact area and pitch of the SRAM bitcell. Common
challenges are found for the periphery of memory and for random logic SRAM cells: here the Best Focus difference per feature
type limits the Usable Depth of Focus.
Comparative study of resolution limits for double patterning and EUV processes for the 32nm contact hole case
Currently, EUV and double patterning (DP) are competing technologies for the 22nm hp node. The goal of this paper is
to perform a case study and explore resolution limits on a 32nm contact hole array. In order to investigate the resolution
limit for a DP process quantitatively, considering the substrate topography structure is crucial. We applied
wafer-topography/lithography simulation to study the relevant effects in detail. To perform a comparative study between ArF
DP and EUV lithography we first analyzed the resolution limit for DP process. We investigated the performance of a
LDLD and a LFLE (litho-freeze-litho-etch) process by decreasing pitch until the resolution limit was reached. The
possible minimum x-pitch (with y-parallel lines, first mask) is 85nm; the minimum y-pitch generated for the second litho
step with x-parallel lines is 90nm. This x-y anisotropic phenomenon is caused by the second litho step, where oblique
incident light propagating through space regions contributes to total image. The bulk image distribution is sensitive to
the material in the spacer region, therefore further process optimization is possible by tuning material properties.
Alternatively the fabrication of 32nm size contact holes with EUV lithography was simulated. Pattern shift due to
shadowing, aberrations and flare effects have been considered. A pitch of 64nm (1:1) can be realized at low flare levels,
but corrections for shadowing and flare are essential. Based on this quantification, the gap in possible minimum pitch
between DP and EUV is discussed. Furthermore, the relation between DP topography effects and SMO is discussed.
Double Patterning II
Advances in dual-tone development for pitch frequency doubling
Dual-tone development (DTD) has been previously proposed as a potential cost-effective double patterning technique1.
DTD was reported as early as the late 1990s2. The basic principle of dual-tone imaging involves processing exposed
resist latent images in both positive tone (aqueous base) and negative tone (organic solvent) developers. Conceptually,
DTD has attractive cost benefits since it enables pitch doubling without the need for multiple etch steps of patterned
resist layers. While the concept for DTD technique is simple to understand, there are many challenges that must be
overcome and understood in order to make it a manufacturing solution.
Previous work by the authors demonstrated feasibility of DTD imaging for 50nm half-pitch features at 0.80NA (k1 =
0.21) and discussed challenges lying ahead for printing sub-40nm half-pitch features with DTD. While previous
experimental results suggested that clever processing on the wafer track can be used to enable DTD beyond 50nm half-pitch,
they also suggested that identifying suitable resist materials or chemistries is essential for achieving successful imaging
results with novel resist processing methods on the wafer track. In this work, we present recent advances in the search
for resist materials that work in conjunction with novel resist processing methods on the wafer track to enable DTD.
Recent experimental results with new resist chemistries, specifically designed for DTD, are presented in this work. We
also present simulation studies that help identify resist properties that could enable DTD imaging, ultimately
leading to viable DTD resist materials.
Spacer defined double patterning for sub-72 nm pitch logic technology
In order to extend optical lithography into the sub-72 nm pitch regime, spacer defined double
patterning as a self-aligning process option was investigated. In the sidewall defined spacer process, spacer
material was deposited directly on the resist to achieve process simplification and cost effectiveness. For
the spacer defined double patterning, core mandrel CD uniformity is proven to be a main contributor to
pitch-walking and defines a new lithographic process window. Here, the aerial image log-slope is shown to
be a measurable predictor of CD uniformity and sidewall angle of the resist pattern. Through resist
screening and illumination optimization, a resist core-mandrel with 2.5 nm CD uniformity across a focus range
of more than 200 nm at ±3.5% exposure latitude was developed, with sidewall angles close to
normal. Finally, etch results revealed that pitch-walking after pitch split can be suppressed below 2 nm within ±2.5%
exposure latitude.
The impact of optical non-idealities in litho-litho-etch processing
Experimental work reveals that a thermal cure freeze process can alter the refractive index of a 1st
pass LLE resist designed for that purpose. Although negligible change in the real index (n) is observed at the
actinic wavelength, a 20% increase in the imaginary index (k) occurred. It is also experimentally determined
that a second-pass resist coated over a frozen first layer may have a planar or non-planar surface, depending
upon its formulation.
Simulation studies show that a non-planarizing 2nd resist will exhibit lensing effects which result in
the 2nd pass resist feature showing sensitivity to the CD and profile of the embedded resist features. Other
simulations suggest that both non-planar 2nd resist surfaces and mismatching resist n & k values can have a
negative impact on the alignment sensitivity of a LLE double patterning process.
Double patterning lithography study with high overlay accuracy
Double patterning (DP) has become the most likely candidate to extend immersion lithography to the 32 nm node and
beyond. This paper focuses on experimental results of 32nm half pitch patterning using NSR-S620D, the latest Nikon
ArF immersion scanner. A litho-freeze-litho (LFL) process was employed for this experiment. Experimental results of
line CDU, space CDU, and overlay accuracy are presented. Finally, a budget for pitch splitting DP at the 22 nm half
pitch is presented.
Litho-process-litho for 2D 32nm hp Logic and DRAM double patterning
Over the last couple of years a lot of attention has gone to the development of new Litho-Process-Litho-Etch (LPLE)
double patterning process alternatives to Litho-Etch-Litho-Etch (LELE) or Spacer-Defined Double Patterning
(SDDP)[3,5,6]. Much progress has been made on the material side to improve the resolution of these processes and
imaging down to 26nm and even 22 nm 1:1 Lines/Spaces has been demonstrated[1,2,13]. This shows that from a resolution
point of view these processes can bridge the gap between ArF immersion single patterning and EUV lithography. These
results at small pitches are typically obtained using dipole illumination, making them useful for only one pitch and one
orientation. Applying the combination of double patterning and dipole illumination is thus limited to regular line/space
gratings. For this paper, the patterning of more random 2D and through pitch designs is investigated using the double
patterning LPL alternatives for the POLY layer in combination with annular illumination. Fundamental behaviors of the
freezing schemes that affect the patterning performance for logic applications are discussed.
Modeling and exploration of reversible contrast enhancement layers for double exposure lithography
This paper discusses the modeling of reversible contrast enhancement layers (RCEL) for advanced optical lithography.
An efficient implementation of the Waveguide method is employed to investigate the process capability of RCEL and to
identify the most appropriate material and exposure parameters. It is demonstrated that the consideration of near field
diffraction effects and of bleaching dynamics is important to achieve correct results. A large refractive index of the resist
and the RCEL improves the achievable lithographic performance. It is shown that RCELs can be used to enhance
the performance of an NA=0.6 scanner to create high-contrast images with a pitch of 80nm.
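The contrast-enhancement mechanism behind an RCEL can be sketched with a toy Dill-type bleaching model (our illustration, not the paper's Waveguide simulations; all constants are hypothetical): the absorber bleaches fastest where the incident fringe is brightest, so the transmitted image gains contrast.

```python
import numpy as np

def contrast(img):
    """Michelson contrast of a fringe pattern."""
    return (img.max() - img.min()) / (img.max() + img.min())

# Incident low-contrast fringe and an initially opaque absorber layer.
x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
I = 0.5 + 0.4 * np.sin(x)            # incident intensity
M = np.ones_like(x)                  # normalized absorber concentration
a, C, dt = 3.0, 2.0, 0.05            # absorbance, bleach rate, time step

for _ in range(200):                 # expose in small time steps
    I_local = I * np.exp(-a * M)     # intensity surviving the absorber (crude)
    M += -C * I_local * M * dt       # Dill-type bleaching: dM/dt = -C * I * M

I_t = I * np.exp(-a * M)             # transmitted image after exposure
```

Because bright regions bleach the absorber faster, `contrast(I_t)` exceeds `contrast(I)`; the reversibility of the layer is what allows this to be repeated for a second exposure.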
Computational Lithography
Improving aberration control with application specific optimization using computational lithography
As the industry drives to lower k1 imaging we commonly accept the use of higher NA imaging and advanced
illumination conditions. This technology shift has given rise to very exotic pupil spread functions that
have areas of high thermal energy density, creating new modeling and control challenges. Modern scanners are
equipped with advanced lens manipulators that introduce controlled adjustments of the lens elements to counteract the
lens aberrations existing in the system. However, there are some specific non-correctable aberration modes that are
detrimental to important structures. In this paper, we introduce a methodology for minimizing the impact of aberrations
for specific designs at hand. We employ computational lithography to analyze the design being imaged, and then devise
a lens manipulator control scheme aimed at optimizing the aberration level for the specific design. The optimization
scheme does not minimize the overall aberration, but directs the aberration control to optimize the imaging performance,
such as CD control or process window, for the target design. Through computational lithography, we can identify the
aberration modes that are most detrimental to the design, and also correlations between imaging responses of
independent aberration modes. Then an optimization algorithm is applied to determine how to use the lens manipulators
to drive the aberrations modes to levels that are best for the specified imaging performance metric achievable with the
tool. We show an example where this method is applied to an aggressive memory device imaged with an advanced ArF
scanner. We demonstrate with both simulation and experimental data that this application specific tool optimization
successfully compensated for the thermally induced aberrations dynamically, improving the imaging performance
consistently through the lot.
Evaluation of lithographic benefits of using ILT techniques for 22nm-node
As increasing complexity of design and scaling continue to push lithographic imaging to its k1 limit, lithographers
have been developing computational lithography solutions to extend 193nm immersion lithography to the 22nm
technology node. In our paper, we investigate beneficial source and mask solutions with respect to pattern fidelity
and process variation (PV) band performances for 1D through pitch patterns, SRAM and Random Logic Standard
Cells. The performances of two different computational lithography solutions, idealized un-constrained ILT mask
and manhattanized mask rule constrain (MRC) compliant mask, are compared. Additionally performance benefits
for process-window aware hybrid assist feature (AF) are gauged against traditional rule-based AF. The results of
this study will demonstrate the lithographic performance contribution that can be obtained from these mask
optimization techniques in addition to what source optimization can achieve.
A computational method for optimal application specific lens control in microlithography
Application specific aberration as a result of localized heating of lens elements during exposure has become more
significant in recent years due to increasingly low-k1 applications. Modern scanners are equipped with sophisticated lens
manipulators that are optimized and controlled by scanner software in real time to reduce this aberration. Advanced lens
control options can even optimize lens manipulators to achieve better process window and overlay performance for a
given application. This is accomplished by including litho metrics as part of the lens optimization process. Litho metrics
refer to any lithographic properties of interest (e.g., CD variation, image shift) that are sensitive to lens aberrations.
However, there are challenges that prevent effective use of litho metrics in practice. There are often a large number of critical
device features that need monitoring and the associated litho metrics (e.g., CD) generally show strong non-linear
response to Zernikes. These issues greatly complicate the lens control algorithm, making real-time lens optimization
difficult. We have developed a computational method to address these issues. It transforms the complex physical litho
metrics into a compact set of linearized "virtual" litho metrics, ranked by their importance to process window. These
new litho metrics can be readily used by the existing scanner software for lens optimization. Both simulations and
experiments showed that the litho metrics generated by this method improved aberration control.
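The compression step can be sketched as follows (an illustration under assumed data, not the production algorithm): a hypothetical Jacobian of litho-metric responses to Zernike coefficients is reduced by an SVD to a few ranked "virtual" metrics.

```python
import numpy as np

# Hypothetical Jacobian: responses of 50 litho metrics (e.g. per-feature CD
# shifts) to 20 Zernike coefficients, linearized around the nominal lens state.
# A low-rank structure mimics metrics that respond through a few shared modes.
rng = np.random.default_rng(2)
J = rng.standard_normal((50, 5)) @ rng.standard_normal((5, 20))
J += 0.01 * rng.standard_normal((50, 20))    # small independent residuals

# SVD compresses the correlated metrics into a few ranked "virtual" metrics.
U, s, Vt = np.linalg.svd(J, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1   # modes covering 99% of response
J_virtual = Vt[:k]                           # each row: one linearized virtual metric
```

Ranking by singular value (here a stand-in for importance to process window) is what lets the scanner software optimize against a compact, well-conditioned set instead of every raw CD metric.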
Aerial image calculation by eigenvalues and eigenfunctions of a matrix that includes source, pupil, and mask
Partially coherent imaging is formulated using two positive semi-definite matrices that include the mask as
well as the source and pupil. One matrix E is obtained by shifting the pupil function as in Hopkins
transmission cross coefficient (TCC) approach while the other matrix Z is obtained by shifting the mask
diffraction. Although the aerial images obtained by the matrices are identical, it is shown that rank(Z) ≤ rank(E)
= N, where N is the number of point sources in the illumination. Therefore, less than N FFTs are required to
obtain the complete aerial image. Since the matrix Z describes the signal as partitioned into eigenfunctions
orthonormal in the pupil, its eigenvalues can be used to quantify the coherence through the von Neumann
entropy. The entropy shows the degree of coherence in the image, which is dependent on the source, mask
and pupil but independent of aberration.
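The rank and entropy statements can be illustrated with a toy positive semi-definite matrix standing in for Z (our sketch; the paper builds Z from shifted mask diffraction, which is not reproduced here):

```python
import numpy as np

def von_neumann_entropy(psd):
    """Entropy of a positive semi-definite matrix, eigenvalues trace-normalized."""
    evals = np.linalg.eigvalsh(psd)
    evals = evals[evals > 1e-12]      # drop numerically zero modes
    p = evals / evals.sum()           # normalized eigenvalues act as probabilities
    return float(-np.sum(p * np.log(p)))

# Toy PSD matrix built rank-3 so rank(Z) < dimension, mirroring
# rank(Z) <= rank(E) = N for N point sources.
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
Z = A @ A.T
print(np.linalg.matrix_rank(Z))       # 3
print(von_neumann_entropy(Z))         # between 0 and log(3)
```

The entropy is 0 for a fully coherent (rank-1) matrix and reaches log(rank) when all modes carry equal weight, which is the sense in which it quantifies partial coherence.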
Optimization from design rules, source and mask, to full chip with a single computational lithography framework: level-set-methods-based inverse lithography technology (ILT)
For semiconductor manufacturers moving toward advanced technology nodes (32nm, 22nm, and below), lithography
presents a great challenge, because it is fundamentally constrained by basic principles of optical physics. Because no
major lithography hardware improvements are expected over the next couple of years, Computational Lithography has been
recognized by the industry as the key technology needed to drive lithographic performance. This implies not only
simultaneous co-optimization of all the lithographic enhancement tricks that have been learned over the years, but that
they also be pushed to the limit by powerful computational techniques and systems. In this paper a single computational
lithography framework for design, mask, and source co-optimization will be explained in non-mathematical language. A
number of memory and logic device results at the 32nm node and below are presented to demonstrate the benefits of
Level-Set-Method-based ILT in applications covering design rule optimization, SMO, and full-chip correction.
Quadratic blur kernels for latent image formation modeling
A bilinear photoresist model is accurate, fast, and potentially reversible. Similar to other image-processing style (blur
kernel) models, this model represents a transformation of an aerial image into a latent image. The key difference is the
explicit recognition of the non-linearity of the process while retaining common signal processing architecture. By
applying a Volterra series expansion to the reaction-diffusion functional, a high-accuracy representation of the process is
obtained. Several methods for identifying the double-impulse response of the quadratic term of the series are discussed.
Characterization is carried out based on the bi-harmonic signal sampling method of the Bilinear Transfer Function, the
Fourier transform of the double-impulse spread function. Several photoresist systems are characterized, and strong
quadratic behavior is observed for many. The resulting estimated BTFs are presented, and their differences are discussed.
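A minimal sketch of such a quadratic (second-order Volterra) blur model, with hypothetical Gaussian kernels and coefficients rather than calibrated ones, and with the quadratic kernel assumed separable for simplicity:

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian blur kernel."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def latent_image(aerial, h1_sigma=2.0, h2_sigma=4.0, c2=0.3):
    """Truncated Volterra model with a separable quadratic kernel:
    latent = h1 * I + c2 * (h2 * I)**2   (kernels/coefficients are toy values)."""
    h1 = gaussian_kernel(h1_sigma, 8)
    h2 = gaussian_kernel(h2_sigma, 12)
    linear = np.convolve(aerial, h1, mode="same")
    quadratic = c2 * np.convolve(aerial, h2, mode="same") ** 2
    return linear + quadratic

# Crude 1D aerial image: a line/space fringe.
x = np.arange(256)
aerial = 0.5 + 0.5 * np.sin(2 * np.pi * x / 32)
latent = latent_image(aerial)
```

The explicit quadratic term is the model's recognition of process non-linearity while keeping an ordinary signal-processing (blur-kernel) architecture.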
Polarization
In-situ Mueller matrix polarimetry of projection lenses for 193-nm lithography
Show abstract
For immersion lithography with aggressive polarized illumination settings, it is important to construct two new
systems for diagnosing lithography tools: Stokes polarimetry of the illumination and Mueller matrix polarimetry of the
projection lenses. At the SPIE conference on Optical Microlithography XXI in 2008, the authors reported on
the former, Stokes polarimetry, revealing the true polarization states of several illumination settings. The
latter, Mueller matrix polarimetry, is more complicated than Stokes polarimetry and is therefore reported
in two separate papers. A theoretical approach to realizing the polarimetry was
reported at the SPIE conference on Lithography Asia 2009.
The test mask for the Mueller matrix polarimetry also comprises thin-plate polarizers and wide-view-angle quarter-waveplates,
both of which were developed in collaboration with Kogakugiken Corporation in Japan. Mueller matrices of the
sample projection optics are reconstructed from sixteen measurements of the Stokes parameters of a light ray that reaches the
wafer plane through the test mask and the projection optics. The Stokes parameters are measured with a polarization
measurement system already equipped on a side stage lying at the wafer plane. It took about seven hours to capture all
the images at five image heights within the static exposure field. Stokes parameters are automatically calculated from the
images and output from the lithography tool as a text file, and Mueller matrices are then calculated quickly by
home-built software. All the images were captured under the same illumination condition, which the tool manufacturer calls
"un-polarization".
Experimental result of polarization characteristics separation method
The use of high lens numerical aperture for improving the resolution of a lithographic lens requires a high incident angle
of exposure light in resist, which induces vectorial effects. As a result, high NA lithography has become more sensitive
to vectorial effects, and a vectorial fingerprint with higher accuracy has become necessary for effective image forming
simulation. We successfully obtained true polarization characteristics of single optics by separating the effect of
measurement optics, arranged serially, in the measurement optical path. Accuracy of the separated polarization
characteristics of two birefringent test optics, measured on a test bench for verification of the principle, was calculated
to correspond to an OPE simulation error of better than 0.01 nm.
Beyond 22 nm
Implementing and validating double patterning in 22-nm to 16-nm product design and patterning flows
For double-patterning technology (DPT), we study the complex interactions of layout creation, physical design, and design
rule checking flows for the 22nm and 16nm device nodes. Decomposition includes the cutting (splitting) of original
design-intent features into new overlapping polygons where required; and the coloring of all the resulting polygons into
two mask layouts. We discuss the advantages of geometric distribution for polygon operations with a limited range of
influence. Further, we find that even the naturally global coloring step can be handled in a geometrically local manner.
We analyze and compare the latest methods for designing, processing, and verifying DPT layouts at the 22nm
and 16nm nodes.
Comparative study of line width roughness (LWR) in next-generation lithography (NGL) processes
In this paper, we conduct a comprehensive comparative study of next-generation lithography (NGL) processes
in terms of their line width roughness (LWR) performance. We investigate mainstream lithography options
such as double patterning lithography (DPL), self-aligned double patterning (SADP), and extreme ultra-violet
(EUV), as well as alternatives such as directed self-assembly (DSA) and nano-imprint lithography (NIL). Given
the distinctly different processing steps, LWR arises from different sources for these patterning methods, and a
unified, universally applicable set of metrics must be chosen for useful comparisons. For each NGL, we evaluate
the LWR performance in terms of three descriptors, namely, the variation in RMS amplitude (σ), correlation
length, and the roughness exponent (α).
The correlation length (which indicates the distance along the edge beyond which any two linewidth measurements
can be considered independent) for NGL processes is found to range from 8 to 24 nm. It has been
observed that LWR decreases when transferred from resist into the final substrate and all NGL technology options
produce < 5% final LWR. We also compare our results with the 2008 ITRS roadmap. Additionally, for the
first time, spatial frequency transfer characteristics for DSA and SADP are reported. Based on our study,
the roughness exponent (which corresponds to local smoothness) is found to range from ~0.75-0.98; it is close to
being ideal (α = 1) for DSA. Lastly, using EUV as an example, we show the importance of process optimization
as these technologies mature.
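Two of the three descriptors, the RMS amplitude σ and the correlation length, can be computed from an edge trace in a few lines; here is a sketch on synthetic data (our illustration, with hypothetical units and parameters; the roughness exponent requires a PSD fit and is omitted):

```python
import numpy as np

def lwr_descriptors(edge, dx=1.0):
    """RMS roughness sigma and correlation length xi (1/e autocorrelation decay)."""
    dev = edge - edge.mean()
    sigma = dev.std()
    ac = np.correlate(dev, dev, mode="full")[len(dev) - 1:]
    ac /= ac[0]                               # normalize to 1 at lag zero
    below = np.nonzero(ac < 1.0 / np.e)[0]    # first lag past the 1/e decay
    xi = below[0] * dx if below.size else float("nan")
    return sigma, xi

# Synthetic edge trace: white noise smoothed to impose a finite correlation length.
rng = np.random.default_rng(1)
noise = rng.standard_normal(4096)
kernel = np.exp(-np.arange(-50, 51) ** 2 / (2 * 10.0**2))
edge = np.convolve(noise, kernel / kernel.sum(), mode="same")
sigma, xi = lwr_descriptors(edge, dx=1.0)     # xi lands near twice the kernel sigma
```

Lags beyond xi mark where two linewidth measurements can be treated as independent, which is why it matters for the comparisons above.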
Tools and Process Resolution Extensions I
Toward perfect on-wafer pattern placement: stitched overlay exposure tool characterization
Continued lithographic pattern density scaling depends on aggressive overlay error reduction [1,2]. Double patterning
processes planned for the 22nm node require overlay tolerances below 5 nm, at which point even sub-nanometer
contributions must be considered. In this paper we highlight the need to characterize and control the single-layer
matching among the three pattern placement mechanisms intrinsic to step-and-scan exposure: optical imaging, mask-to-wafer
scanning, and field-to-field stepping. Without stable and near-perfect pattern placement on each layer,
nanometer-scale layer-to-layer overlay tolerance is not likely to be achieved. Our approach to understanding on-wafer
pattern placement is based on the well-known technique of stitched field overlay. We analyze dense
sampling around the field perimeter to partition the systematic contributors to pattern placement error on
representative dry and immersion exposure tools.
Impact of scanner signatures on optical proximity correction
Low-pass filtering of mask diffraction orders in the projection tools used in the microelectronics
industry leads to a range of optical proximity effects (OPEs) impacting integrated circuit pattern
images. These predictable OPEs can be corrected with various model-based optical proximity
correction (OPC) methodologies, the success of which strongly depends on the completeness of
the imaging models they use.
Image formation in scanners is driven by the illuminator settings and the projection lens
NA, and is modified by scanner engineering effects due to: 1) the illuminator signature, i.e. the
distributions of illuminator field amplitude and phase; 2) the projection lens signatures,
representing residual lens aberrations and flare; and 3) the reticle and wafer scan
synchronization signatures. For 4x nm integrated circuits, these scanner effects modify the
critical dimensions of the pattern images at a level comparable to the required image tolerances.
Therefore, to reach the required accuracy, OPC models have to embed the scanner illuminator,
projection lens, and synchronization signatures.
To study their effects on imaging, we set up imaging models without and with scanner
signatures, and we used them to predict OPEs and to conduct the OPC of a poly gate level of 4x
nm flash memory. This report presents analysis of the scanner signature impacts on OPEs and
OPCs of critical patterns in the flash memory gate levels.
Overlay characterization and matching of immersion photoclusters
Many factors are driving a significant tightening of the overlay budget for advanced technology nodes, e.g.
6nm [mean + 3σ] for the 22nm node. Exposure tools will be challenged to support this goal, even with tool
dedication. However, tool dedication has an adverse impact on cycle time, line productivity, and cost.
There is a strong desire for tool-to-tool (and chuck-to-chuck) matching performance that supports
the tight overlay budgets without tool dedication. In this paper we report improvements in overlay metrology
test methods and analysis methods which support the needed exposure tool overlay capability.
Topcoat-less resist approach for high volume production and yield enhancement of immersion lithography
Double patterning (DP) is the first candidate for extending ArF immersion lithography, and a topcoat-less (TC-less)
process is an attractive candidate compared to a topcoat process because it makes the DP process simpler and
reduces chip manufacturing cost. To make the DP process viable, TC-less process performance, including defectivity,
auto focus (AF), and overlay, must be validated. Nikon's latest volume-production immersion lithography
tool (S620D) was used for the TC-less process evaluation. While the S620D shows good defectivity results with both topcoat
and TC-less processes at a 700 mm/s scan speed, the TC-less process showed a slight improvement in defectivity compared to
the topcoat process. One reason is that a TC-less process suppresses topcoat-originated defects such as topcoat blisters.
The second reason is that TC-less resist can attain higher hydrophobicity than a topcoat. Higher hydrophobicity is
advantageous for high-speed scanning because of the stable movement of the water meniscus, resulting in better defectivity
performance. Defectivity results showed a clear correlation with the dynamic receding contact angle (D-RCA).
Blob defect reduction is one of the challenges with a TC-less resist process, because the hydrophobic surface repels the rinse
water applied during the development rinse step, generating blob defects. However, recent material
improvements to TC-less resists have overcome this challenge and shown excellent blob defect performance.
Controlling hydrophobicity during the development process is the key factor in defect reduction.
Wafer edge processing is also very important for immersion lithography. The preferred wafer edge treatment for both TC-less
and topcoat processes is to maintain uniform hydrophobicity over the entire wafer, including the wafer edge. While a topcoat
can be removed completely by development, unexposed TC-less resist remains on the wafer edge. A wafer edge
exposure (WEE) process can remove this excess resist after exposure; its effectiveness was confirmed through experimental
results.
AF and overlay repeatability were evaluated on both topcoat and TC-less processes; similar and sufficient performance was
obtained on both. Based on cost-of-ownership calculations, it is believed that a 30% material cost and a 10% track
hardware cost reduction are feasible.
These evaluations provide convincing evidence that the TC-less process is ready for the 32nm generation and beyond.
Analysis of the impact of pupil shape variation by pupil fit modeling
As the k1 factor for mass production of memory devices has decreased to almost its theoretical limit, the lithography process
window is getting much smaller and production yield has become more sensitive to even small
process variations, so it is necessary to control these variations more tightly than ever. In
mass production, it is very hard to extend production capacity if scanner tool-to-tool variation and/or scanner
stability over time is not minimized. One of the most critical sources of variation is the illumination pupil, so it is
critical to qualify the pupil shapes of scanners to control tool-to-tool variations.
Traditionally, pupil shape has been analyzed using classical pupil parameters, but these
basic parameters sometimes cannot distinguish tool-to-tool variations. It has been found that the pupil shape can be
changed by illumination misalignment or damage in the optics, and these changes can have a great effect on critical
dimension (CD), pattern profile, or OPC accuracy. These imaging effects are not captured by the basic pupil parameters.
Correlating CD with pupil parameters will become even more difficult with the introduction of more
complex (freeform) illumination pupils.
In this paper, illumination pupils were analyzed using a more sophisticated parametric pupil description (Pupil Fit
Model, PFM), and the impact of pupil shape variations on CD for critical features is investigated. Tool-to-tool
mismatch in the gate layer of a 4X memory device is demonstrated as an example. We also identify which
parameters CD is most sensitive to for different applications. The more sophisticated parametric pupil
description was found to be much better than the traditional approach to pupil control. However, our examples also show that
tool-to-tool pupil variation and the pupil variation of a scanner over time cannot be adequately monitored by pupil
parameters alone. The best pupil control strategy is a combination of pupil parameters and CD simulated using measured
or modeled illumination pupils.
Tools and Process Resolution Extensions II
Predicting and reducing substrate induced focus error
The ever shrinking lithography process window requires us to maximize our process window and minimize tool-induced
process variation, and also to quantify the disturbances to an imaging process caused upstream of the imaging step.
Relevant factors include across-wafer and wafer-to-wafer film thickness variation, wafer flatness, wafer edge effects,
and design-induced topography. We quantify these effects and their interactions, and present efforts to reduce their harm
to the imaging process. We also present our effort to predict design-induced focus error hot spots at the edge of our
process window. The collaborative effort is geared towards enabling a constructive discussion with our design team, thus
allowing us to prevent or mitigate focus error hot spots upstream of the imaging process.
Lithographic scanner stability improvements through advanced metrology and control
Holistic lithography is needed to cope with decreasing process windows and is built on three pillars: Scanner Tuning,
Computational Lithography, and Metrology & Control. The relative importance of stability to overall manufacturing
process latitude is increasing. Overlay and focus stability control applications are important elements in improving the stability
of the lithographic process. The control applications rely on advanced control algorithms and fast and precise metrology.
To address the metrology needs at the 32 nm node and beyond, an optical scatterometry tool was developed capable of
measuring CD and focus-dose as well as overlay. Besides stability and control of lithographic performance, scanner
matching is also a critical enabler, where application development and metrology performance are key. In this paper we
discuss the design and performance of the metrology tool, the focus and overlay control application and the application
of scatterometry in scanner matching solutions.
Printing the metal and contact layers for the 32- and 22-nm node: comparing positive and negative tone development process
A strong demand exists for techniques that can further extend the application of ArF immersion lithography. Besides
techniques like litho-friendly design, dual exposure or patterning schemes, and customized illumination modes,
alternative processing schemes are also viable candidates for reaching this goal. One of the most promising alternative process
flows uses image reversal by means of a negative tone development (NTD) step with a FUJIFILM solvent-based
developer. Traditionally, the printing of contacts and trenches is done by using a dark field mask in combination with
positive tone resist and positive tone development. With NTD, the same features can be printed in positive resist using a
light field mask, and consequently with a much better image contrast.
In this paper, we present an overview of applications for the NTD technique, both for trench and contact patterning,
comparing the NTD performance to that of the traditional positive tone development (PTD). This experimental work was
performed on an ASML Twinscan XT:1900i scanner at 1.35 NA, and targets the contact/metal layers of the 32 & 22 nm
node. For contact hole printing, we consider both single and dual exposure schemes for regular arrays and 2D patterns.
For trench printing, we compare the NTD and PTD performance for one-dimensional patterns, line ends, and two-dimensional
structures. We also assess the etch capability and CDU performance of the NTD process.
This experimental study proves the added value of the NTD scheme. For contacts and trenches, it allows achieving a
broader pitch range and/or smaller litho targets, which makes this process flow attractive for the most advanced
lithography applications, including double patterning.
The impacts of scanner modeling parameters for OPC model of sub-40nm memory device
It is necessary to apply extreme illumination conditions to real devices as the minimum feature size shrinks. As k1
decreases, ultra-extreme illumination has to be used. With such illumination, however, CD and
process windows fluctuate dramatically as the pupil shape changes slightly. Over the past several years, Pupil Fit Modeling
(PFM) has been developed to analyze pupil shape parameters that are independent of each other. The first
objective of this work is to distinguish the pupil shapes of different scanners by separating more parameters. Pupil
parameter analysis makes the major factors behind CD or process window differences between two scanner systems
apparent, and because pupil parameters correlate with scanner knobs, it clearly
identifies which scanner knob should be compensated. The second objective is to define a specification for each parameter
using a CD budget analysis per pupil parameter. By periodically monitoring the pupil parameters against this
specification, scanner systems in production lines can be maintained in an ideal state. Additionally, OPC model
accuracy enhancement can be obtained by using a highly accurate fitted pupil model. Other applications of the
pupil model for improving OPC and model-based verification accuracy have recently been reported; for example, modeling
using average optics and hot spot detection with scanner-specific models are easily adopted using the pupil fit model.
Therefore, applying pupil fit parameters to process models is very useful for improving model accuracy.
In our study, the degree of model accuracy enhancement from PFM is investigated and analyzed. OPC and
hotspot detection capability results with the pupil fit model are shown. Trends in CD and
process window for each scanner parameter are also evaluated using the pupil fit model. As a result, we were able to find
which pupil parameters influence critical layer CD, and applying this result yielded better accuracy in
detecting hotspots for model-based verification.
Simulation-based pattern matching using scanner metrology and design data to reduce reliance on CD metrology
Scanner matching based on wafer data has proven to be successful in the past years, but its adoption into production has
been hampered by the significant time and cost overhead involved in obtaining large amounts of statistically precise
wafer CD data. In this work, we explore the possibility of optical model based scanner matching that maximizes the use
of scanner metrology and design data and minimizes the reliance on wafer CD metrology.
A case study was conducted to match an ASML ArF immersion scanner to an ArF dry scanner for a 6Xnm technology
node. We used the traditional, resist model based matching method calibrated with extensive wafer CD measurements
and derived a baseline scanner manipulator adjustment recipe. We then compared this baseline scanner-matching recipe
to two other recipes that were obtained from the new, optical model based matching method. In the following sections,
we describe the implementation of both methods, provide their predicted and actual improvements after matching, and
compare the ratio of performance to the workload of the methods. The paper concludes with a set of recommendations
on the relative merits of each method for a variety of use cases.
The GridMapper challenge: how to integrate into manufacturing for reduced overlay error
More sophisticated corrections of overlay error are required because of the challenge caused by technology
scaling faster than fundamental tool improvements. Starting at the 45 nm node, the gap between the matched-machine
overlay error (MMO) and the technology requirement has decreased to the point where additional
overlay correction methods are needed. This paper focuses on the steps we have taken to enable
GridMapper™, which is offered by ASML, as a method to reduce overlay error.
The paper reviews the basic challenges of overlay error and previous standard correction practices. It then
describes implementation of GridMapper into IBM's 300 mm fabrication facility. This paper also describes
the challenges we faced and the improvements in overlay control observed with the use of this technique.
Specifically, this paper will illustrate several improvements:
1. Minimization of non-linear grid signature differences between tools
2. Optimization of overlay corrections across all fields
3. Decreased grid errors, even on levels not using GridMapper
4. Maintenance of the grid for the lifetime of a product
5. Effectiveness in manufacturing - cycle time, automated corrections for tool grid signature changes
and overlay performance similar to dedicated chuck performance
Simultaneous optimization of dose and focus controls in advanced ArF immersion scanners
We have developed a new scheme of process control combining a CD metrology system and an exposure tool. A new
model based on Neural Networks has been created in KLA-Tencor's "KT Analyzer" which calculates the dose and
focus errors simultaneously from CD parameters, such as mid CD and height information, measured by a scatterometry
(OCD) measurement tool. The accuracy of this new model was confirmed by experiment. Nikon's "CDU master" then
calculated the control parameters for dose and focus for each field from the dose and focus error data of a reference
wafer provided by KT Analyzer. Using the corrected parameters for dose and focus from CDU master, we exposed
wafers on an NSR-S610C (ArF immersion scanner), and measured the CDU on a KLA SCD100 (OCD tool). As a result,
we confirmed that CDU across the entire wafer can be improved by more than 60% (from 3.36nm (3σ) to 1.28nm (3σ)).
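As a quick sanity check on the reported numbers, the quoted improvement follows directly from the two 3σ values:

```python
# CDU gain quoted in the abstract: 3.36 nm (3-sigma) down to 1.28 nm (3-sigma)
before, after = 3.36, 1.28
improvement = (before - after) / before
print(f"{improvement:.1%}")  # about 61.9%, i.e. "more than 60%"
```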
Mask, Layout, and OPC
Novel fine-tuned model-based SRAF generation method using coherence map
We have developed a comprehensive sub-resolution assist feature (SRAF) generation method based on
modulation of the coherence map. The method breaks the trade-off between processing time and
accuracy of SRAF generation. We applied this method to a real device layout, and the average Process
Variation band width (PV band width) improved by 40% without any processing time loss.
Ultimately accurate SRAF replacement for practical phases using an adaptive search algorithm based on the optimal gradient method
The SRAF (Sub-Resolution Assist Feature) technique has been widely used for DOF enhancement. Below the 40nm design
node, even when the SRAF technique is used, the resolution limit is approached due to the use of hyper-NA
imaging or low-k1 lithography conditions, especially for the contact layer. As a result, complex or random
layout patterns, such as logic data or intermediate-pitch patterns, become increasingly sensitive to photoresist pattern fidelity. This
means that more accurate resolution techniques are increasingly needed to cope with lithographic patterning
fidelity issues under low-k1 conditions. To address these issues, new SRAF techniques such as model-based SRAF
using an interference map, or inverse lithography techniques, have been proposed. But these approaches do not sufficiently
guarantee accuracy or performance, because the ideal mask they generate is lost when switching to a
manufacturable mask with Manhattan structures. As a result, it can be very hard to put them into practice in a
production flow.
In this paper, we propose a novel method for extremely accurate SRAF placement using an adaptive search algorithm.
In this method, the initial SRAF positions are generated by traditional SRAF placement, such as rule-based SRAF,
and are then adjusted by an adaptive algorithm that evaluates lithography simulations. This method has three advantages:
precision, efficiency, and industrial applicability. First, the lithography simulation uses an actual
computational model that considers the process window, so our proposed method can precisely adjust the SRAF positions
and thereby find the best ones. Second, because our adaptive algorithm is based on the optimal
gradient method, a very simple rectilinear search, the SRAF positions can be adjusted with high
efficiency. Third, our proposed method, which builds on traditional SRAF placement, is easy to integrate into
established workflows. These advantages make it possible to give traditional SRAF placement a new lease of life at
low k1.
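A minimal sketch of the adaptive-search idea described above: each SRAF offset is nudged along the finite-difference gradient of a figure of merit, one axis at a time (the rectilinear search). The `merit` function here is a hypothetical stand-in for the actual lithography simulation, and all names, starting positions, and step sizes are illustrative assumptions.

```python
def merit(offsets):
    # hypothetical smooth figure of merit peaking at a 60 nm offset per SRAF;
    # in practice this would be a process-window metric from lithography simulation
    return -sum((x - 60.0) ** 2 for x in offsets)

def refine_sraf(offsets, step=1.0, h=0.1, iters=200):
    """Adjust SRAF offsets by simple finite-difference gradient ascent."""
    x = list(offsets)
    for _ in range(iters):
        for i in range(len(x)):                     # rectilinear: one axis at a time
            x[i] += h
            up = merit(x)
            x[i] -= 2 * h
            down = merit(x)
            x[i] += h                               # restore position
            grad = (up - down) / (2 * h)            # central difference
            x[i] += step * max(-1.0, min(1.0, 0.01 * grad))  # clipped update
    return x

initial = [50.0, 75.0]   # rule-based starting offsets (nm), illustrative
tuned = refine_sraf(initial)
```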
22nm logic lithography in the presence of local interconnect
The 22nm logic node is being approached from at least two different scaling paths. One approach "B" will use Gate and
1x Metal pitches of approximately 80nm, which, combined with the appropriate design style, may allow single exposure
to be used. The other combination under consideration "A" will have a Gate pitch of ~90nm and a 1x Metal pitch of
70nm. Even with immersion scanners, the Rayleigh k1 factor is below 0.32 for 90nm pitch and below the single exposure
resolution limits when the pitch is below 80nm.
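The quoted limits follow from the Rayleigh relation k1 = (pitch/2)·NA/λ. A quick check at 1.35 NA and 193 nm (standard ArF immersion values, assumed here):

```python
NA, WAVELENGTH = 1.35, 193.0  # ArF immersion

def k1(pitch_nm):
    # Rayleigh factor for the half-pitch: k1 = (pitch/2) * NA / wavelength
    return (pitch_nm / 2.0) * NA / WAVELENGTH

print(round(k1(90), 3))  # 90 nm pitch -> 0.315, just below 0.32
print(round(k1(80), 3))  # 80 nm pitch -> 0.28
```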
Although highly regular gridded patterns help [1,2,3], one of the critical issues for 22nm patterning is Contact and Via
patterning. The lines / cuts approach works well for the poly and interconnect layers, but the "hole" layers have less
benefit from gridded designs and remain a big challenge for patterning.
One approach to reduce the lithography optimization problem is to reconsider the interconnection stack. The Contact
layer is complex because it is connecting two layers on the bottom - Active and Gate - to one layer on the top. Other
layers such as Via-1 only have one layer on the bottom.
A potential solution is a Local Interconnect layer. This layer could be formed as part of the salicide process module,
where a patterned etch would replace the blanket strip of un-reacted metal of the silicide layer. Local interconnect lines
would run parallel to the Gate electrodes, eliminating "wrong-way" lines in the Active layer. Depending on the final
pitch chosen, Local Interconnect could be single or double patterned, or could be done with a self-aligned process plus a
cut mask.
Example layouts of standard cells have shown a significant benefit with local interconnects. For example, the Contact
count is reduced by ~25%, and in many cases Via-1 and Metal-2 usage was eliminated.
The simplified Active pattern, along with reduced contact count and density, permits a different lithography optimization
for the cells designed with Local Interconnect. Metal-1 complexity was also reduced. Details of lithography optimization
results for critical layers, Active, Gate, Local Interconnect, Contact, and Metal-1 will be presented.
Lithography and layout co-optimization beyond conventional OPC concept
Instead of conventional SMO, which iterates between illumination source optimization and OPC, a new optimization method is
introduced that optimizes the illumination source and device layout simultaneously. In this method the layout is described by
a function of layout parameters that defines the layout characteristics, and the layout parameters are combined with
source parameters to form a composite optimization space. In this space the source and layout are optimized
simultaneously. This method can follow the steepest slope toward the solution in this space during optimization, which is
impossible for conventional SMO, so it can reach the true solution with less probability of being trapped in a local
optimum. This technology is applied to several lithography targets such as CD and DOF, and good results are
attained with a very simple mask. It also works for diagonal patterns that OPC cannot handle easily. In addition, a more
complicated lithography target, robustness against the MSD of scanner stage vibration, is addressed, and the
optimization result is useful for resolving problems caused by manufacturing fluctuations.
Mask enhancer technology for sub-100nm pitch random logic layout contact hole fabrication
We have proposed a new resolution enhancement technology using an attenuated mask with a phase-shifting aperture, named
"Mask Enhancer", for random-logic contact hole pattern printing. In this study, we apply the Mask Enhancer to sub-100nm
pitch contact hole printing with a 1.35NA ArF immersion lithography tool, and confirm that the Mask Enhancer can improve
MEEF at the resolution limit and DOF in the semi-dense and isolated pitch regions. We demonstrate printing fine 100nm pitch
contact arrays and isolated contacts simultaneously with a MEEF of less than 4 using the Mask Enhancer, and show that it is
one of the most effective solutions for random-logic contact hole fabrication at the 28nm node and below.
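MEEF here follows its standard definition: the wafer-CD change per unit mask-CD change, with the mask CD quoted at wafer scale (divided by the 4x reduction). A minimal illustration with made-up numbers:

```python
def meef(d_cd_wafer_nm, d_cd_mask_nm, reduction=4.0):
    # mask error enhancement factor: wafer CD shift per 1x-scale mask CD shift
    return d_cd_wafer_nm / (d_cd_mask_nm / reduction)

# a 4 nm mask bias (1 nm at wafer scale) producing a 3 nm wafer CD shift
print(meef(3.0, 4.0))  # -> 3.0, below the MEEF < 4 reported above
```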
Suppressing ringing effects from very strong off-axis illumination with novel OPC approaches for low k1 lithography
With the delay in commercialization of EUV and the abandonment of high index immersion, Fabs are
trying to put half nodes into production by pushing the k1 factor of the existing scanner tool base as
low as possible. A main technique for lowering the lithographic k1 factor is moving to very strong off-axis
illumination (i.e., illumination with a high outer sigma and a narrow range of illumination angles),
such as quadrupole (e.g., C-Quad), custom, or even dipole illumination schemes. OPC has generally
succeeded to date with either rules-based or simple model-based dissection together with target point
placement schemes. Very strong off-axis illumination, however, creates pronounced ringing effects
on 2D layout and this makes these simpler dissection techniques problematic. In particular, it is hard
to prevent overshoot of the contour around corners while simultaneously dampening out the ringing
further down the feature length. In principle, a sufficiently complex set of rules could be defined to
solve this issue, but in practice this starts to become un-manageable as the time needed to generate a
usable recipe becomes too long. Previous implementations of inverse lithography demonstrated that
good CD control is possible, but at the expense of the mask costs and other mask synthesis
complications/limitations. This paper first analyzes the phenomenon of ringing and the limitations
seen with existing simpler target placement techniques. Then, different methods of compensation are
discussed. Finally, some encouraging results are shown with new traditional and inverse
experimental techniques that the authors have investigated, some of which only demand incremental
changes to the existing OPC framework. The results show that new OPC techniques can be used to
enable successful use of very strong off-axis illumination conditions in many cases, to further reduce
lithographic k1 limits.
Modeling
Three-dimensional physical photoresist model calibration and profile-based pattern verification
In this paper, we report large scale three-dimensional photoresist model calibration and validation
results for critical layer models that span 32 nm, 28 nm and 22 nm technology nodes. Although
methods for calibrating physical photoresist models have been reported previously, we are unaware
of any that leverage data sets typically used for building empirical mask shape correction models.
A method to calibrate and verify physical resist models that uses contour model calibration data sets
in conjunction with scanning electron microscope profiles and atomic force microscope profiles is
discussed. In addition, we explore ways in which three-dimensional physical resist models can be
used to complement and extend pattern hot-spot detection in a mask shape validation flow.
The feasibility of using image parameters for test pattern selection during OPC model calibration
Model based optical proximity correction (MB-OPC) is essential for the production of advanced integrated circuits
(ICs). Calibration of these OPC resist models uses empirical fitting of measured test pattern data. It seems logical
that to produce OPC models, acquiring more data will always improve the OPC model accuracy; on the other hand,
reducing metrology and model build time is also a critical and continually escalating requirement with the constant
increase in the complexity of the IC development process. A trade-off must therefore be made to obtain an adequate
number of data points that produces accurate OPC models without overloading the metrology tools and resources.
In this paper, we examine the feasibility of using image parameters (IPs) to select test patterns. One
approach is to base our test pattern selection only on the IPs and verify that the resulting OPC model is accurate.
Another approach is to reduce the data gradually in steps using IP considerations and see how the OPC
model performance changes. A third, compromise approach is to specify a test pattern set based on IPs and add to
that set a few patterns based on other considerations. The three approaches and their results are presented in
detail in this paper.
Optical proximity correction enhancement by using model based fragmentation approaches
As the industry progresses toward smaller patterning nodes with tighter CD error budgets and narrower process
windows, the ability to control pattern quality becomes a critical, yield-limiting factor. In addition, as the feature size of
design layouts continues to decrease at 32nm and below, optical proximity correction (OPC) technology becomes more
complex and more difficult. From a lithographic point of view, it is most important that the patterns are printed as
designed. However, unfavorable localized CD variation can be induced by the lithography process, which will cause
catastrophic patterning failures (i.e. ripple effects, and severe necking or bridging phenomenon) through process
variation. It is becoming even more severe with strong off-axis illumination conditions and other resolution enhancement
techniques (RETs). Traditionally, such variation can be reduced by optimizing the rule-based edge fragmentation in the OPC setup,
but this fragmentation optimization is very dependent on the engineer's skill. Most fragmentation is based on a set of
simple rules, but those rules may not be robust for every possible design shape.
In this paper, a model-based approach for solving these imaging distortions has been tested in place of the previous
rule-based one. The model-based approach is an automatic correction technique that reduces the complexity of the OPC recipe
by automatically adjusting fragment lengths along with feedback values at every OPC iteration
for better convergence. The stability and coverage of this model-based approach have been tested across various
layout cases.
Automation of sample plan creation for process model calibration
The process of preparing a sample plan for optical and resist model calibration has always been tedious, not only
because it must accurately represent full-chip designs with countless combinations of widths, spaces, and
environments, but also because of constraints imposed by metrology that may limit the number of
structures to be measured. There are also limits on the types of these structures, mainly due to the
accuracy variation across different geometries. For instance, pitch measurements are normally more accurate
than corner rounding measurements, so mostly only certain geometrical shapes are considered when creating a sample plan.
In addition, the time factor is becoming crucial as we migrate from one technology node to another, due to the increase in the
number of development and production nodes; the process becomes even more complicated if process-window-aware
models are to be developed in a reasonable time frame. There is thus a need for reliable methods of choosing sample plans
that also help reduce cycle time.
In this context, an automated flow is proposed for sample plan creation. Once the illumination and film stack are defined,
all errors in the input data are fixed and the sites are centered. Bad sites are then excluded, and the clean data
are reduced based on geometrical resemblance. An editable database of measurement-reliable and critical structures
is also provided, and its share of the final sample plan, as well as the total number of 1D/2D samples, can be
predefined. The flow eliminates manual selection and filtering, provides powerful tools for customizing the final
plan, and greatly reduces the time needed to generate these plans.
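The reduction steps of such a flow can be sketched in outline. The following is a minimal illustration, not the paper's implementation: the site schema (`ok`, `kind`, `width`, `space`), the 5 nm bucketing used as a stand-in for "geometrical resemblance", and the 1D/2D quotas are all hypothetical.

```python
from collections import defaultdict

def build_sample_plan(sites, max_1d=3, max_2d=2):
    """Reduce raw gauge data to a sample plan (hypothetical schema).

    Each site is a dict with 'ok' (passed data cleaning), 'kind'
    ('1D' or '2D'), and a geometric signature ('width', 'space').
    """
    # Step 1: exclude bad sites flagged during data cleaning.
    clean = [s for s in sites if s["ok"]]

    # Step 2: collapse geometrically similar sites, e.g. bucket
    # width/space to a 5 nm grid and keep one representative per bucket.
    buckets = {}
    for s in clean:
        key = (s["kind"], round(s["width"] / 5), round(s["space"] / 5))
        buckets.setdefault(key, s)
    reduced = list(buckets.values())

    # Step 3: enforce the predefined 1D/2D mix of the final plan.
    by_kind = defaultdict(list)
    for s in reduced:
        by_kind[s["kind"]].append(s)
    return by_kind["1D"][:max_1d] + by_kind["2D"][:max_2d]
```

In this sketch, two sites whose width and space land in the same 5 nm bucket are treated as geometrically resembling each other, so only one representative survives the reduction.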
Source-Mask Optimization
SMO for 28-nm logic device and beyond: impact of source and mask complexity on lithography performance
Show abstract
This paper investigates the application of source-mask optimization (SMO) techniques for 28 nm logic device and
beyond. We systematically study the impact of source and mask complexity on lithography performance. For the source,
we compare SMO results for the new programmable illuminator (ASML's FlexRay) and standard diffractive optical
elements (DOEs). For the mask, we compare different mask-complexity SMO results by enforcing the sub-resolution
assist feature (SRAF or scattering bar) configuration to be either rectangular or freeform style while varying the mask
manufacturing rule check (MRC) criteria. As a lithography performance metric, we evaluate the process windows and
MEEF with different source and mask complexity through different k1 values. Mask manufacturability and mask writing
time are also examined. With the results, the cost effective approaches for logic device production are shown, based on
the balance between lithography performance and source/mask (OPC/SRAF) complexity.
Illumination optics for source-mask optimization
Show abstract
Source Mask Optimization (SMO) [1] is proposed and being developed for the 32 nm generation and beyond in order to
extend the dose/focus margin by simultaneous optimization of the illuminator source shape and a customized mask.
Mask optimization techniques have been improving for several years now. At the same time, the flexibility of the
illuminator must also be improved, leading to more complex illumination shapes. As a result, the pupil fill is moving from a
parametric model defined by sigma value, ratio, clocking angle, subtended angle and/or pole balance, to a freeform
condition with gray-scale light intensities defined across the illuminator pupil. We have evaluated an intelligent
illuminator designed to meet the requirements of SMO, and have confirmed the controllability of the pupilgram.
Considerations in source-mask optimization for logic applications
Show abstract
In the low-k1 regime, optical lithography can be extended further to its limits by advanced computational lithography
technologies such as Source-Mask Optimization (SMO), without applying costly double patterning techniques. By
co-optimizing the source and mask together and exploiting new capabilities of advanced source and mask manufacturing,
SMO promises to deliver the desired scaling with reasonable lithography performance. This paper analyzes the
important considerations when applying the SMO approach to global source optimization in random logic applications.
SMO needs to use realistic and practical cost functions and to model the lithography process with accurate process data.
Through the concept of the source point impact factor (SPIF), this study shows how optimization outputs depend on SMO
inputs, such as the limiting patterns used in the optimization. The paper also discusses the modeling requirements of
lithography processes in SMO, and shows how resist blur affects the optimization solutions. Using a logic test case as
an example, the optimized pixelated source is compared in verification with the non-optimized source and with other
optimized parametric sources. These results demonstrate the importance of these considerations in achieving the best
possible SMO results that can be applied successfully to the targeted lithography process.
Challenges for low-k1 lithography in logic devices by source mask co-optimization
Show abstract
Through simulation and experiment, we evaluate the process-window improvement delivered by source-only
optimization, mask-only optimization, and source-mask co-optimization. The results demonstrate that SMO is the
most effective, and that applying a free-form source is also effective. Additionally, SMO with a calibrated
resist model is found to be highly predictive. We then show that SMO provides a reasonable process window for
the 28-nm and 22-nm nodes.
A GPU-based full-chip source-mask optimization solution
Show abstract
A simultaneous optimization of source and mask with full-chip capability is presented. To provide full-chip processing
capability, the solution is intentionally based on GPUs as well as CPUs and made scalable to large clusters while
maintaining convergence. The approach uses a proprietary search algorithm to converge to an optimal solution in the
sense of print quality maximization while obeying existing mask manufacturing, lithography equipment and process
technology constraints. The solution is based on a proprietary optimization function that is applicable to both binary and
phase shift masks.
Source mask optimization for advanced lithography nodes
Show abstract
Source mask optimization is becoming increasingly important for advanced lithography nodes. In this paper, we
present several source mask optimization flows with increasing levels of complexity. The first flow deals with
parametric source shapes. Here, for every candidate source, we start by placing model-based assist features using
inverse mask technology (IMT). We then perform a co-optimization of the main feature (for OPC) and the assist
features (for printability). Finally, we do a statistical analysis of several lithography process metrics to
determine the quality of the solution, which can be used as feedback to determine the next candidate source. In
the second flow, the parametric source is instead approximated by a pixel-based source inverter, providing a fast
and efficient way of exploring the source solution space. The final flow consists of pixelated source shapes
realizable via DOEs or programmable illumination.
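The first flow described above is, at its core, a loop over candidate sources. The following skeleton is illustrative only: `place_afs`, `co_optimize`, and `litho_metrics` are hypothetical stand-ins for the proprietary IMT placement, co-optimization, and statistical-analysis steps, and the greedy selection is an assumption.

```python
def optimize_source(candidate_sources, place_afs, co_optimize, litho_metrics):
    """Greedy loop over parametric source candidates (sketch of flow 1).

    All three callables are stand-ins for proprietary steps: model-based
    AF placement, main/assist co-optimization, and statistical analysis
    of lithography metrics returning a scalar quality score.
    """
    best_source, best_score = None, float("-inf")
    for source in candidate_sources:
        mask = place_afs(source)             # model-based assist features (IMT)
        mask = co_optimize(source, mask)     # OPC on main + printability on AFs
        score = litho_metrics(source, mask)  # e.g. common process-window score
        if score > best_score:
            best_source, best_score = source, score
    return best_source, best_score
```

In a real flow the metric analysis would also feed back into how the next candidate source is generated, rather than simply ranking a fixed list.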
Tools
Towards ultimate optical lithography with NXT:1950i dual stage immersion platform
Show abstract
Optical lithography, currently being used for 45-nm semiconductor devices, is expected to be extended further towards
the 32-nm and 22-nm nodes. A further increase of the lens NA will not be possible, but fortunately the shrink can be
enabled with new resolution enhancement methods like source mask optimization (SMO) and double patterning techniques
(DPT). These new applications lower the k1 dramatically and require very tight overlay control and CD control to be
successful. In addition, overall cost per wafer needs to be lowered to make the production of semiconductor devices
acceptable. For this ultimate era of optical lithography we have developed the next generation dual stage NXT:1950i
immersion platform. This system delivers wafer throughput of 175 wafers per hour together with an overlay of 2.5nm.
Several extensions are offered enabling 200 wafers per hour and improved imaging and on product overlay.
The high productivity is achieved using a dual wafer stage with planar motor that enables a high acceleration and high
scan speed. With the dual stage concept wafer metrology is performed in parallel with the wafer exposure. The free
moving planar stage has reduced overhead during chuck exchange which also improves litho tool productivity.
In general, overlay contributors are coming from the lithography system, the mask and the processing. Main contributors
for the scanner system are thermal wafer and stage control, lens aberration control, stage positioning and alignment. The
backbone of the NXT:1950i's enhanced overlay performance is a novel short-beam, fixed-length encoder grid-plate
positioning system. By eliminating the variable-length interferometer system used in previous-generation scanners, the
sensitivity to thermal and flow disturbances is largely reduced. The alignment accuracy and the alignment sensitivity
for process layers are improved with the SMASH alignment sensor. A high number of alignment marker pairs can be
used without throughput loss; furthermore, the GridMapper functionality, which uses the inter-die and intra-die
correction capability of the scanner, can reduce overlay errors coming from the mask and the process without impacting productivity.
In this paper we will present the main design features and discuss the system performance of the NXT:1950i system,
focusing on the improvements made in overlay and productivity. We will show data on imaging, overlay, focus and
productivity supporting the 3X-nm node and we will discuss next improvement steps towards the 2X-nm node.
Latest performance of immersion scanner S620D with the Streamlign platform for the double patterning generation
Show abstract
Currently, it is considered that one of the most favorable options for the 32 nm HP node is pitch-splitting double
patterning, which requires the lithography tool to achieve high productivity and high overlay accuracy simultaneously. In
the previous work [1], we described the concepts and the technical features of Nikon's immersion scanner based on our
newly developed platform, Streamlign, designed for 2nm overlay, 200wph throughput, and short setup time. In this
paper, we present the latest actual performance of S620D with the Streamlign platform.
Owing to the high repeatability of our new encoder metrology system, Bird's Eye Control, and Stream Alignment,
S620D achieves less than 2 nm overlay accuracy, less than 15 nm focus accuracy, and successful 32 nm and 22 nm L/S
pitch-splitting double patterning exposures. Furthermore, the results at scanning speeds up to 700 mm/s are good, and we
have successfully demonstrated over 4,000 wpd throughput, which confirms the potential for high productivity. Nikon
has developed Streamlign as an optimized, long-life platform based on the upgradable Modular2 structure for
upcoming generations. The performance of S620D indicates the possibility of extending immersion lithography down to
the 22 nm HP node and beyond.
Performance of FlexRay: a fully programmable illumination system for generation of freeform sources on high NA immersion systems
Show abstract
This paper describes the principle and performance of FlexRay, a fully programmable illuminator for high NA
immersion systems. Sources can be generated on demand, by manipulating an array of mirrors instead of the traditional
way of inserting optical elements and changing lens positions. On demand (freeform) source availability allows for
reduction in R&D cycle time and shrink in k1. Unlimited tuning allows for better machine to machine matching.
FlexRay has been integrated in a 1.35 NA TWINSCAN exposure system. We will present data of FlexRay using
measured traditional and freeform illumination sources. In addition, system performance qualification data on stability,
reproducibility and imaging will be shown. The benefit of FlexRay for SMO-enabled shrink is demonstrated using an
SRAM example.
High reliability ArF light source for double patterning immersion lithography
Show abstract
Double patterning lithography places significant demands not only on the optical performance of the light source
(higher power, improved parametric stability), but also on high uptime in order to meet the higher throughput
requirements of the litho cell. In this paper, we will describe the challenges faced in delivering improved
performance while achieving better reliability and resultant uptime as embodied in the XLR 600ix light source from
Cymer, announced one year ago. Data from extended life testing at 90W operation will be shown to illustrate these
improvements.
Advanced imaging with 1.35 NA immersion systems for volume production
Show abstract
The semiconductor industry has adopted water-based immersion technology as the mainstream high-end litho enabler
for 5x-nm and 4x-nm devices. Exposure systems with a maximum lens NA of 1.35 have been used in volume
production since 2007, and today achieve production levels of more than 3400 exposed wafers per day. Meanwhile,
production of memory devices is moving to 3x-nm, and to enable 38-nm printing with single exposure, a 2nd-generation
1.35-NA immersion system (XT:1950Hi) is being used. Further optical extensions towards 32-nm and below are
supported by a 3rd generation immersion tool (NXT:1950i).
This paper reviews the maturity of immersion technology by analyzing productivity, robust control of imaging, overlay
and defectivity performance using the mainstream ArF immersion production systems. We will present the latest results
and improvements on robust CD control of mainstream 4x-nm memory applications. Overlay performance, including
on-product overlay control is discussed. Immersion defect performance is optimized for several resist processes and
further reduced to ensure high yield chip production even when exposing more than 15 immersion layers.
Poster Session: Computational Lithography
The impact of resist model on mask 3D simulation accuracy beyond 40nm node memory patterns
Show abstract
Beyond the 40nm lithography node, mask topography is important in the litho process. Rigorous EMF simulation should
be applied, but it is computationally very expensive. In this work, we compared experimental data with aerial images
from thin- and thick-mask models to find patterns which are sensitive to mask topography effects and therefore need
rigorous EMF simulation. Furthermore, full physical and simplified lumped (LPM) resist models were calibrated for both
2D and 3D mask models. The accuracy of CD prediction and the run-times are listed to gauge the most efficient
simulation. Although a full physical resist model mimics the behavior of a resist material with rigor, the required
iterative calculations can result in an excessive execution-time penalty, even when simulating a simple pattern.
Simplified resist models provide a compromise between computational speed and accuracy.
The most efficient simulation approach (i.e. accurate prediction of wafer results with minimum execution time) will
have an important position in mask 3D simulation.
Comparison of OPC models with and without 3D-mask effect
Show abstract
OPC models with and without the thick-mask effect (3D-mask effect) are compared in their ability to predict actual
2D patterns. We give some examples in which thin-mask models fail to compensate for the 3D-mask effect. The models
without the 3D-mask effect show good model residual error, but fail to predict some critical CD tendencies. Rigorous
simulation reproduces the observed CD tendencies, which confirms that the discrepancy really comes from the 3D-mask effect.
Virtual fab flow for wafer topography aware OPC
Hans-Jürgen Stock,
Lars Bomholt,
Dietmar Krüger,
et al.
Show abstract
Small feature sizes down to the current 45 nm node and the precision requirements of patterning in 193 nm
lithography, as well as layers whose wafer stack does not allow any BARC, require not only correction of
optical proximity (OPC) effects originating from the mask topography and the imaging system, but also correction of
wafer topography proximity (WTPC) effects. In spite of wafer planarization process steps, wafer
topography (proximity) effects induced by the different optical properties of the patterned materials start playing
a significant role, and correction techniques need to be applied in order to minimize the impact.
In this paper, we study a methodology to create fast models intended for effective use in OPC and WTPC
procedures. For brevity we use the terms "OPCWTPC modeling" and "OPCWTPC models" throughout
the paper, although "mask synthesis modeling" and "mask synthesis models" would be more accurate.
A comprehensive data set is required to build a reliable OPC model. We present a "virtual fab" concept using
extensive test pattern sets with both 1D and 2D structures to capture optical proximity effects as well as wafer
topography effects.
A rigorous lithography simulator taking into account exposure tool source maps, topographic mask effects as
well as wafer topography is used to generate virtual measurement data, which are used for model calibration
as well as for model validation.
For model building, we use a two step approach: in a first step, an OPC model is built using test patterns on a
planar, homogenous substrate; in a second step a WTPC model is calibrated, using results from simulated test
patterns on shallow trench isolation (STI) layer. This approach allows building models from experimental
data, including hybrid approaches where only experimental data from planar substrates is available and a
corresponding OPC model for the planar case can be retrofitted with capabilities for correcting wafer
topography effects.
We analyze the relevant effects and requirements for model building and validation as well as the
performance of fast WTPC models.
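The two-step model build described above can be outlined compactly. In this sketch, `fit` is a hypothetical stand-in for the calibration routine; the point being illustrated is only the hand-off of the planar OPC model as the base for the WTPC calibration on STI data.

```python
def calibrate_opcwtpc(planar_gauges, sti_gauges, fit):
    """Two-step OPC/WTPC model build (illustrative, hypothetical API).

    'fit' is a stand-in regression routine: fit(gauges, base=None)
    returns a model calibrated to the gauges, optionally on top of
    an existing base model.
    """
    # Step 1: OPC model calibrated on a planar, homogeneous substrate.
    opc_model = fit(planar_gauges)
    # Step 2: WTPC model calibrated on STI topography, retrofitted
    # on top of the planar OPC model from step 1.
    wtpc_model = fit(sti_gauges, base=opc_model)
    return opc_model, wtpc_model
```

This structure is what enables the hybrid approach mentioned above: an OPC model built purely from planar experimental data can later be extended with a WTPC component calibrated from simulated topography data.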
Interlayer self-aligning process for 22nm logic
Show abstract
Line/space dimensions for 22nm logic are expected to be ~35nm at ~70nm pitch for metal 1. However, the contacted
gate pitch will be ~90nm because the contact-to-gate spacing is limited by alignment. A process for self-aligning contacts to
gates and diffusions could reduce the gate pitch and hence directly reduce logic and memory cell sizes.
Self-aligned processes have been in use for many years. DRAMs have had bit-line and storage-node contacts defined in
the critical direction by the row-lines. More recently, intra-layer self-alignment has been introduced with spacer double
patterning, in which pitch division is accomplished using sidewall spacers defined by a removable core.[1] This
approach has been extended with pitch division by 4 to the 7nm node.[2]
The introduction of logic design styles which use strictly one-directional lines for the critical levels gives the opportunity
for extending self-alignment to inter-layer applications in logic and SRAMs. Although Gridded Design Rules have been
demonstrated to give area-competitive layouts at existing 90, 65, and 45nm logic nodes while reducing CD
variability[3], process extensions are required at advanced nodes like 22nm to take full advantage of the regular layouts.
An inter-layer self-aligning process has been demonstrated with both simulations and short-loop wafers. An extension of
the critical illumination step for active and gate contacts will be described.
Process window and integration results for full-chip model-based assist-feature placement at the 32 nm node and below
Show abstract
Model-based assist-feature (MBAF) placement has been shown to have considerable lithographic benefits vs. rule-based
assist-feature (RBAF) placement for advanced technology-node requirements. For very strong off-axis illumination,
MBAF-placement methods offer improved process window, especially for so-called forbidden pitch regions, and greatly
simplified tuning of AF-placement parameters. Historically, however, MBAF-placement methods had difficulties with
full-chip runtime, friendliness to mask manufacturing (e.g., mask rule checks or MRCs), and methods to ensure that
placed AFs do not print on-wafer. Therefore, despite their known limitations, RBAF-placement methods were still the
industry de facto solution through the 45 nm technology node. In this paper, we highlight recent manufacturability
advances for MBAFs by a detailed comparison of MBAF and RBAF methods. The MBAF method employed uses
Inverse Mask Technology (IMT) to optimize AF placement, size, shape, and software runtime, to meet the production
requirements of the 28 nm technology node and below. MBAF vs. RBAF results are presented for process window
performance, and MBAF vs. OPC results are presented for full-chip runtimes. The final results show that MBAF
methods have process-window advantages for technology nodes below 45 nm, with runtimes that are comparable to
OPC.
The role of mask topography effects in the optimization of pixelated sources
Show abstract
Ongoing technology node shrinkage requires the lithographic k1 factor to be pushed closer to its theoretical limit. The
application of customized illumination with multi-pole or pixelated sources has become necessary for improving the
process window. For standardized exploitation of this technique it is crucial that the optimum source shape and the
corresponding intensity distributions can be found in a robust and automated way. In this paper we present a pixelated
source optimization procedure and its results. A number of application cases are considered with the following
optimization goals: i) enhancement of the depth of focus, ii) improvement of through-pitch behavior, and iii) error
sensitivity reduction. The optimization procedure is performed with fixed mask patterns, but at multiple locations. To
reduce optical proximity errors, mask biasing is introduced. The optimization results are obtained for the pixelated
source shapes, analyzed, and compared with the corresponding results for multi-pole shaped sources. Starting with the
45 nm node, mask topography effects as well as light polarization conditions have a significant impact on imaging
performance. Therefore, including these effects into the optimization procedure has become necessary for advanced
process nodes. To investigate these effects, the advanced topographical mask illumination concept (AToMIC) for
rigorous and fast electromagnetic field simulation under partially coherent illumination is applied. We demonstrate the
impact of mask topography effects on the results of the source optimization procedure by comparison to corresponding
Kirchhoff simulations. The effects of polarized illumination sources are taken into account.
Poster Session: Double Patterning
Experimental study of effect of pellicle on optical proximity fingerprint for 1.35 NA immersion ArF lithography
Show abstract
Pellicles are mounted on the masks used in ArF lithography for IC manufacturing to ensure defect-free printing. The
pellicle, a thin transparent polymer film, protects the reticle from dust. But, as the light transmittance through the pellicle
has an angular dependency, the pellicle also acts as an apodization filter.
In the current work, we present both experimental and simulation results at 1.35 NA immersion ArF lithography showing
the influence of two types of pellicles on proximity and intra-die Critical Dimension Uniformity (CDU). To do so, we
mounted and dismounted the different pellicle types on one and the same mask. The considered structures on wafer are
compatible with the 32 nm logic node for poly and metal. For the standard ArF pellicle (thickness 830 nm), we
experimentally observe a distinct effect of several nanometers due to the pellicle's presence on both the proximity and
the intra-die CDU. For the more advanced pellicle (thickness 280 nm), no signature of the pellicle on proximity or CDU could be
found.
By modeling the pellicle's optical properties as a Jones Pupil, we are able to simulate the pellicle effects with good
accuracy. These results indicate that for the 32 nm node, it is recommended to take the pellicle properties into account in
the OPC calculation when using a standard pellicle. In addition, simulations also indicate that a local dose correction can
compensate to a large extent for the intra-die pellicle effect. When using the more advanced thin pellicle (280 nm), no
such corrections are needed.
Achieving interferometric double patterning through wafer rotation
Show abstract
Owing to its simplicity and ability to produce line/space gratings with the highest contrast, interferometric
lithography is an ideal platform for developing novel double patterning materials and processes. However, lack of
sub-10 nm alignment in most interferometric systems impedes its application. In this paper, litho-etch-litho double
patterning on a two-beam interferometric system is achieved by converting Cartesian alignment into angular
alignment. By concentrically rotating the wafer in the second exposure, the interleaved region between the two
exposures allows the evaluation of double patterning processes and materials. Geometric analysis shows that
angular alignment has greatly relaxed requirements compared to Cartesian alignment. It is calculated that for 22
nm double patterning technologies, a rotation angle larger than 0.12 degree is sufficient to produce 1 μm long
frequency-doubled line/space patterns with less than 10% CD variation.
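The quoted numbers can be sanity-checked with a back-of-envelope calculation, assuming (as a simplification not stated in the abstract) that the walk-off between the two interleaved exposures over a line of length L scales as L·sin θ:

```python
import math

# Back-of-envelope check of the angular-alignment geometry. Assumed
# model: over a line of length L, a wafer rotation theta walks the
# second-exposure lines off by roughly L * sin(theta).
theta = math.radians(0.12)   # rotation angle quoted above
L = 1000.0                   # usable line length in nm (1 um)
walkoff = L * math.sin(theta)
# walkoff comes out near 2.1 nm, i.e. on the order of 10% of a 22 nm
# feature, consistent with the <10% CD variation quoted for 22 nm.
```

Under this assumed model, a smaller rotation angle would give proportionally longer usable line segments for the same CD tolerance.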
Novel ATHENA mark design to enhance alignment quality in double patterning with spacer process
Show abstract
DPS (Double Patterning with Spacer) has been one of the most promising solutions in flash memory device
manufacturing. Apart from the process complexity inherent in DPS, the process also requires more
engineering effort on alignment technique compared to single patterning: since the traditional alignment marks
defined by the core mask are altered, alignment mark recognition can be challenging for the subsequent
process layers.
This study characterizes the process influence on the traditional ASML VSPM (Versatile Scribelane Primary Marks)
alignment mark, and various types of sub-segmentation within the VSPM marks were evaluated to enable alignment
and identify the best-performing alignment marks. The design of the transverse and vertical sub-segmentations within the
VSPM marks aims to enhance the alignment signal strength and mark detectability. The alignment indicators WQ
(Wafer Quality), MCC (Multiple Correlation Coefficient) and ROPI (Residual Overlay Performance Indicator) were
used to judge alignment performance and stability. A good correlation was established between sub-segmentation
and wafer alignment signal strength.
Modeling of CD and placement error in multi-spacer patterning technology
Show abstract
The spacer patterning technique is an attractive way to fabricate patterns at resolutions far beyond the limits of
traditional optical lithography. In this paper, we have simulated film deposition and dry etch in spacer patterning at 32
nm and 22 nm designs using the commercially available TRAVIT software. Various resist thicknesses and profiles were
used, as well as process conditions for film deposition and dry etch. Dynamics of etch profiles, resulting profiles, and
critical dimensions (CDs) were extracted, as well as positional errors of features. It was found that the placement error
can be significant, especially when using thin resists. Multi-spacer patterning was also simulated. In the multi-spacer
technique, the spacer patterning processes are applied consecutively, resulting in a reduction of the lithographic
pitch. The fabrication of 11 nm half-pitch lines was simulated using lithographic techniques available at the 45 nm node.
LENS (lithography enhancement toward nano scale): a European project to support double exposure and double patterning technology development
Show abstract
In 2009 a new European initiative on Double Patterning and Double Exposure lithography process development was
started in the framework of the ENIAC Joint Undertaking. The project, named LENS (Lithography Enhancement
Towards Nano Scale), involves twelve companies from five European countries (Italy, the Netherlands, France,
Belgium and Spain) and includes: IC makers (Numonyx and STMicroelectronics), a group of equipment and materials
companies (ASML, Lam Research srl, JSR, FEI), a mask maker (Dai Nippon Photomask Europe), an EDA company
(Mentor Graphics) and four research and development institutes (CEA-Leti, IMEC, Centro Nacional de
Microelectrónica, CIDETEC).
The LENS project aims to develop and integrate the overall infrastructure required to reach the patterning resolutions
required by the 32nm and 22nm technology nodes through double patterning and pitch doubling technologies on existing
conventional immersion exposure tools, in order to allow the timely development of the 32nm and 22nm
technology nodes for memories and logic devices, providing a safe alternative to EUV, higher-refractive-index-fluid
immersion lithography and maskless lithography, which still appear to be far from maturity.
The project will cover the whole lithography supply chain, including design, masks, materials, exposure tools, process
integration and metrology; its final objective is the demonstration of 22nm node patterning on available 1.35 NA
immersion tools with a high-complexity mask set.
Self-aligned double patterning process for 32/32nm contact/space and beyond using 193 immersion lithography
Show abstract
State-of-the-art production single-print lithography for contacts is limited to ~43-44nm half-pitch, given the
parameters in the classic photolithography resolution formula for contacts on a 193 nm immersion tool (k1 ≥ 0.3,
NA = 1.35, and λ = 193nm). Single-print lithography limitations can be overcome by (1) process/integration-based
techniques such as double printing (DP) and spacer-based self-aligned double patterning (SADP), or (2) non-standard
printing techniques such as electron beam (eBeam), extreme ultraviolet lithography (EUVL) and nano-imprint
lithography (NIL). EUV tools are under development, while nano-imprint remains a development tool only.
Spacer-based SADP for equal line/space is well documented as a successful patterning technique for 3x nm and
beyond. In this paper, we present an adaptation of the self-aligned double patterning process to a regular 2-D
32/32nm contact/space array. Using the SADP process, we successfully achieved an equal contact/space of 32/32nm
using 193 nm immersion lithography that by itself is only capable of 43-44nm resolvable half-pitch contact
printing. The key and unique innovation of this work is the use of a 2-D (x- and y-axis) pillar structure to
achieve equal contact/space. The final result is a dense contact array of 32nm half-pitch in a 2-D structure
(x and y axis), achieved on a simplified stack of Substrate / APF / Nitride.
Further transfer of this new contact pattern from nitride to the substrate (e.g., Oxide, APF, Poly, Si...) is
possible. The technique is potentially extendible to 22/22nm contact/space and beyond.
Poster Session: FreeForm and SMO
Novel continuously shaped diffractive optical elements enable high efficiency beam shaping
Show abstract
LIMO's unique production technology is capable of manufacturing free-form surfaces on monolithic arrays larger than 250
mm with high precision and reproducibility. Different kinds of intensity distributions, with best-in-class uniformity or
customized profiles, have been achieved using LIMO's refractive optical elements. Recently LIMO pushed the limits
of this lens production technology and was able to manufacture its first diffractive optical elements (DOEs) based on
continuous relief profiles.
Besides illumination devices in lithography, DOEs find wide use in optical devices for other technological
applications, such as optical communications, laser technologies and data processing. Classic lithographic fabrication
technologies lead to quantized (step-like) profiles of diffractive micro-reliefs, which decrease a DOE's diffraction
efficiency. The newest development of LIMO's microlens fabrication technology allows us to make the step from freely
programmable microlens profiles to diffractive optical elements with high efficiency. Our first results with this
approach are demonstrated in this paper. Diffractive beam splitters with continuous profiles are fabricated and
investigated. The results of profile measurements and the intensity distributions of the diffractive beam splitters
are given. The comparison between theoretical simulations and experimental results shows very good agreement.
Advances in DOE modeling and optical performance for SMO applications
Show abstract
The introduction of source mask optimization (SMO) to the design process addresses an urgent need for the 32nm node
and beyond as alternative lithography approaches continue to push out. To take full advantage of SMO routines, an
understanding of the characteristic properties of diffractive optical elements (DOEs) is required. Greater flexibility in the
DOE output is needed to optimize lithographic process windows. In addition, new and tighter constraints on the DOEs
used for off-axis illumination (OAI) are being introduced to precisely predict, control and reduce the effects of pole
imbalance and stray light on the CD budget. We present recent advancements in the modeling and optical performance
of these DOEs.
Abbe-PCA-SMO: microlithography source and mask optimization based on Abbe-PCA
Show abstract
Resolution enhancement technologies (RETs) have been widely proposed to improve the quality of the microlithography
process. Recent methods such as source mask optimization (SMO) and inverse lithography technology (ILT) are gaining
popularity. High-speed simulators are therefore in strong demand to cope with the growing computational complexity of
RETs. In this paper, we demonstrate that our previously proposed Abbe-PCA is highly efficient for source configuration
and pixel-based ILT mask tuning.
Optimization on illumination source with design of experiments
Show abstract
In advanced photolithography processes for manufacturing integrated circuits, the critical pattern sizes that need to be
printed on the wafer are much smaller than the wavelength. Thus, source optimization (SO) techniques play a critical role in
enabling a successful technology node. However, finding an appropriate illumination configuration involves
computationally intensive simulations. EDA vendors have been developing pixelated source optimization tools that co-optimize
both source and mask for a set of patterns. As an alternative approach, we have introduced design of experiments (DOE)
methodology for parameterized source optimization to minimize computational effort while achieving comparable CDU
control for given design patterns.
In this paper, we present a Response Surface Methodology (RSM) that simplifies the response function and achieves the
optimization goal on multiple responses. Results have shown that the optimal input settings identified by this approach
are comparable with the pixelated source optimization results.
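As a rough illustration of the response-surface idea described above, the sketch below fits a quadratic surface to a handful of simulated source settings and solves for its stationary point. The toy DOE points, variable names, and synthetic CDU response are all invented for this sketch; they do not come from the paper.

```python
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    A = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

def stationary_point(coeffs):
    """Set the gradient of the fitted quadratic to zero: a 2x2 linear solve."""
    b0, b1, b2, b11, b22, b12 = coeffs
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])   # Hessian of the surface
    return np.linalg.solve(H, -np.array([b1, b2]))

# Toy DOE: factorial corners, a replicated center point, and a few axial points.
x1 = np.array([0.5, 0.9, 0.5, 0.9, 0.7, 0.7, 0.55, 0.85, 0.7])
x2 = np.array([0.2, 0.2, 0.6, 0.6, 0.4, 0.4, 0.4, 0.4, 0.55])
# Synthetic CDU response with its minimum at (0.75, 0.35).
y = 1.0 + 4 * (x1 - 0.75) ** 2 + 6 * (x2 - 0.35) ** 2

opt = stationary_point(fit_quadratic_surface(x1, x2, y))
```

Because the synthetic response is exactly quadratic, the fit recovers the optimum (0.75, 0.35); with noisy simulated responses, the same machinery returns the fitted-surface optimum.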
Source-mask optimization (SMO): from theory to practice
Show abstract
Source mask optimization techniques are gaining increasing attention as RET computational lithography techniques at
sub-32nm design nodes. However, practical use of this technique requires careful consideration in the use of the
obtained pixelated or composite source and mask solutions, along with accurate modeling of mask, resist, and optics,
including scanner scalar and vector aberrations, as part of the optimization process. We present here a theory-to-practice
case of applying ILT-based SMO to 22nm design patterns.
Poster Session: Laser
Partial spatial coherence in an excimer-laser lithographic imaging system
Show abstract
We have recently explored the Elementary Function method, previously presented by Wald et al (Proc. SPIE
59621G, 2005), and we have demonstrated under what circumstances this method can be used to reduce the
propagation calculations of partially coherent light to two dimensions. In this paper, we examine the methods
used to measure the spatial coherence of a light source in the literature. We present a method, based on work
previously shown by Mejia et al (Opt. Comm. 273, 428-434, 2007), which uses an array of pinholes with one
degree of redundancy. We discuss the design of the pinhole array and present the results of some simulations.
Flexible and reliable high power injection locked laser for double exposure and double patterning ArF immersion lithography
Show abstract
ArF immersion technology is spotlighted as the enabling technology for the 45nm node and beyond. Recently, double
exposure technology has also been considered a possible candidate for the 32nm node and beyond. We have already released
an injection-locked ArF excimer laser, the GT61A (60W/6kHz/10mJ/0.30pm), with an ultra line-narrowed and
stabilized spectrum for immersion lithography tools with N.A.>1.3, and we have been monitoring the field
reliability data of our lasers used in the ArF immersion segment since Q4 2006.
In this report we show field reliability data for our GigaTwin series of twin-chamber ArF laser products. Availability
exceeding 99.5% proves the reliability of the GigaTwin series.
We have developed a tunable, high-power injection-locked ArF excimer laser for double patterning, the GT62A
(Max 90W/6000Hz/tunable power at 10-15mJ/0.30pm (E95)), based on the GigaTwin platform. A number of
innovative and unique technologies are implemented in the GT62A:
- Support for the latest illumination optical systems
- E95 stability and adjustability
- Reduced total cost (cost of consumables, cost of downtime, and cost of energy and environment)
Laser bandwidth effect on overlay budget and imaging for the 45 nm and 32nm technology nodes with immersion lithography
Show abstract
The laser bandwidth and wavelength stability are among the important factors contributing to the CD uniformity
budget for 45 nm and 32 nm technology node NV memory. Longitudinal chromatic aberrations are also minimized by
lens designers to reduce contrast loss among different patterns. In this work, the residual effects of laser bandwidth
and wavelength stability are investigated and quantified for a DOF-critical layer. Besides the typical CD implications, we
evaluate the "image placement error" (IPE) affecting specific asymmetric patterns in the device layout. We show that
the IPE of asymmetric device patterns can be sensitive to laser bandwidth, potentially resulting in nanometer-level errors
in overlay. These effects are compared with the relative impact of other parameters that define the contrast of the
lithography image for the 45nm node. We extend the discussion of the contributions to IPE and their relative importance
to the 32 nm double-patterning overlay budget.
Laser spectrum requirements for tight CD control at advanced logic technology nodes
R. C. Peng,
H. J. Lee,
John Lin,
et al.
Show abstract
Tight circuit CD control in a photolithographic process has become increasingly critical, particularly for advanced
process nodes below 32nm, not only because of its impact on device performance but also because CD control
requirements are approaching the limits of measurement capability. Process stability relies on tight control of every
factor that may impact photolithographic performance. The variation of circuit CD depends on many factors, for
example, CD uniformity on reticles, focus and dose errors, lens aberrations, partial coherence variation, photoresist
performance, and changes in the laser spectrum. Laser bandwidth and illumination partial coherence are two significant
contributors to the proximity CD portion of the scanner CD budget. It has been reported that bandwidth can contribute
as much as 9% of the available CD budget, which is equivalent to ~0.5nm at the 32nm node. In this paper, we focus
on the contributions of key laser parameters, e.g. spectral shape and bandwidth, to circuit CD variation for
an advanced-node logic device. These key laser parameters are input into the photolithography simulator, Prolith, to
calculate their impact on circuit CD variation. Stable through-pitch proximity behavior is one of the critical topics for
foundry products and is also described in the paper.
Lithography light source fault detection
Matthew Graham,
Erica Pantel,
Patrick Nelissen,
et al.
Show abstract
High productivity is a key requirement for today's advanced lithography exposure tools. Achieving targets for
wafers per day output requires consistently high throughput and availability. One of the keys to high availability
is minimizing unscheduled downtime of the litho cell, including the scanner, track and light source. From the
earliest excimer laser light sources, Cymer has collected extensive performance data during operation of the
source, and this data has been used to identify the root causes of downtime and failures on the system. Recently,
new techniques have been developed for more extensive analysis of this data to characterize the onset of typical
end-of-life behavior of components within the light source and allow greater predictive capability for identifying
both the type of upcoming service that will be required and when it will be required.
The new techniques described in this paper are based on two core elements of Cymer's light source data
management architecture. The first is enhanced performance logging features added to newer-generation light
source software that captures detailed performance data; and the second is Cymer OnLine (COL) which
facilitates collection and transmission of light source data. Extensive analysis of the performance data collected
using this architecture has demonstrated that many light source issues exhibit recognizable patterns in their
symptoms. These patterns are amenable to automated identification using a Cymer-developed model-based fault
detection system, thereby alleviating the need for detailed manual review of all light source performance
information. Automated recognition of these patterns also augments our ability to predict the performance
trending of light sources.
Such automated analysis provides several efficiency improvements for light source troubleshooting by providing
more content-rich standardized summaries of light source performance, along with reduced time-to-identification
for previously classified faults. Automation provides the ability to generate metrics based on a single light source,
or multiple light sources. However, perhaps the most significant advantage is that these recognized patterns are
often correlated to known root cause, where known corrective actions can be implemented, and this can therefore
minimize the time that the light source needs to be offline for maintenance. In this paper, we will show examples
of how this new tool and methodology, through an increased level of automation in analysis, is able to reduce
fault identification time, reduce time for root cause determination for previously experienced issues, and enhance
our light source performance predictability.
Poster Session: Lithography Optimization
Pattern deformation caused by deformed pellicle with ArF exposure
Show abstract
ArF irradiation directly causes pellicle degradation: at the irradiated part of the pellicle it produces a sloped
pellicle surface that acts like a prism before any change of phase or transmittance occurs, because the
energies of the C, F, and O single bonds composing the ArF pellicle film are considerably smaller than the
photon energy of 193 nm ArF light. Thus, the outgoing light carries information from a smaller area than the mask size.
To offer some guidance for detecting defects caused by pellicle thinning, several types of pattern
deformation caused by pellicle degradation are studied.
Study for lithography techniques of hybrid mask shape of contact hole with 1.35NA polarized illumination for 28nm-node and below logic LSI
Show abstract
In this presentation, the advantage of combining polarized illumination with an optimally shaped mask for
contact-hole lithography is discussed. Both simulation and experimental work were carried out to characterize the
performance of this technique. We confirmed that some polarized illuminations improve image contrast, MEEF, and
DOF for nested contact holes compared with the non-polarized condition. In addition, certain mask shapes show
further improvement. In total, a 63% DOF improvement over the traditional square shape with the non-polarized
condition was confirmed. In the final single-exposure era for contact holes, this result, using hybrid mask shapes and
polarized illumination, is very attractive.
Poster Session: Mask Layout and OPC
Applications of MoSi-based binary intensity mask for sub-40nm DRAM
Show abstract
In this paper, we present applications of a MoSi-based binary intensity mask for sub-40nm DRAM with the hyper-NA
immersion scanner that has become the mainstream of DRAM lithography. Technical issues are reported for
polarized illumination and mask materials in hyper-NA imaging. One att.PSM (phase shift mask) and three types of
binary intensity mask are used for this experiment: an ArF att.PSM (MoSi: 760Å, transmittance 6%), a
conventional Cr (1030Å) BIM (binary intensity mask), a MoSi-based BIM (MoSi: 590Å, transmittance 0.1%), and a
multilayer (Cr: 740Å / MoSi: 930Å) BIM. Simulation and experiment with a 1.35NA immersion scanner are performed to study
the influence of mask structure, process margin, and the effect of polarization. Two types of DRAM cell patterns are
studied across mask structures: a line and space pattern and a contact hole pattern. Various line and space patterns
from 38nm to 50nm half pitch are also studied in this experiment. Lithography simulation is done by an in-house tool based
on a diffused aerial image model. EM-SUITE is also used to study the influence of mask structure and
polarization effects through rigorous EMF simulation. Transmission and polarization effects of the zero and first
diffraction orders are simulated for both att.PSM and BIM. First and zero diffraction order polarization are shown to be
influenced by the structure of the masking film. As the pattern size on the mask decreases to the level of the exposure
wavelength, incident light interacts with the mask pattern, so transmittance changes with mask structure. Optimum mask bias is
one of the important factors for lithographic performance. In the case of att.PSM, negative bias shows higher image
contrast than positive bias, but in the case of the binary intensity mask, positive bias shows better performance than
negative bias. This is caused by the amplitude balance between the first and zero diffraction order light.
Process windows and mask error enhancement factors are measured for several types of mask structure. In the
case of the one-dimensional line and space pattern, the MoSi-based BIM and conventional Cr BIM show the best
performance through various pitches. But in the case of the hole DRAM cell pattern, it is difficult to find an advantage
for BIM except for the exposure energy difference. Finally, it was observed that the MoSi-based binary intensity mask
has an advantage for one-dimensional line and space patterns for sub-40nm DRAM.
OMOG mask topography effect on lithography modeling of 32nm contact hole patterning
Show abstract
The topography effect of Opaque MoSi on Glass (OMOG) mask on 32nm contact hole patterning is analyzed by
examining the difference of image intensity profile between thin mask approximation and rigorous electro-magnetic
field (EMF) simulation. The study shows that OMOG topography results in more than a 20% decrease of image intensity.
The impact of OMOG mask topography on lithography modeling of a 32nm contact hole process is explored by fitting
lithography simulation with experimental results for both thin mask model and EMF model. This study shows that thin
mask modeling is a good approximation of EMF modeling for a contact pitch larger than 120nm, but yields about 10nm
prediction error for a 110nm contact pitch. Thin mask modeling is shown to be inaccurate in predicting the critical
dimension of contact arrays with sub-resolution assist features (SRAF). In addition, thin mask modeling is too
pessimistic in predicting SRAF printability. In contrast, the EMF model shows good prediction of contact arrays with and
without sub-resolution features. A modified thin mask modeling technique utilizing an effective SRAF size is proposed
and verified with experimental results.
Fast-converging iterative gradient descent methods for high pattern fidelity inverse mask design
Show abstract
Convergence speed and local minima have been major issues for inverse lithography. In this paper, we
propose an inverse algorithm that employs an iterative gradient-descent method to improve convergence and reduce the
edge placement error (EPE). The algorithm employs constrained gradient-based optimization to attain fast
convergence, while a cross-weighting technique is introduced to overcome local-minimum trapping.
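The abstract above describes the general shape of such algorithms. The toy sketch below runs pixel-based inverse mask optimization by plain gradient descent, with a Gaussian blur standing in for the optical model and a sigmoid for the resist threshold; the paper's actual cost function, constraints, and cross-weighting scheme are not reproduced.

```python
import numpy as np

def gaussian_kernel(n=15, sigma=2.0):
    x = np.arange(n) - n // 2
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()

def blur(img, k):
    """Circular FFT convolution with a kernel centered at the array origin."""
    kp = np.zeros_like(img)
    n = k.shape[0]
    kp[:n, :n] = k
    kp = np.roll(kp, (-(n // 2), -(n // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kp)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

target = np.zeros((32, 32))
target[12:20, 8:24] = 1.0            # desired printed pattern

k = gaussian_kernel()
theta = np.zeros((32, 32))           # latent variable; mask = sigmoid(theta)
a, t, lr = 25.0, 0.3, 5.0            # resist steepness, threshold, step size

costs = []
for _ in range(200):
    m = sigmoid(theta)
    I = blur(m, k)                   # toy "aerial image"
    z = sigmoid(a * (I - t))         # toy resist model
    costs.append(float(np.sum((z - target) ** 2)))
    # Gradient via the chain rule; the symmetric blur is its own adjoint.
    dz = 2.0 * (z - target) * a * z * (1 - z)
    theta = np.clip(theta - lr * blur(dz, k) * m * (1 - m), -30.0, 30.0)
```

The clip on `theta` only keeps the sigmoid out of overflow territory; a practical implementation would instead use the paper's constrained update and an EPE-based cost.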
Radial segmentation approach for contact hole patterning in 193 nm immersion lithography
Show abstract
In this paper, a novel optical proximity correction (OPC) method for contact hole patterning is demonstrated.
Conventional OPC for contact hole patterning involves dimensional biasing, addition of serifs, and sub-resolution assist
features (SRAF). A square shape is targeted in the process of applying conventional OPC. As contact-hole dimensions
shrink, features on the mask appear circular due to strong diffraction effects, and the process window enhancement of
the conventional OPC approach is limited. Moreover, increased occurrences of side-lobe printing and missing contact holes
affect process robustness. A new approach that changes the target pattern from square to circular is proposed in
this study. The approach involves a change in the shape of the mask openings and a radial segmentation method for proximity
correction. The contact hole patterns studied include a regular contact hole array and staggered contact holes. Process
windows, critical dimension (CD), and aerial image contrast are compared to investigate the effectiveness of the proposed
contact hole patterning approach relative to conventional practice.
Binary mask optimization for forward lithography based on boundary layer model in coherent systems
Show abstract
Recently, a set of generalized gradient-based optical proximity correction (OPC) optimization methods have been
developed to solve for the forward and inverse lithography problem under the thin-mask assumption, where the
mask is considered a thin 2-D object. However, as the critical dimension printed on the wafer shrinks into the
subwavelength regime, thick-mask effects become prevalent and thus these effects must be taken into account in
OPC optimization methods. OPC methods derived under the thin-mask assumption have inherent limitations
and perform poorly in the subwavelength scenario. This paper focuses on developing model-based forward binary
mask optimization methods which account for the thick-mask effects of coherent imaging systems. The boundary
layer (BL) model is exploited to simplify and characterize the thick-mask effects, leading to a computationally
efficient OPC method. The BL model is simpler than other thick-mask models, treating the near field of the mask
as the superposition of the interior transmission areas and the boundary layers. The advantages and limitations
of the proposed algorithm are discussed and several illustrative simulations are presented.
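A minimal 1-D picture of the boundary-layer construction described above: the near field is taken as the thin-mask transmission with a fixed complex strip overwritten at every feature edge. The strip width and complex value here are arbitrary illustrative parameters, not fitted to any real mask stack.

```python
import numpy as np

def bl_near_field(thin_mask, bl_width=2, bl_value=0.2 - 0.35j):
    """Boundary-layer sketch: overwrite a fixed complex strip at every edge."""
    field = thin_mask.astype(complex)
    edges = np.nonzero(np.diff(thin_mask))[0]    # i where mask[i] != mask[i+1]
    for e in edges:
        field[max(0, e - bl_width + 1):e + bl_width + 1] = bl_value
    return field

mask = np.zeros(40, dtype=int)
mask[10:20] = 1                                  # one opening
mask[26:32] = 1                                  # a second opening
nf = bl_near_field(mask)
```

Away from edges the field equals the thin-mask transmission; near each edge it carries the fixed complex boundary value. That locality is what keeps the subsequent imaging computation as cheap as the thin-mask case.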
Improvement in process window aware OPC
Show abstract
In this paper, we present some important improvements to our process window aware OPC (PWA-OPC). First, a CD-based
process window check is developed to find all pinching and bridging errors. Second, a rank-ordering method
is constructed to perform process window correction. Finally, PWA-OPC can be applied to selected areas with different
specifications for different feature types. In addition, the improved PWA-OPC recipe is constructed as a sequence of
independent modules, so it is easy for users to modify its algorithm and build original IP.
A non-delta-chrome OPC methodology for process models with three-dimensional mask effects
Show abstract
Delta-chrome optical proximity correction (OPC) has been widely adopted in lithographic patterning for semiconductor
manufacturing. During the delta-chrome OPC iteration, a predetermined amount of chrome is added to or subtracted from
the mask pattern. With this chrome change, the change in exposure intensity error (IE) or in edge placement
error (EPE) between the printed contour and the target pattern is then calculated based on the standard Kirchhoff
approximation. Linear approximation is used to predict the amount of chrome change needed to remove the
correction error. This approximation can be very fast and effective, but must be performed iteratively to capture
interactions between chrome changes. As integrated circuit (IC) design shrinks into the deep sub-wavelength regime,
previously ignored nonlinear process effects, such as three-dimensional (3D) mask effects and resist development effects,
become significant for accurate prediction and correction of proximity effects. These nonlinearities challenge the
delta-chrome OPC methodology: the model response to mask pattern perturbation by linear approximation can be readily
computed but is inaccurate, while exact computation of the mask perturbation response becomes complex and expensive. A
non-delta-chrome OPC methodology with IE-based feedback compensation is proposed. It determines the amount of
chrome change based on IE without intensive computation of the mask perturbation response. Its effectiveness in
improving patterning fidelity and runtime is examined on a 50-nm practical circuit layout. In both the presence and the
absence of nonlinear 3D mask effects, our results show the proposed non-delta-chrome OPC outperforms the delta-chrome
one in terms of patterning fidelity and runtime. The results also demonstrate that process models with 3D mask
effects limit the use of the delta-chrome OPC methodology.
A new etch-aware after development inspection (ADI) technique for OPC modeling
Show abstract
This paper presents a new etch-aware after development inspection (ADI) model with an inverse etch bias filter. We
model the etch bias as a function of pattern geometry parameters, and we introduce it into the ADI model by means of an
inverse bias matrix that works in conjunction with an ADI specification-related matrix. The inverse bias filter tunes the
ADI model to be highly correlated with the etch effects and provides simplified and designable inputs to the after etch
inspection (AEI) model, and hence improves its performance over the staged modeling flow. In addition, the inverse bias
filter creates a model-based rule table for design retargeting. Some of the etch effects are corrected by the inverse bias
filter as the lithography model is calibrated, thus speeding up and simplifying the etch AEI model while maintaining
a lithography ADI model with good accuracy.
Wafer LMC accuracy improvement by adding mask model
Show abstract
Mask effects become more significant for wafer printing in high-end technology nodes. For advanced nodes, using only
the current wafer model cannot predict real wafer behavior accurately because it does not account for real mask
performance (CD error, corner rounding, etc.).
Generally, we use a wafer model to check whether our OPC results satisfy our requirements (CD target). Through
simulation on post-OPC patterns using the wafer model, we can check whether these post-OPC patterns meet our
target. Hence, an accurate model can help us predict real wafer printing results and avoid OPC verification errors.
To improve simulation verification accuracy at the wafer level and decrease false alarms, we must consider mask effects
such as corner rounding and line-end shortening in high-end masks. UMC (United Microelectronics Corporation) has
cooperated with Brion and DNP to evaluate whether wafer LMC (Lithography Manufacturability Check) accuracy (Brion
hot-spot prediction by simulation contour) can be improved by adding a mask model into the LMC verification
procedure. We combine a mask model (DNP's 45nm-node poly mask model) and a wafer model (UMC's 45nm-node
poly wafer model) to build a new model called M-FEM (Mask Focus Energy Matrix model, fitted by Brion). We
compare the hotspot predictions of the M-FEM model and the baseline wafer model by LMC verification. Some
hotspots differ between the two models, and we evaluate whether the M-FEM hotspots are closer to wafer printing results.
Study of model based etch bias retarget for OPC
Show abstract
Model-based optical proximity correction is usually used to compensate for pattern distortion during the microlithography
process. Currently, almost all the lithography effects, such as proximity effects from the limited NA,
3D mask effects due to the shrinking critical dimension, photoresist effects, and other well-known
physical processes, can be well accounted for in modeling with the OPC algorithm. However, microlithography
is not the final step of the pattern transfer procedure from mask to wafer; the etch process is also a very
important stage. It is well known that, to date, the etch process still cannot be fully explained by physical theory. The
final critical dimension is decided by both the lithography and the etch process. If the etch bias, which is
the difference between the post-development CD and the post-etch CD, were a constant value, it would be simple to control
the final CD, but unfortunately this is often not the case. For advanced technology nodes with shrinking critical
dimensions, the etch loading effect is the dominant factor impacting final CD control. Some have tried to
use an etch-based model to do optical proximity correction, but one drawback is that OPC runtime efficiency
suffers. In this paper, we demonstrate our study of model-based etch bias retargeting for OPC.
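A minimal sketch of the retargeting idea, using the abstract's definition of etch bias (post-development CD minus post-etch CD): the ADI target is the desired final CD plus a modeled bias. The density-dependent bias model below is entirely made up for illustration.

```python
def etch_bias(local_density):
    """Toy empirical model (nm): bias shrinks as local pattern density rises."""
    return 8.0 - 5.0 * local_density

def retarget(final_cd, local_density):
    """ADI (post-develop) target = final (post-etch) target + etch bias,
    with bias = ADI CD - AEI CD as defined in the abstract."""
    return final_cd + etch_bias(local_density)

iso = retarget(45.0, 0.1)     # isolated feature: larger bias, larger ADI target
dense = retarget(45.0, 0.8)   # dense feature: smaller bias
```

Retargeting in this way keeps the etch model out of the OPC inner loop (the bias is folded into the target once), which is the runtime advantage over correcting directly with an etch-based model.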
Intra field CD uniformity correction by Scanner Dose MapperTM using Galileo® mask transmission mapping as the CDU data source
Show abstract
Intra-field CD variation can be corrected through wafer CD feedback to the scanner in what is called the Dose Mapper
(DOMA) process. This corrects errors contributed by both reticle and scanner processes; scanner process errors
include uncorrected illumination non-uniformities and projection lens aberrations. However, this is a tedious process
involving actual wafer printing and representative CD measurement from multiple sites. A novel method demonstrates
that measuring the full-field reticle transmission with the Galileo® can be utilized to generate an intensity correction file for
the scanner DOMA feature. This correction file includes the reticle transmission map and the scanner CD signature
that has been derived in a preliminary step and stored in a database. The scanner database is periodically updated after
preventive maintenance with CD data from a monitoring reticle for a specific process. This method is easy to implement, as no
extra monitoring feature is needed on the production reticle for data collection, and a newly received reticle can be
used in a production run immediately, without wafer CD data collection. Correlation of the reticle
transmission and wafer CD measurements can be up to 90%, depending on the quality of the CD data measurements and
the repeatability of the scanner signature. CD mapping on the Galileo® tool takes about 20 minutes for 1500 data points
(there is no limit to the number of measurement points on the Galileo®), which is more than enough for the DOMA
process. Turnaround time (TAT) for the whole DOMA process can thus be shortened from 3 days to about an hour,
with significant savings in time and resources for the fab.
Poster Session: Materials
Metamaterials for enhancement of DUV lithography
Show abstract
The unique properties of metamaterials, namely their negative refractive index, permittivity, and permeability, have
gained much recent attention. Research into these materials has led to the realization of a host of applications that may
be useful to enhance optical nanolithography, such as a high pass pupil filter based on an induced transmission filter
design, or an optical superlens. A large selection of materials has been examined both experimentally and theoretically
across wavelengths to verify their support of surface plasmons, or lack thereof, in the DUV spectrum via the attenuated
total reflection (ATR) method using the Kretschmann configuration. At DUV wavelengths, materials that were
previously useful at mid-UV and longer wavelengths no longer act as metamaterials. Composites bound between
metallic aluminum and aluminum oxide (Al2O3) exhibit metamaterial behavior, as do other materials such as tin and
indium. This provides for real opportunities to explore the potential of the use of such materials for image enhancement
with easily obtainable materials at desirable lithographic wavelengths.
Toward a consistent and accurate approach to modeling projection optics
Show abstract
This paper presents a consistent and modularized approach to modeling projection optics. The vector nature of light
and polarization effects are considered from the very beginning at the source, through the mask and projection lens,
down into the film stack. High-NA and immersion effects are also included. Of particular interest is the formulation of
a modularized framework for computing optical images that allows various mask models (a thin-mask model,
an empirical approximate mask model, or a rigorous mask 3D solver) to be used. We demonstrate that under
the Kirchhoff thin-mask assumption our formulation is the same as the Smythe formula. A compact film-stack model is
formulated. The formulation is first presented in Abbe's source integration approach and then reformulated in
Hopkins' TCC approach, which allows an SVD decomposition that is computationally more efficient for a
fixed optical setting.
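The Abbe-to-Hopkins reformulation mentioned above can be sketched in 1-D: build the TCC from discrete source points and a low-pass pupil, decompose it (for a Hermitian TCC, eigendecomposition and SVD coincide), and check that the sum-of-coherent-systems image matches the direct Abbe sum. Grid size, source points, and pupil cutoff are toy values.

```python
import numpy as np

N = 64
f = np.fft.fftfreq(N)                        # spatial-frequency grid
CUT = 0.25                                   # pupil cutoff (toy "NA")
src = [-0.05, 0.0, 0.05]                     # discrete source points

def shifted_pupil(s):
    """Ideal low-pass pupil shifted by source point s."""
    return (np.abs(f + s) <= CUT).astype(complex)

# Hopkins TCC: TCC(f1, f2) = sum_s P(f1 + s) P*(f2 + s)
tcc = sum(np.outer(shifted_pupil(s), shifted_pupil(s).conj()) for s in src)

# Hermitian TCC: eigendecomposition == SVD. Keep every kernel for exactness.
w, V = np.linalg.eigh(tcc)

mask = np.zeros(N)
mask[::8] = 1.0                              # toy mask; M is its spectrum
M = np.fft.fft(mask)

# Sum-of-coherent-systems image: I(x) = sum_k w_k |ifft(V_k * M)|^2
img_socs = sum(w[k] * np.abs(np.fft.ifft(V[:, k] * M)) ** 2 for k in range(N))

# Reference Abbe image: incoherent sum over source points.
img_abbe = sum(np.abs(np.fft.ifft(shifted_pupil(s) * M)) ** 2 for s in src)
```

With all kernels kept the two images agree to machine precision; the practical gain for a fixed optical setting comes from truncating to the few largest-eigenvalue kernels.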
High fluence testing of optical materials for 193-nm lithography extensions applications
Show abstract
As next generation immersion lithography, combined with double patterning, continues to shrink feature sizes, the
industry is contemplating a move to non-chemically amplified resists to reduce line edge roughness. Since these resists
inherently have lower sensitivities, the transition would require an increase in laser exposure doses, and thus, an increase
in incident laser fluence to keep the high system throughput.
Over the past several months, we have undertaken a study at MIT Lincoln Laboratory to characterize performance
of bulk materials (SiO2 and CaF2) and thin film coatings from major lithographic material suppliers under continuous
193-nm laser irradiation at elevated fluences. The exposures are performed in a nitrogen-purged chamber where samples
are irradiated at 4000 Hz at fluences between 30 and 50 mJ/cm2/pulse. For both coatings and bulk materials, in-situ laser
transmission combined with in-situ laser-induced fluorescence is used to characterize material performance. Potential
color center formation is monitored by ex-situ spectrophotometry. For bulk materials, we additionally measure spatial
birefringence maps before and after irradiation. For thin film coatings, spectroscopic ellipsometry is used to obtain
spatial maps of the irradiated surfaces to elucidate the structural changes in the coating.
Results obtained in this study can be used to identify potential areas of concern in the lens material performance if
the incident fluence is raised for the introduction of non-chemically amplified resists. The results can also help to
improve illuminator performance where such high fluences already occur.
Poster Session: Modeling
Stepwise fitting methodology for optical proximity correction modeling
Show abstract
Optical proximity correction (OPC) models consist of a large number of components and parameters that must be optimized during the model fitting process for the best possible match with empirical data. There are several optimization methods for OPC models. Most of the published methods, if not all, are based on a global optimization approach, in which all the model parameters are regressed within their search regions to find a global minimum of the OPC model error. However, there are potential risks of overweighting one OPC model component versus another and, as a result, losing the physicality of the final model, which reduces model quality in terms of fit and prediction. In this work a stepwise fitting methodology based on staged optimization of the OPC model components is presented. Components are added to the OPC model in order from more physical to less physical, starting with the mask and optics. In each optimization stage a component is optimized using global regression methods; the optimized parameters are then locked and not regressed during further model optimization. The effectiveness of this approach in terms of accurate correction, and its comparison with the global-search regression method, is demonstrated through computational experiments.
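The staged, lock-as-you-go regression described above can be sketched in a few lines. The Gaussian "optical" term and linear "resist" term below are illustrative stand-ins, not the paper's actual model components, and a plain grid search stands in for the global regression.

```python
import numpy as np

# Toy "measured" data: an optical blur profile plus a small resist-side tilt.
x = np.linspace(-1.0, 1.0, 41)
true_sigma, true_slope = 0.30, 0.05
measured = np.exp(-x**2 / (2 * true_sigma**2)) + true_slope * x

def model(sigma, slope):
    return np.exp(-x**2 / (2 * sigma**2)) + slope * x

def rms(pred):
    return float(np.sqrt(np.mean((pred - measured) ** 2)))

# Stage 1: regress the more physical (optical) parameter with the resist
# term held at its default value.
sigma_grid = np.linspace(0.1, 1.0, 901)
sigma_fit = sigma_grid[np.argmin([rms(model(s, 0.0)) for s in sigma_grid])]

# Stage 2: lock sigma and regress the less physical (resist) parameter.
slope_grid = np.linspace(-0.2, 0.2, 401)
slope_fit = slope_grid[np.argmin([rms(model(sigma_fit, b)) for b in slope_grid])]

print(round(float(sigma_fit), 3), round(float(slope_fit), 3))
```

Because the tilt term averages out on the symmetric grid, stage 1 recovers the optical width without being biased by the unfitted resist term, which is the physicality argument made above.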
Automatic numerical determination of lateral influence functions for fast-CAD
Show abstract
This paper presents kernel convolution with pattern matching (KCPM), which is an updated version of fast-CAD
pattern matching for assessing lithography process variations. With KCPM, kernels that capture lateral feature
interaction between features due to process variations are convolved with a mask layout to calculate a match
factor, which indicates approximate change in intensity at the target location. The algorithm incorporates
a custom source, a mask with electromagnetic effects, and an arbitrary pupil function. For further accuracy
improvement, we introduce a source-splitting technique. Though evaluation speed decreases, the R2 correlation
between the match factor and the change in intensity increases. Results are shown with R2 correlation as high as 0.99 for
nearly coherent and annular illumination. Additionally, with a numerical aperture of 1.35, unbalanced quadrupole
illumination, 10 mλ RMS random aberration in the projection optics, and a complex mask with EMF effects included,
R2 correlation of more than 0.87 is achieved. The process is extremely fast (40 μs per location), making it valuable
for a wide range of applications, most commonly hot-spot detection and optimization.
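As a rough sketch of the match-factor idea (not the authors' implementation), a kernel can be correlated with the layout patch around a target location; the layout geometry, Gaussian kernel shape, and sizes below are all illustrative, since real kernels are derived from imaging theory and the specific process variation.

```python
import numpy as np

# Hypothetical binary layout: 1 inside features, 0 in clear areas.
layout = np.zeros((64, 64))
layout[28:36, 10:54] = 1.0   # main line feature
layout[20:24, 10:54] = 1.0   # neighboring line contributing lateral influence

# A radially symmetric Gaussian standing in for a lateral-influence kernel.
yy, xx = np.mgrid[-8:9, -8:9]
kernel = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
kernel /= kernel.sum()

def match_factor(lay, ker, row, col):
    """Correlate the kernel with the layout patch centered on (row, col)."""
    h = ker.shape[0] // 2
    patch = lay[row - h:row + h + 1, col - h:col + h + 1]
    return float(np.sum(patch * ker))

mf_edge = match_factor(layout, kernel, 32, 32)    # on the main feature
mf_empty = match_factor(layout, kernel, 50, 32)   # in an empty region
print(mf_edge > mf_empty)  # the feature location sees more lateral influence
```

The match factor is just this correlation evaluated per location, which is why the method can stay in the microsecond range: one small dot product per target point instead of a full imaging simulation.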
Aerial image model and application to aberration measurement
Show abstract
In this paper, we present a streamlined aerial image model that is linear with respect to the projection optics' aberrations. The model includes the impact of NA, partial coherence, and aberrations on the full aerial image as measured on an x-z grid. The model allows automatic identification of the image's primary degrees of freedom, such as bananicity and Y-icity, among others. The model is based on physical simulation and statistical analysis. Through several stages of multivariate analysis, a reduced-dimensionality description of image formation is obtained, using principal components on the image side and lumped factors on the parameter side. The modeling process is applied to the aerial images produced by the alignment sensor in a 0.75NA ArF scanner while the tool is in integration mode and aberration levels are high. Approximately 20 principal components are found to have a high signal-to-noise ratio in the image set produced by varying illumination conditions and considering aberrations represented by 33 Zernike polynomials. The combined coefficients are extracted and the measurement repeatability is presented. The analysis portion of the model is then applied to the measured coefficients, and a subset of the projection lens aberrations is solved for.
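The principal-component step can be illustrated on synthetic data: an image set built from a mean profile plus two aberration-like modes, with the components recovered by SVD. The mode shapes and counts are invented for illustration and are not the paper's 33-Zernike setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for aerial image profiles, flattened to vectors:
# a mean image plus two aberration-driven modes plus small noise.
n_images, n_pixels = 200, 100
x = np.linspace(-1, 1, n_pixels)
mean_img = np.exp(-x**2 / 0.1)
mode1 = x * mean_img            # an asymmetry ("Y-icity"-like) mode
mode2 = x**2 * mean_img         # a curvature ("bananicity"-like) mode
coeffs = rng.normal(size=(n_images, 2))
images = (mean_img + coeffs @ np.vstack([mode1, mode2])
          + 1e-4 * rng.normal(size=(n_images, n_pixels)))

# Principal components via SVD of the mean-centered image set.
centered = images - images.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
explained = S**2 / np.sum(S**2)
print(explained[:2].sum() > 0.99)  # two modes dominate, as constructed
```

In the paper's setting, the rows of `Vt` play the role of the ~20 high-SNR principal components, and the per-image scores are what gets regressed against the Zernike parameterization.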
Methods for benchmarking photolithography simulators: part V
Show abstract
As the semiconductor industry moves to double patterning solutions for smaller feature sizes,
photolithography simulators will be required to model the effects of non-planar film stacks in the
lithography process. This presents new computational challenges for modeling the exposure, post-exposure
bake (PEB), and development steps. The algorithms are more complex, sometimes requiring very different
formulations than in the all-planar film stack case. It is important that the level of accuracy of the models
be assessed.
For these reasons, we have extended our previous papers in which we proposed standard benchmark
problems for computations such as rigorous EMF mask diffraction, optical imaging, PEB, and development
[1-4]. In this paper, we evaluate the accuracy of the new PROLITH wafer topography models. The
benchmarks presented here pertain to the models (and their associated outputs) most affected by the switch
to non-planar film stacks: imaging at the wafer (image intensity in-media) and PEB (blocked polymer
concentration). Closed-form solutions are formulated with the same assumptions used in the model
implementation. These solutions can be used as an absolute standard and compared against a simulator. The
benchmark can then be used to judge the simulator, in particular as it applies to speed vs. accuracy
tradeoffs.
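A miniature version of this benchmarking approach can be shown with a diffusion step (a stand-in for a PEB solver) checked against the closed-form decay of a sinusoid; this illustrates the methodology of comparing a simulator against an analytic standard, not one of the published benchmark problems.

```python
import numpy as np

# For u_t = D u_xx with u(x, 0) = cos(kx) on a periodic domain, the exact
# solution is u(x, t) = exp(-D k^2 t) cos(kx). We run a simple explicit
# finite-difference "simulator" and measure its error against that standard.
L, n = 2 * np.pi, 256
x = np.linspace(0, L, n, endpoint=False)
dx = L / n
k, D, t_end = 3.0, 0.01, 1.0

u = np.cos(k * x)
dt = 0.2 * dx**2 / D                # stable explicit time step
steps = int(t_end / dt)
for _ in range(steps):
    u = u + D * dt / dx**2 * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

exact = np.exp(-D * k**2 * steps * dt) * np.cos(k * x)
rel_err = np.max(np.abs(u - exact)) / np.max(np.abs(exact))
print(rel_err < 1e-3)
```

The relative error against the closed-form solution is exactly the kind of absolute accuracy figure the benchmark problems provide, and sweeping `dx` or `dt` exposes the speed-vs-accuracy tradeoff mentioned above.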
Selective inverse lithography methodology
Show abstract
The selective inverse lithography technology (ILT) approach recently introduced by the authors [1] has proven advantageous for extending the life span of lower-NA 193nm exposure tools to achieve satisfactory 65nm contact layer patterning. We intend to find an alternative solution that avoids the need for higher-NA tools and advanced light source optimization. In this paper we explore possible region selection criteria for ILT application, based on pitch, for full-chip optical proximity correction (OPC). By studying the impact of a given selection criterion on runtime, resolution, and the process window, we recommend an optimal combination. With a justified choice of ILT selection criterion, we construct a hybrid OPC flow comprising a recursive sequence of direct assist feature generation, selective ILT application, layout repair, model-based OPC, and hot-spot screening.
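A pitch-based selection rule of the kind explored here might look like the following sketch; the cutoff value, coordinates, and the pairwise rule are hypothetical, chosen only to show how dense regions get routed to the expensive ILT treatment while the rest keep conventional OPC.

```python
# Features are 1D contact-center coordinates in nm; pairs closer than the
# cutoff (i.e. below the critical pitch) are flagged for ILT.
PITCH_CUTOFF_NM = 130

contact_centers = [0, 90, 180, 400, 700, 820, 910]  # nm, illustrative

def select_ilt_regions(centers, cutoff):
    """Return the sorted set of contacts whose local pitch is below cutoff."""
    selected = set()
    for a, b in zip(centers, centers[1:]):
        if b - a < cutoff:          # dense pair -> both contacts go to ILT
            selected.update((a, b))
    return sorted(selected)

print(select_ilt_regions(contact_centers, PITCH_CUTOFF_NM))
```

Tightening or loosening the cutoff directly trades runtime (more contacts through ILT) against resolution and process window, which is the tradeoff the paper studies.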
CDU linear model based on aerial image principal components
Show abstract
In this paper, we present an image quality model and a process window model that are linear or quadratic with respect to common pupil-space errors. Similar in simplicity to other CDU models, our model extends the linear representation to comprehensive image quality specifications over a large focus-dose grid. With this model we identify corrections to the full Bossung curve, or to process window shapes, that are proportional to aberration levels.
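The kind of linear-in-aberration Bossung correction described can be sketched with a toy least-squares fit; the sensitivity coefficients below are illustrative, not calibrated values.

```python
import numpy as np

# CD vs. focus is approximated as a quadratic Bossung curve; an aberration
# adds a correction term proportional to its level (the linear model).
focus = np.linspace(-0.15, 0.15, 7)          # um
cd0 = 45.0 - 300.0 * focus**2                # nominal Bossung (nm), illustrative
aberr = 0.02                                 # e.g. one Zernike coefficient (waves)
cd = cd0 + 40.0 * aberr * focus              # linear-in-aberration tilt term

# Recover the quadratic Bossung and the linear correction by least squares.
A = np.vstack([np.ones_like(focus), focus, focus**2]).T
c0, c1, c2 = np.linalg.lstsq(A, cd, rcond=None)[0]
print(round(c0, 1), round(c1, 2), round(c2, 0))  # -> 45.0 0.8 -300.0
```

The recovered tilt coefficient (0.8 = 40.0 × 0.02) scales with the aberration level, which is exactly the proportionality the model exploits across the focus-dose grid.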
Impact of illumination on model-based SRAF placement for contact patterning
Show abstract
Sub-Resolution Assist Features (SRAFs) have been used extensively to improve the process latitude for
isolated and semi-isolated features in conjunction with off-axis illumination. These SRAFs have typically
been inserted based upon rules which assign a global SRAF size and proximity to target shapes. Additional
rules govern the relationship of assist features to one another, and for random logic contact layers, the
overall ruleset can become rather complex. It has been shown that model-based placement of SRAFs for
contact layers can result in better worst-case process window than that obtained with rules, and various
approaches have been applied to affect such placement. The model comprehends the specific illumination
being used, and places assist features according to that model in the optimum location for each contact
hole. This paper examines the impact of various illumination schemes on model-based SRAF placement,
and compares the resulting process windows. Both standard illumination schemes and more elaborate
pixel-based illumination pupil fills are considered.
A novel decomposition of source kernel for OPC modeling
Show abstract
The accuracy and efficiency of OPC (optical proximity correction) modeling have become paramount in low-k1 lithography. However, OPC model accuracy must be balanced against the efficiency of model calibration and pattern correction: accuracy is usually improved by using more kernels to represent the model, but the runtime of model setup and pattern correction also increases as the kernel count grows.
A novel decomposition of the source kernel for OPC model calibration is presented in this study to maintain model accuracy while keeping OPC runtime at an acceptable level. First, the source kernel is decomposed into multiple sub-source kernels; then the magnitude of the electric field for each decomposed sub-source is modulated in the frequency domain. Finally, the resultant source is the combination of many different sub-sources, representing the tool-specific characteristics. Model accuracy, model stability, and modeling runtime were compared among the decomposed-source, ideal-source, and measured-source models. The results showed that the modeling residual RMS error and predictive capability of the decomposed source are comparable to the measured source and superior to the ideal source. As for modeling efficiency, the decomposed source is up to 5 times faster than the measured source and only a few percent slower than the ideal-source approach.
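A minimal sketch of the sub-source idea, assuming a radial split of a top-hat pupil-plane source with per-ring magnitude weights; the real decomposition and modulation are performed in the frequency domain with weights fitted to wafer data, so everything below is illustrative.

```python
import numpy as np

# Ideal top-hat source on a pupil-plane grid.
n = 64
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
r = np.hypot(xx, yy)
source = (r <= 0.9).astype(float)

# Split into radial sub-sources and modulate each one's magnitude.
rings = [(0.0, 0.3), (0.3, 0.6), (0.6, 0.9)]   # illustrative radial split
weights = [1.0, 0.85, 0.7]                      # illustrative per-ring weights

decomposed = np.zeros_like(source)
for (r0, r1), w in zip(rings, weights):
    decomposed += w * source * ((r >= r0) & (r < r1))

# The recombined source now deviates from the ideal top-hat toward the edge,
# mimicking tool-specific intensity falloff.
print(decomposed.max() <= 1.0 and decomposed.sum() < source.sum())
```

With a handful of weights per sub-source to regress, the calibration stays close to ideal-source speed while capturing measured-source behavior, which is the tradeoff the abstract reports.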
Methods for assessing empirical model parameters and calibration pattern measurements
Show abstract
Assessing an empirical model for ILT or OPC on a full-chip scale is a non-trivial task because the model's fit to
calibration input data must be balanced against its robust prediction on wafer prints. When a model does not fit the
calibration measurements well, we face the difficult choice between readjusting model parameters and re-measuring
wafer CDs of calibration patterns. On the other hand, when a model does fit very well, we will still likely have the
nagging suspicion that an overfitting might have occurred. Here we define a few objective and quantitative methods for
model assessment. Both theoretical foundation and practical use are presented.
A simplified reaction-diffusion system of chemically amplified resist process modeling for OPC
Show abstract
As semiconductor manufacturing moves to 32nm and 22nm technology nodes with 193nm water immersion
lithography, the demand for more accurate OPC modeling is unprecedented to accommodate the diminishing
process margin. Among all the challenges, modeling the process of Chemically Amplified Resist (CAR) is a
difficult and critical one to overcome. The difficulty lies in the fact that it is an extremely complex physical and
chemical process. Although there are well-studied CAR process models, they are usually developed for rigorous TCAD lithography simulators, making them unsuitable for OPC simulation tasks, which require full-chip capability at an acceptable turn-around time. In our recent endeavors, a simplified reaction-diffusion model capable
of full-chip simulation was investigated for simulating the Post-Exposure-Bake (PEB) step in a CAR process. This
model uses aerial image intensity and background base concentration as inputs along with a small number of
parameters to account for the diffusion and quenching of acid and base in the resist film. It is appropriate for OPC
models with regards to speed, accuracy and experimental tuning. Based on wafer measurement data, the parameters
can be regressed to optimize model prediction accuracy. This method has been tested to model numerous CAR
processes with wafer measurement data sets. Model residual of 1nm RMS and superior resist edge contour
predictions have been observed. Analysis has shown that the resist models so obtained are separable from the effects of the optical system, i.e., a resist model calibrated under one illumination condition can be carried over to a process with different illumination conditions. It is shown that the simplified CAR system has great potential for application to full-chip OPC simulation.
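The simplified PEB step can be sketched as a 1D reaction-diffusion loop: acid from the aerial image diffuses and is quenched by base while driving deprotection of the polymer. All coefficients and the deprotection form below are illustrative, not the paper's calibrated parameters.

```python
import numpy as np

# Explicit finite-difference loop for a toy acid/base PEB model.
n, dx, dt, steps = 200, 1.0, 0.2, 500
x = np.arange(n) * dx
acid = 0.5 * (1 + np.cos(2 * np.pi * x / 100))   # image-like initial acid
base = np.full(n, 0.3)                           # uniform quencher load
deprot = np.zeros(n)                             # deprotected fraction
D_acid, k_quench, k_amp = 0.5, 2.0, 0.02         # illustrative coefficients

for _ in range(steps):
    lap = (np.roll(acid, 1) - 2 * acid + np.roll(acid, -1)) / dx**2
    react = k_quench * acid * base               # acid-base quenching
    acid = acid + dt * (D_acid * lap - react)    # diffusion + loss
    base = base - dt * react
    deprot = deprot + dt * k_amp * acid * (1 - deprot)  # catalytic deprotection

blocked = 1 - deprot   # blocked-polymer concentration, the model output
print(blocked.min() < blocked.max() < 1.0)
```

The free parameters (diffusivity, quench rate, amplification rate) are the small set that would be regressed against wafer CD measurements; because they describe only the resist, the fitted model can plausibly be reused under a different illumination, as the abstract argues.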
Improved process window modeling techniques
Show abstract
The continuous reduction of device dimensions and densities of integrated circuits increases the demand for accurate
process window models used in optical proximity correction. Focus and dose are process parameters that contribute significantly to the overall critical feature dimension error budget. The increased number of process conditions adds to the model calibration time, since a new optical model needs to be generated for each focus condition.
This study shows how several techniques can reduce the calibration time by appropriate selection of process conditions
and features while maintaining good accuracy. Experimental data is used to calibrate models using a reduced set of data.
The resulting model is compared with the model calibrated using the full set of data. The results show that using a
reduced set of process conditions and using process sensitive features can yield a model as accurate as the model
calibrated using the full set but in a shorter amount of time.
Poster Session: Tools and Process Control
Lithography cycle time improvements using short-interval scheduling
Show abstract
Partially and fully automated semiconductor manufacturing facilities around the world have employed automated real-time dispatchers (RTD) as a critical element of their factory management solutions. The success of RTD is attributable to a detailed and extremely accurate database that reflects the current state of the factory, consistently applied dispatching policies, and continuous improvement of those dispatching policies.
However, many manufacturers are now reaching the benefit limits of pure dispatching-based or other "heuristic-only" solutions. A new solution is needed that combines locally optimized short-interval schedules with RTD policies to target further reductions in product cycle time.
This paper describes an integrated solution that employs four key components:
1. real-time data generation,
2. simulation-based prediction,
3. locally optimized short-interval scheduling, and
4. schedule-aware real-time dispatching.
The authors describe how this solution was deployed in the lithography and wet/diffusion areas, and report the measured improvements.
Topography-aware BARC optimization for double patterning
Show abstract
This paper aims at identifying appropriate bottom anti-reflective coatings (BARCs) for double patterning techniques
such as Litho-Freeze-Litho-Etch (LFLE). A short introduction into the employed optimization methodology, including
variables, figures of merit, models and optimization algorithms is given. A study on the impact of a refractive index
modulation caused by the first lithographic step is presented. Several optimization surveys taking the index modulation
into account are set forth, and the results are discussed. In addition to procedures that optimize one
litho step at a time, a co-optimization study of both litho steps is proposed. Finally, two multi-objective optimization
procedures that allow for a post-optimization exploration and selection of optimum solutions are presented. Numerous
solutions are discussed in terms of their anti-reflectance behavior and their manufacturing feasibility.
A novel method to reduce wafer topography effect for implant lithography process
Show abstract
Wafer topography structures present during the implant lithography process, including shallow trench isolation and the poly gate, can result in severe degradation of the resist profile and significant critical dimension variation. Because bottom anti-reflective coating (BARC) is not suitable for implant lithography due to plasma-induced substrate damage, developable bottom anti-reflective coating (DBARC) is currently the most promising solution for eliminating wafer topography effects in implant layer lithography. However, several challenges remain, and DBARC is not yet ready for mass production. In this study, a novel method is proposed to mitigate wafer topography effects by use of sub-resolution features. Compared with DBARC, this new approach is much more cost effective. A numerical study using the Sentaurus-Litho simulation tool shows that the new method is promising and deserves more comprehensive investigation.
Immersion BARC for hyper NA applications II
Yu-Chin Huang,
Kai-Lin Chuang,
Tsung-Ju Yeh,
et al.
Show abstract
Reflectivity control through angle is challenging at hyper NA, especially for logic devices, which have various pitches in the same layer. A multilayer antireflectant system is required to control the complex reflectivity resulting from the range of incident angles. In our previous work, we showed the successful optimization of multilayer antireflectant systems at hyper NA for BEOL layers. In this paper, we show the optimization of new multilayer bottom anti-reflectant systems to meet new process requirements for a 28nm-node logic device. During manufacturing, a rework process is necessary when critical dimension or overlay does not meet specifications, and some substrates are sensitive to rework; as a result, litho performance, including line width roughness (LWR), can change. Optimizations have been carried out on various stack options to improve LWR. An immersion tool at 1.35NA was used to perform the lithography tests, and simulation was performed using PROLITH software.
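Single-film reflectivity optimization of the kind underlying BARC design can be sketched with the standard thin-film formula at normal incidence (the paper's problem is through-angle and multilayer, so this is only the simplest case); all refractive indices below are approximate or illustrative, not vendor data.

```python
import numpy as np

def reflectivity(n_top, n_film, n_sub, thickness, lam):
    """Normal-incidence reflectivity of one absorbing film between two media.

    Convention: n = n' + i n'' with phase factor exp(2i*beta), so an
    absorbing film attenuates the round-trip term.
    """
    r12 = (n_top - n_film) / (n_top + n_film)
    r23 = (n_film - n_sub) / (n_film + n_sub)
    beta = 2 * np.pi * n_film * thickness / lam
    r = (r12 + r23 * np.exp(2j * beta)) / (1 + r12 * r23 * np.exp(2j * beta))
    return abs(r) ** 2

lam = 193.0                  # nm
n_resist = 1.7               # illustrative resist index at 193 nm
n_barc = 1.8 + 0.45j         # illustrative absorbing BARC
n_si = 0.88 + 2.78j          # silicon near 193 nm (approximate)

# Sweep BARC thickness to locate a low-reflectivity operating point.
t = np.linspace(10.0, 100.0, 181)
R = np.array([reflectivity(n_resist, n_barc, n_si, ti, lam) for ti in t])
print(R.min() < 0.02 < R.max())
```

The sweep shows the characteristic swing curve: reflectivity oscillates with thickness and dips below a workable level only near specific film thicknesses, and at hyper NA the same optimization must hold simultaneously over a cone of incidence angles, which is what forces the multilayer stacks discussed above.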
Methods and challenges to extend existing dry 193nm medium NA lithography beyond 90nm
Show abstract
In order to meet the demands of further shrinking our mature 90nm logic litho technologies under the constraints of cost and the available toolset in a 200mm fab environment, a project called "Push to the Limits" was started. The aim is to extend the lifetime and capabilities of the existing dry 193nm litho toolset, with its medium-to-low numerical aperture, combined with available materials and processes known to aid CD miniaturization, and to shrink the 90nm logic litho process as far as possible. To achieve this, various options were explored and evaluated, e.g. optimization of illumination conditions, evaluation of new materials, use of advanced RET techniques (OPC, LfD, DfM and ILT), and resolution enhancement by chemical shrink (RELACS®). In this project we demonstrate how we were able to extend our existing 90nm technology capability down close to 65nm-node litho requirements on the most critical layers. We present overall results for the most critical layers in general, and specifically for the most difficult layer, contact. A typical contact litho target in the 100nm region was enabled, while realization of a 90nm ADI target is possible with the addition of new process materials.
Examining reflectivity criterion for various ArF lithography
Show abstract
As feature sizes continue to shrink toward the 4X-nm regime, ArF lithography has already moved to immersion processing and become sufficiently mature. An important factor that strongly influences the photo process window during initial phase development is optical reflection from an imperfect substrate design. From previous experience, reflection can be finely optimized by adjusting the TARC (top anti-reflection coating) or BARC (bottom anti-reflection coating) thickness through the reflectivity index. However, the actual reflectivity criteria for the various ArF lithography processes are unlikely to be the same, since they depend on system type (wet/dry), node (feature size), illumination type, and even substrate effects, and they must be examined to retain a decent process window. In this paper, experimental results from the various ArF processes mentioned above are compared with the reflectivity index from the PROLITH simulation engine, and the reflectivity criteria for each case are distinctly clarified. Furthermore, the effects of reflection on several optically caused patterning results, e.g. IDB (iso-dense bias) and OPC (optical proximity correction) accuracy, are also discussed. The results show that a more severe reflection criterion is required as feature size shrinks toward the 4X-nm node, and that a RET-applied (resolution enhancement technology) process shows the opposite trend. The experimental results show that IDB is strongly affected by reflection and is an important factor influencing the examination of the reflection criterion.
CD-uniformity for 45nm NV memory on product-stack
Show abstract
The CD uniformity budget for a 45-nm NV memory device requires the analysis and compensation of each individual contributing factor. A dedicated simulation tool, "CDU Predictor", helps to quantify the impact of the main scanner and process factors for a comprehensive study of CD uniformity on an ideal flat wafer. However, this analysis could underestimate the real CD distribution on a production wafer if artefacts induced by thin-film effects and underlying device topography significantly increase the contribution of the optical leveling device to the total focus error and hence spread the CD distribution for processes with low DOF. Such artefacts can be eliminated by applying an offset map obtained by probing the mechanical top surface of the resist stack with an air gauge (AirGauge Improved LEvelling, AGILE). The systematic variation of CD across the wafer, whether due to fingerprints of the reticle, the device topography, the track process, or the exposure tool, can be mapped into dose corrections for compensation (DoseMapper). We discuss an experimental case combining both tools for effective CD uniformity optimization.
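In its simplest linear form, a DoseMapper-style conversion from a systematic CD fingerprint to per-field dose corrections reduces to dividing the CD error by the dose sensitivity; the numbers below are illustrative, not from the paper.

```python
import numpy as np

# Systematic per-field CDs (nm) and an assumed known dose sensitivity.
target_cd = 45.0                       # nm
dcd_ddose = -2.0                       # nm per mJ/cm^2 (more dose -> smaller CD)
measured_cd = np.array([45.8, 45.2, 44.6, 45.4, 46.0])

# First-order correction: dose offset that moves each field back to target.
dose_offsets = (target_cd - measured_cd) / dcd_ddose       # mJ/cm^2
corrected_cd = measured_cd + dcd_ddose * dose_offsets
print(np.allclose(corrected_cd, target_cd))
```

In practice the sensitivity varies with feature and pitch and the fingerprint must first be separated from random noise, but this first-order inversion is the core of mapping CD error into dose compensation.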
Analysis of photoresist edge bead removal using laser light and gas
Show abstract
Wafer edge defects are currently considered a major problem as they negatively impact device yields in integrated circuit
manufacturing, especially in immersion lithography. A primary source of edge defects is from particles of photoresist
originating from the edge bead of resist caused by spin coating. In this paper, photoresist edge bead removal (EBR) is
studied in a series of experiments using a laser and gas cleaning system. One goal of the experiments was to reduce the
edge exclusion by gradually reducing the area cleaned by the laser and gas system. Reduction of EBR width will increase
die yield. A number of varying exposure algorithms were tested, and are described along with microscope and SEM
photos of the resulting edge geometry and surface condition. Another goal of these experiments was the time-efficient removal of thick edge beads, a problem for conventional expose/develop methods. A matrix of varying laser parameters and gas types was run to produce a best-known method (BKM) for meeting these goals.