Proceedings Volume 5042

Design and Process Integration for Microelectronic Manufacturing


View the digital version of this volume at SPIE Digital Library.

Volume Details

Date Published: 10 July 2003
Contents: 8 Sessions, 35 Papers, 0 Presentations
Conference: Advanced Microelectronic Manufacturing 2003
Volume Number: 5042

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Advanced RETs
  • Technology Modeling, CAD, and Optimization
  • DFM and Information Management
  • Design, Design Objectives, and Validation
  • Devices, Layouts, and Patterning
  • DFM and Information Management
  • Image Quality and Design Rules
  • Poster Session
Advanced RETs
Layout optimization at the pinnacle of optical lithography
Lars W. Liebmann, Greg A. Northrop, James Culp, et al.
This paper attempts to shed more light on the widely acknowledged need to improve the manufacturability of integrated chip layouts for sub-100nm technology nodes. After reviewing the parametric performance targets and time constraints of the 65nm and 45nm technology nodes, the paper elaborates on the principles of popular resolution enhancement techniques, their impact on chip layouts, and the opportunity for broad layout improvement which they afford. Finally, the viability and feasibility of layout optimization based on a design-for-manufacturability mantra and enabled through "radical design restrictions" is explored.
Dense only phase-shift template lithography
The steady move towards feature sizes ever deeper in the subwavelength regime has necessitated the increased use of aggressive resolution enhancement techniques (RET) in optical lithography. The use of ever more complex RET methods including strong phase shift masks and complex OPC has led to an alarming increase in the cost of photomasks, which cannot be amortized by many types of semiconductor applications. This paper reviews an alternative RET approach, dense template phase shift lithography, that can substantially reduce the cost of optical RET. The use of simple dense grating templates can also eliminate serious problems encountered in subwavelength lithography including optical proximity and spatial frequency effects. We show that, despite additional design rule restrictions and the use of multiple exposures per critical level, this type of lithography approach can make economic sense depending on the number of wafers produced per critical photomask.
Assessing technology options for 65-nm logic circuits
Dipankar Pramanik, Michel L. Cote, Kevin Beaudette, et al.
The 2001 ITRS roadmap identified the need for tight coupling of design technology with manufacturing technology in order to ensure the successful production of circuits fabricated at the 65nm technology node. The design creation process for 65nm needs to efficiently explore the interaction between device, cell design, and manufacturability. Using fast simulation tools for device and lithography simulation and an automated tool for standard cell generation, various process and cell architectural options were investigated. The average and standard deviation of line width had to be matched to the type of application because of the direct relationship between leakage current and performance. The best process latitude for poly line widths is achieved with Full Phase technology. It is shown that by matching design rules to the Full Phase capabilities and using automated layout tools, manufacturability could be optimized without hurting density or performance.
Generalization of the photo process window and its application to OPC test pattern design
Hans Eisenmann, Kai Peter, Andrzej J. Strojwas
From the early development phase up to the production phase, test patterns play a key role in microlithography. The requirement for test patterns is to represent the design well and to cover the space of all process conditions, e.g. to investigate the full process window and all other process parameters. This paper shows that current state-of-the-art test patterns do not address these requirements sufficiently and makes suggestions for a better selection of test patterns. We present a new methodology to analyze an existing layout (e.g. logic library, test patterns, or full chip) for critical layout situations which does not require precise process data. We call this method "process space decomposition" because it aims to decompose the process impact on a layout feature into a sum of single independent contributions, the dimensions of the process space. This is a generalization of the classical process window, which examines the defocus and exposure dependency of given test patterns, e.g. the CD value of dense and isolated lines. In our process space we additionally define the dimensions resist effects, etch effects, mask error, and misalignment, which describe the deviation of the printed silicon pattern from its target. We further extend it by the pattern space using a product-based layout (library, full chip, or synthetic test patterns). The criticality of patterns is defined by their deviation due to the aerial image, their sensitivity to the respective dimension, or combinations of these. By exploring the process space for a given design, the method allows the most critical patterns to be found independent of specific process parameters.
The paper provides examples for different applications of the method: (1) selection of design-oriented test patterns for lithography development; (2) test-pattern reduction in process characterization; (3) verification/optimization of printability and performance of post-processing procedures (like OPC); (4) creation of a sensitive process monitor.
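The process-space idea lends itself to a simple ranking sketch: treat each pattern's printed deviation as a sum of independent per-dimension sensitivities and surface the patterns with the largest totals. A toy illustration (the dimension names follow the abstract; the API, pattern names, and all numbers are hypothetical, not the authors' implementation):

```python
# Hypothetical sketch of "process space decomposition" ranking.
# Each pattern carries per-dimension sensitivities (nm of CD deviation
# per unit excursion along that process dimension).
DIMENSIONS = ["defocus", "exposure", "resist", "etch", "mask_error", "misalignment"]

def criticality(pattern_sensitivities):
    """Total deviation, assuming independent additive contributions."""
    return sum(abs(pattern_sensitivities.get(d, 0.0)) for d in DIMENSIONS)

def most_critical(patterns, k=3):
    """Rank patterns by combined sensitivity; return the top-k names."""
    ranked = sorted(patterns, key=lambda p: criticality(patterns[p]), reverse=True)
    return ranked[:k]

patterns = {
    "dense_lines":   {"defocus": 1.2, "exposure": 0.8, "etch": 0.3},
    "isolated_line": {"defocus": 2.5, "exposure": 1.1, "mask_error": 0.9},
    "line_end":      {"defocus": 3.0, "exposure": 0.7, "misalignment": 1.4},
}
print(most_critical(patterns, k=2))  # line ends and isolated lines rank first here
```

The point of the decomposition is that such a ranking needs only relative sensitivities per dimension, not a fully calibrated process model.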
Technology Modeling, CAD, and Optimization
Technology CAD for integrated circuit fabrication technology development and technology transfer
In this paper, systematic simulation-based methodologies for integrated circuit (IC) manufacturing technology development and technology transfer are presented. In technology development, technology computer-aided design (TCAD) tools are used to optimize the device and process parameters to develop a new generation of IC manufacturing technology by reverse engineering from the target product specifications. In technology transfer to a manufacturing co-location, TCAD is used for process centering with respect to the high-volume manufacturing equipment of the target manufacturing facility. A quantitative model is developed to demonstrate the potential benefits of the simulation-based methodology in reducing the cycle time and cost of typical technology development and technology transfer projects over traditional practices. The strategy for predictive simulation to improve the effectiveness of a TCAD-based project is also discussed.
Performance-impact limited-area fill synthesis
Yu Chen, Puneet Gupta, Andrew B. Kahng
Chemical-mechanical planarization (CMP) and other manufacturing steps in very deep submicron VLSI have varying effects on device and interconnect features, depending on the local layout density. To improve manufacturability and performance predictability, area fill features are inserted into the layout to improve uniformity with respect to density criteria. However, the performance impact of area fill insertion is not considered by any fill method in the literature. In this paper, we first review and develop estimates for capacitance and timing overhead of area fill insertion. We then give the first formulation of the Performance Impact Limited Fill (PIL-Fill) problem, and describe three practical solution approaches based on Integer Linear Programming (ILP-I and ILP-II) and the Greedy method. We test our methods on two layout test cases obtained from industry. Compared with the normal fill method, our ILP-II method achieves between 25% and 90% reduction in terms of total weighted edge delay (roughly, a measure of sum of node slacks) impact, while maintaining identical quality of the layout density control.
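The trade-off PIL-Fill makes can be illustrated with a toy greedy pass: meet a minimum density per window while preferring fill slots with the smallest estimated timing cost. This is an invented illustration (the data layout, numbers, and function are hypothetical, not the paper's ILP or greedy formulation):

```python
# Toy greedy sketch of performance-impact-limited fill: per window,
# add fill slots cheapest-timing-cost-first until density is met.
def greedy_pil_fill(windows, min_density):
    """windows: list of dicts with 'density' and candidate 'slots',
    each slot a (density_gain, timing_cost) pair.
    Returns, per window, the chosen slots and resulting density."""
    plan = []
    for w in windows:
        density = w["density"]
        chosen = []
        # consider the least timing-harmful fill locations first
        for gain, cost in sorted(w["slots"], key=lambda s: s[1]):
            if density >= min_density:
                break
            density += gain
            chosen.append((gain, cost))
        plan.append({"final_density": density, "fill": chosen})
    return plan

windows = [{"density": 0.20, "slots": [(0.05, 1.0), (0.05, 0.2), (0.10, 0.5)]}]
print(greedy_pil_fill(windows, min_density=0.30))
```

A real formulation would bound slack on timing-critical nets globally (hence the ILP variants) rather than greedily per window.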
Simulation-based data processing using repeated pattern identification
In typical integrated circuit (IC) designs, the final layout generally contains many repeated patterns. Many of these repetitions are captured by the layout hierarchy. That is, the layout contains many cells that are each repeatedly placed in many locations with different transformations. Effective use of such repetition information in computation-intensive operations such as model-based optical proximity correction (OPC), verification, or contour generation can lead to significant performance improvement. However, on many other occasions, such repetition information is not directly available. For example, if the layout is flattened, then all the hierarchy that captures the repetition information is lost. Even in a hierarchical layout, a cell can contain repeated geometries or patterns. In order for an application to take advantage of this property, a mechanism to efficiently capture such repetition information is necessary. In this paper, we consider model-based applications that have a unique property which allows us to find different geometrical patterns that are equivalent in principle for simulation purposes. We introduce a proximity-based pattern identification method which aims at recognizing the maximum amount of repetition in the layout. This method not only captures repeated or symmetric geometries that are present from either the flattening of the hierarchy or within a cell itself, but also finds symmetries within the geometries themselves. The method also finds partial repetitions of geometries that are not completely identical or symmetric. Ideally, these "equivalent" patterns will eventually carry the same processing results with variations small enough to be ignored for the application. For this reason, it is sufficient to run the computationally expensive model-based operations for one pattern of each family and carry the result to the rest of the patterns of the same family.
Doing so reduces the problem size as well as the amount of data that requires processing. The total processing time therefore can be dramatically reduced. We demonstrate the method by using OPC as a test example. We show the level of problem size reduction and job run time reduction due to the specific nature of different layouts.
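The repetition-identification idea can be illustrated with a toy canonicalization scheme: normalize each instance's local geometry so that identically shaped neighborhoods at different placements hash to the same signature, then run the expensive operation once per signature. This sketch is hypothetical and deliberately simplified (it ignores rotation, mirroring, and the partial matches that the paper's method handles):

```python
# Toy repetition detector: group pattern instances by a canonical,
# translation-invariant signature of their geometry.
from collections import defaultdict

def canonical_signature(geometry):
    """Normalize a set of rectangles (x0, y0, x1, y1) by translating the
    bounding box's lower-left corner to the origin, so identical
    neighborhoods at different placements produce identical signatures."""
    min_x = min(r[0] for r in geometry)
    min_y = min(r[1] for r in geometry)
    return tuple(sorted((x0 - min_x, y0 - min_y, x1 - min_x, y1 - min_y)
                        for x0, y0, x1, y1 in geometry))

def group_by_repetition(instances):
    """instances: name -> list of rectangles. Returns families of names
    that can share one model-based (e.g. OPC) computation."""
    groups = defaultdict(list)
    for name, geometry in instances.items():
        groups[canonical_signature(geometry)].append(name)
    return list(groups.values())

instances = {
    "a": [(0, 0, 10, 2), (0, 4, 10, 6)],
    "b": [(100, 50, 110, 52), (100, 54, 110, 56)],  # same pattern, shifted
    "c": [(0, 0, 10, 2)],
}
print(group_by_repetition(instances))  # "a" and "b" fall into one family
```

Processing one representative per family and copying the result to the rest is what yields the problem-size reduction described above.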
Model-assisted placement of subresolution assist features: experimental results
Lithography models calibrated from experimental data have been used to determine the optimum insertion strategy of sub-resolution assist features in a 130 nm process. This work presents results for 3 different illumination types: Standard, QUASAR, and Annular. The calibrated models are used to classify every edge in the design based on its optical properties (in this case image-log-slope). This classification is used to determine the likelihood of an edge to print on target with the maximum image-log-slope. In other words, the method classifies design edges not in geometrically equivalent classes, but according to equivalent optical responses. After all the edges are classified, a rule table is generated for every process. This table describes the width and separation of the assist features based on a global cost function for each illumination type. The tables are later used to insert the assist features of various widths and separations using pre-defined priority strategies. After the bars have been inserted, OPC is applied to the main structures in the presence of the newly added assist features. Critical areas are tagged for increased fragmentation allowing certain areas to receive the maximum amount of correction and compensate for any proximity effects due to the sub-resolution assist features. The model-assisted solution is compared against a traditional rule-based solution, which was also derived experimentally. Both scenarios have model based OPC correction applied using simulation and experimental data. By comparing both cases it is possible to assess the advantages and disadvantages of both methods.
OPC on real-world circuitry
Sean C. O'Brien, Tom Aton, Mark E. Mason, et al.
In the face of Moore's Law, the lithographic community is finding increasing pressure to do more with less. More, in the sense that lithographers are expected to use an exposure wavelength "lambda" that is shrinking at a slower rate than the critical dimensions (CDs) of devices. This has resulted in the introduction of complicated Resolution Enhancement Technology (RET) schemes. Less, in the sense that the competitive marketplace has resulted in shortened development cycles. These shortened development times mean that lithography and RET teams are often expected to demonstrate "first pass success" with increasingly complex lithographic solutions. Unfortunately, first silicon on product prototypes may reveal deficiencies in an OPC infrastructure which had been developed using only research and development (R&D) testdie. The primary cause of these deficiencies is that the development and test-structure layouts frequently lack the 2D complexity of real circuitry. OPC models and lithography R&D traditionally compensate well for failures and marginal sites on the simple patterns of R&D testdie. The more complex geometries of real layouts frequently present new challenges. Here, we describe a program initiated at TI to add a complex pattern to the very first test reticle generated for a new technology node. This pattern is auto-generated and includes a random combination of representative circuits at the design rule for that node. OPC is applied to the pattern almost immediately after layout. The distribution of printed features and marginal sites can then be identified early using simulation. Scanning Electron Microscope (SEM) images of resist and post-etch features can further identify sites requiring changes once reticles are received. We have shown that this early OPC R&D on complex geometries can prevent several OPC revision cycles and enable faster volume yield ramp.
DFM and Information Management
Characterization and modeling of intradie variation and its applications to design for manufacturability
Sharad Saxena, Carlo Guardiani, Michele Quarantelli, et al.
Device scaling increases the impact of within-die variation or mismatch on the performance and yield of many important components of System on Chip (SoC) designs. This has created a need for accurate characterization, modeling, and simulation of mismatch. This paper provides a brief overview of the recent progress in these areas along with an example illustrating the application of these techniques to Design for Manufacturability (DFM) of Ultra Deep Submicron (UDSM) technologies.
Design, Design Objectives, and Validation
Lithography-driven layout of logic cells for 65-nm node
The ITRS roadmap for the 65nm technology node targets poly gate lengths of 65nm and poly pitches between 140-180nm. In addition, contact overlaps and spacing to diffusion contacts will need to be scaled down. It is very likely that the poly layer will be printed using 193nm high-NA steppers and strong phase shift technologies. Attempts to capture the effect of RET on layout by adding more constraints to the design rules make it difficult to lay out cells using manual tools and can also lead to suboptimal designs. In this paper we describe a methodology that couples automatic cell generation with phase-shifter insertion and image simulation to allow the design space to be explored more fully.
Improved manufacturability by OPC based on defocus data
Jorg Thiele, Ines Anke, Henning Haffner, et al.
The paper describes the advantages of optical proximity correction (OPC) based on defocus data instead of best-focus data. By additionally accepting asymmetric variations of the dimensions of different patterns (e.g., an isolated line that may become wider than its nominal width), this method can deliver structures much more robust against opens and shorts than the standard OPC approach, which is based on data taken at best process conditions. The differences between both OPC methods are compared based on simulations and checked against experimental data of characteristic IC patterns.
LithoScope: an advanced physical modeling system for mask data verification
The complexity in sub-130 nm mask layout often obscures its correctness and true lithography performance. A cost-effective solution to ensure high mask performance in lithography is to apply simulation-based mask layout verification. Because mask layout verification serves as a gateway to the expensive manufacturing process, the model used for verification must have superior accuracy across the process window compared to models used upstream. In this paper, we demonstrate, for the first time, a software system for mask layout verification and optical proximity correction that employs a full resist development model. The new system, LithoScope, predicts wafer patterns by solving optical and resist processing equations on a scale that was until recently considered impractical. Leveraging the predictive capability of the physical model, LithoScope can perform mask layout verification and optical proximity correction under a wide range of processing conditions and for any reticle enhancement technology without the need for multiple model development. We discuss hotspot detection, line width variation statistics, and chip-level process window prediction using a practical cell layout. We show that the LithoScope model can accurately describe the resist-intensive poly gate layer patterning by iso-focal optimization. This system can be used to pre-screen and fix mask data problems before manufacturing to reduce the overall cost of the mask and the product.
Investigation of product design weaknesses using model-based OPC sensitivity analysis
Due to the challenging CD control and resolution requirements of future device generations, a large number of complex lithography enhancement techniques are likely to be used for random logic devices. This increased design, reticle, process, and OPC complexity must be handled flawlessly by process engineers in order to create working circuits. Additionally, the rapidly increasing cost and cycle time of advanced reticles has increased the urgency of obtaining reticles devoid of process-limiting design or OPC errors. We have extended the capability of leading-edge model-based OPC software to find and analyze process-limiting regions in real product designs. Specifically, we have implemented and verified the usefulness of the software for finding design-process limitations due to measured lens aberrations, as well as errors in focus, exposure, or reticle CD control. We present results showing the applications and limitations of these new model-based analysis methods in discovering process-design interaction errors in 90nm and 130nm patterning processes, and in proposing design rule, process, or OPC improvements to mitigate these errors.
Devices, Layouts, and Patterning
Device characteristics of sub-20-nm silicon nanotransistors
This paper presents a systematic simulation-based study on the design, performance, and scaling issues of sub-20 nm silicon nanotransistors. 3D process simulation was used to generate silicon FinFET device structures with fin thickness (Tfin) of 10 to 30 nm, fin height (Hfin) of 50 nm, channel length (Lg) of 10 to 50 nm, and gate oxide thickness (Tox(eff)) of 1.5 nm. 3D device simulation results show that for n-channel FinFETs with Hfin = 50 nm, threshold voltage (Vth) decreases as Lg decreases, and Vth roll-off with decreasing Lg is higher for thicker-Tfin devices. The simulated drive current (IDSAT) decreases as Tfin decreases for Lg ≤ 25 nm, while IDSAT increases as Tfin decreases for Lg ≥ 25 nm. It is also found that for the devices with Hfin = 50 nm, the simulated subthreshold swing (S) increases as Lg decreases for all devices with 10 nm ≤ Tfin ≤ 30 nm and approaches 60 mV/decade for Lg ≥ 40 nm. Also, S decreases as Tfin decreases for Lg < 40 nm devices. The simulated data for 20 nm nFinFETs with Hfin = 50 nm, Tfin = 10 nm, and Tox(eff) = 1.5 nm show excellent device performance with Vth ≈ 0.13 V, IDSAT ≈ 775 μA/μm, Ioff ≈ 3 μA/μm, and S ≈ 83 mV/decade. Finally, the simulation results for 20 nm nFinFETs and conventional nMOSFETs were compared. This study clearly demonstrates the superior performance and scalability of FinFETs down to the near-10 nm regime.
NBTI improvement for pMOS by Cl-contained 1st oxidation in 20A/65A dual-nitrided gate oxide of 0.13-um CMOS technology
Ching-Chen Hao, Min-Hwa Chi, Chao-Chi Chen, et al.
A new method is demonstrated in this paper for improving the NBTI lifetime of pMOS by >3X for I/O (65A) transistors and >2X for core (20A) transistors by using chlorine (Cl)-containing 1st gate oxidation in an advanced dual-gate-oxide 0.13um CMOS technology. The improvement appears related to the residual Si-Cl bonds on the surface of the core and I/O transistor areas (from the Cl-containing 1st oxidation). The transistor beta (measured as Idsat/(Vg-Vt)^2 in saturation mode) is improved (~10%) on pMOS and degraded slightly (~3%) on nMOS, as evidence supporting this mechanism.
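The beta figure of merit used above is simply Idsat/(Vg-Vt)^2 evaluated in saturation; a trivial sketch with invented operating-point numbers (the abstract does not give the bias conditions):

```python
# Toy calculation of the transistor beta figure of merit:
# beta = Idsat / (Vg - Vt)^2, evaluated in saturation mode.
def beta(idsat_uA_per_um, vg_V, vt_V):
    """Saturation-mode transconductance factor, in µA/µm per V^2."""
    return idsat_uA_per_um / (vg_V - vt_V) ** 2

# Hypothetical operating point: 600 µA/µm drive at Vg = 1.2 V, Vt = 0.35 V.
print(round(beta(600.0, 1.2, 0.35), 1))
```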
Library-based process test vehicle design framework
Kelvin Yih-Yuh Doong, L.-J. Hung, Susan Ho, et al.
This work describes a test vehicle design framework which minimizes the discrepancy among the design rule set, test structure design, and testing plan. The framework is composed of the symbolic design rule set, Parameterized-Device, test structure generator, and test vehicle generator. An approach for simplification and consolidation of test structures is proposed to derive a concise test structure library. Finally, the implementation of a test vehicle is presented.
Design-to-process integration: optimizing 130-nm X architecture manufacturing
Robert Dean, Vinod K. Malhotra, Nahid King, et al.
The X Architecture is a novel on-chip interconnect architecture based on the pervasive use of diagonal wiring. This diagonal wiring reduces total chip wire length by an average of 20% and via count by an average of 30%, resulting in simultaneous improvements in chip speed, power, and cost. A thirty percent or greater reduction in via count is a compelling feature for IC design - but can chips with massive amounts of diagonal wiring be manufactured without some other penalty? This paper presents the results of a collaborative project among Cadence Design Systems, Numerical Technologies, DuPont Photomasks, and Nikon, aimed at optimizing each step of the lithography supply chain for the X Architecture, from masks to wafers, at 130 nm.
Using the CODE technique to print complex two-dimensional structures in a 90-nm ground rule process
In a previous paper, we proposed the CODE (Complementary Double Exposure) technique, a new manufacturable reticle enhancement technique (RET) using two binary masks. We demonstrated the printability of 80nm dense (300nm pitch), semi-dense, and isolated lines using the CODE technique and showed good printing results using a 0.63NA ArF scanner. In a more recent article we described all the steps required to develop the CODE application: the binary decomposition and the solutions developed to compensate adequately for line-end shortening. That study was based on aerial image simulations only. In this paper, we give experimental results for printing complex two-dimensional structures for the high-performance version of a 90nm ground rule, 240nm minimal pitch process using the CODE technique. The results for depth of focus (DOF), energy latitude (EL), and mask error enhancement factor (MEEF) through pitch, and for end-cap correction, will be discussed for quadrupole and annular illumination using a 193nm 0.70NA exposure tool. The CODE technique, not only because of its lower cost but also because of its performance, could be a good alternative to the alternating PSM technique, having fewer design penalties and a better mask-making cycle time.
DFM and Information Management
New stream format: progress report on containing data size explosion
The data volumes of individual files used in the manufacture of modern integrated circuits have become unmanageable using existing data format specifications. The ITRS roadmap indicates that single-layer MEBES files in 2002 reached the 50 GB range, worst case. Under the sponsorship of SEMI, a working group was formed to create a new format for describing integrated circuit layouts in a more efficient and extensible manner. This paper reports on the status of this effort and the potential benefits the new format can deliver.
Optimization of the data preparation for variable-shaped beam mask writing machines
As the industry enters the development of the 65nm node, the pressure on the data path and tapeout flow is growing. Design complexity and increased deployment of RET result in rapidly growing file sizes, which have turned the commodity of mask data preparation into a real bottleneck. Mask manufacturing starting with the 130nm nodes is accompanied by an increasing deployment of variable-shaped-beam (VSB) mask writing machines. This transition requires the adaptation of the established data preparation path to these circumstances. Historically, data has been presented to the mask houses mostly in MEBES or similar intermediate formats. Reformatting these data is a redundant operation, which in addition is not very efficient given the constraints of the intermediate formats. An alternate data preparation flow accommodating the larger files and regaining flexibility for TAT and throughput management downstream is suggested. This flow utilizes the hierarchical GDS format as the exchange format in mask data preparation. The introduction of a hierarchical exchange format enables the transfer of a number of necessary data preparation steps into the hierarchical domain. The paper illustrates the benefit of hierarchical processing based on GDS files with experimental data on file size reduction and TAT improvement for direct format conversions vs. re-fracturing, as well as other processing steps. In contrast to raster-scan mask making equipment, in a variable-shaped-beam mask writing machine the writing time and the ability to meet tight mask specifications are affected by data preparation. Most critical are the control of the total shot count, file size, and the efficient suppression of small figures. The paper discusses these performance parameters and illustrates the desired practices.
Compression algorithms for dummy-fill VLSI layout data
Robert B. Ellis, Andrew B. Kahng, Yuhong Zheng
Dummy fill is introduced into sparse regions of a VLSI layout to equalize the spatial density of the layout, improving uniformity of chemical-mechanical planarization (CMP). It is now well-known that dummy fill insertion for CMP uniformity changes the back-end flow with respect to layout, parasitic extraction and performance analysis. Of equal import is dummy fill's impact on layout data volume and the manufacturing handoff. For future mask and foundry flows, as well as potential maskless (direct-write) applications, dummy fill layout data must be compressed at factors of 25 or greater. In this work, we propose and assess a number of lossless and lossy compression algorithms for dummy fill. Our methods are based on the building blocks of JBIG approaches - arithmetic coding, soft pattern matching, pattern matching and substitution, etc. We observe that the fill compression problem has a unique "one-sided" characteristic; we propose a technique of achieving one-sided loss by solving an asymmetric cover problem that is of independent interest. Our methods achieve substantial improvements over commercial binary image compression tools especially as fill data size becomes large.
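The "one-sided" loss model can be illustrated with a toy encoder: rectangles must cover every fill cell, may additionally cover don't-care cells (loss in one direction only), and must never touch a forbidden cell. This greedy row-run sketch is purely illustrative and is not the paper's asymmetric-cover or JBIG-based algorithm:

```python
# Toy one-sided lossy compression of a dummy-fill bitmap: encode the
# fill as horizontal runs that cover all FILL cells, may overcover
# EMPTY_OK ("don't care") cells, and never include FORBIDDEN cells.
FILL, EMPTY_OK, FORBIDDEN = 1, 0, -1   # cell states

def compress(bitmap):
    """Greedily emit maximal horizontal runs of non-forbidden cells that
    contain at least one fill cell; returns (row, col_start, col_end)."""
    rects = []
    for r, row in enumerate(bitmap):
        c = 0
        while c < len(row):
            if row[c] == FORBIDDEN:
                c += 1
                continue
            start = c
            has_fill = False
            while c < len(row) and row[c] != FORBIDDEN:
                has_fill = has_fill or row[c] == FILL
                c += 1
            if has_fill:
                rects.append((r, start, c))  # may overcover EMPTY_OK cells
    return rects

bitmap = [
    [1, 0, 1, -1, 0],
    [0, 0, 0, -1, 1],
]
print(compress(bitmap))  # [(0, 0, 3), (1, 4, 5)]
```

Two runs replace five fill/empty cells here; on large fill regions, merging runs into rectangles is what drives the 25x-plus compression targets mentioned above.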
Image Quality and Design Rules
Precision control of poly-gate CD by local OPC for elimination of microloading effect on 0.13-um CMOS technology
Tzy-Kuang Lee, Yao-Ching Wang, Min-hwa Chi, et al.
The yield impact of local non-uniformity of poly-gate CD, edge profile, and transistor performance (between larger-pitch areas and minimum-pitch areas) is no longer tolerable in advanced CMOS technology, as illustrated in this paper by a 2M SRAM vehicle processed by a 0.13um flow. Micro-loading effects must be minimized for process steps in the poly-gate loop (including poly patterning, hard-mask etching, photoresist (PR) ashing, poly etching, hard-mask removal, wet clean, etc.) so that the accumulated local non-uniformity can be minimized. Additional OPC may also be applied locally (on mask) to compensate for the remaining local non-uniformity. Significantly higher yield of a vehicle (2M SRAM) is demonstrated by both minimizing micro-loading effects in process steps and applying additional local OPC.
Poster Session
Creation and verification of phase-compliant SoC IP for the fabless COT designers
Vinod K. Malhotra, Nahid King, Raymond Leung, et al.
As the semiconductor industry has begun production of subwavelength geometries, technologies such as Optical Proximity Correction (OPC) and Phase-Shifting Masks (PSM) have become requirements in producing integrated circuits. One of these approaches, Alternating PSM (AltPSM), has been adopted by leading-edge semiconductor companies to meet IC manufacturing production requirements. As part of a complete production flow for these processes, SoC IP is required to be "phase compliant". Only through phase compliance can the fabless COT semiconductor market leverage the benefits of subwavelength geometries. This paper introduces the concept of phase compliance and the importance of guaranteeing correct phase topology and phase compliance of layouts for AltPSM. It further proposes a method to create phase-compliant SoC IP, and a process for verifying that SoC IP is phase compliant. Timing characterization data is also included to show that the performance speed of the memory layouts was enhanced by 20% over a regular 0.13 micron process. The paper concludes with some general remarks on how this methodology will be impacted as we move to the 65nm node.
Statistical data assessment for optimization of the data preparation and manufacturing
Increasing integration and system-on-chip approaches raise the complexity of advanced designs. Data preparation, mask, and wafer manufacturing have to cope with these designs while achieving high throughput and tight specifications. One of the biggest variables in a production mask processing flow is the actual design being produced. Layout variability can invalidate process settings by introducing conditions outside of the range the process is calibrated for. Characterization of how parameters such as density distributions, CD distributions, and minimum and maximum CD impact yield will no doubt remain proprietary. However, the ability to characterize a layout by these geometric parameters as well as lithographic parameters is a common need. Gathering this knowledge prior to processing can contribute significantly to the efficiency of applying process recipes once the correlation has been made. The capabilities of a statistical layout analysis are demonstrated and practical applications in mask data preparation and manufacturing are discussed.
Lithographic tradeoffs between different assist feature OPC design strategies
James C. Word, Siuhua Zhu
In recent years many of the problems associated with the use of assist features have been partially or completely resolved. Such issues include mask manufacturing and inspection, software maturity, and the so-called forbidden pitch problem. Still, the lithographer is faced with numerous choices in developing production worthy assist feature designs. This paper will examine some of the choices, and the tradeoffs associated with each. In particular the choice between simple 1D scatter bar designs and various 2D designs will be explored to determine the tradeoffs with lithographic performance. A DRC (Design Rule Check)-driven technique has been developed to highlight potential shortcomings of each individual design strategy. The lithographic impact of these shortcomings has been confirmed with on-Silicon process data.
Resolution enhancement technology requirements for 65-nm node
In this paper, we evaluate various strong and weak resolution enhancement techniques in the context of 65nm technology node requirements. Specifically, we concentrate on a simulation-based performance comparison of the dark-field alternating aperture and chrome-less shifter-shutter phase shifting masks (AAPSM and CLM respectively) for imaging of the critical gate level. Along with the through-pitch aerial image quality, the mask error enhancement factor, proximity effects, and the overall process latitudes are compared. Results show that while there might be multiple approaches in 193nm lithography to pattern isolated and semi-isolated pitches, it is necessary to utilize strong resolution enhancement in order to resolve dense pitches and achieve sufficient common process performance with the required CD control for the 65nm node.
PsmLint: bringing AltPSM benefits to the IC design stage
Pradiptya Ghosh, Chung-Shin Kang, Michael Sanie, et al.
As we delve deeper into subwavelength design and manufacturing challenges and solutions, technologies such as Optical Proximity Correction (OPC) and Phase Shifting Masks (PSM) have become essential to reliably produce advanced integrated circuits. Alternating PSM (altPSM) has demonstrated many recent successes as an effective means to this end. This paper lays the groundwork for defining the IC design components needed to meet altPSM-compliance requirements. The paper addresses the open question of whether we can take into account all the manufacturing requirements and come up with highly abstract manufacturing rules that can be applied to all IC design domains. The paper further proposes a solution with specific rules and algorithms needed to apply altPSM to transistor gate regions, targeted to various domains of IC design such as verification or place and route. Examples include constraints for routers and placement tools, as well as sign-off rules that can be used by designers as well as by production engineers to fine-tune the process and yield for a given design structure. The usability of such a solution is then analyzed to take the practical aspects of IC design into consideration.
Optimizing manufacturability for the 65-nm process node
The 65nm technology node will require a more detailed assessment of the tradeoffs between performance, manufacturability, and cost than any previous generation of technology. Circuits fabricated at the 65nm technology node need to use strong phase-shifting techniques such as full-phase and model-based OPC in order to guarantee printability of critical layers, such as the poly layer. We present a methodology whereby layouts are generated based on a preliminary set of design rules for 65nm and the process latitude is determined using image simulation software. Mask costs were also estimated based on figure counts of the required masks. Tradeoffs between mask costs, manufacturability, and density were made by small changes to the design rules. The simultaneous use of tools that integrate the design creation process with mask generation allows far better optimization than the current methodology, where physical design is separated from the downstream data preparation and processing.
Technology Modeling, CAD, and Optimization
icon_mobile_dropdown
Physical and timing verification of subwavelength-scale designs: I. Lithography impact on MOSFETs
Robert C. Pack, Valery Axelrad, Andrei Shibkov, et al.
Subwavelength lithography at low contrast, or low-k1 factor, leads to new requirements for design, design analysis, and design verification techniques. These techniques must account for inherent physical circuit feature distortions resulting from layout pattern-dependent design-to-silicon patterning processes in this era. These distortions are unavoidable, even in the presence of sophisticated Resolution Enhancement Technologies (RET), and are a 'fact of life' for the designer implementing nanometer-scale designs for the foreseeable low-k1 future. The consequence is that fabricated silicon feature shapes and dimensions are in general printed with far less fidelity to the designer's desired layout than in past generations, and that the designer must work within significantly different margins of geometry tolerance. Traditional (Mead-Conway originated) WYSIWYG (what you see is what you get) design methodologies assume that the designer's physical circuit element shapes are accurate representations of the corresponding shapes on the real fabricated IC, use design rules to verify satisfactory fabrication compliance, and take those shapes as the input both for interconnect parasitic loading calculations and for the transistor models used in performance simulation. However, these assumptions are increasingly poor ones as k1 decreases to unprecedented levels -- with a concomitant increase in patterned feature distortion and fabrication yield failure modes. This paper explores a new paradigm for nanometer-scale design, one in which more advanced models of critical low-k1 lithographic printing effects are incorporated into the design flow to improve upon yield and performance verification accuracy. We start with an analysis of a complex 32-bit adder block circuit design to determine systematic changes in gate length, width, and shape variations for each MOSFET in the circuit due to optical proximity effects.
The physical gate dimensions of each device, as predicted by the simulations, are then incorporated into the circuit simulation models and netlist (schematic) and are used to calculate the changes in critical parametric yield factors such as timing and power consumption in the circuit behavior. These functional consequences create a manufacturability tolerance requirement that relates to function and parametric yield, not just physical manufacturability. We then explore the improvements in functional attributes and manufacturability that arise from systematic correction of these distortions by RET, including simulation-driven model-based OPC, alternating-aperture PSM (altPSM), and altPSM+OPC. This analysis is just one dimension of a systematic methodology that incorporates lithographic effects into a design for manufacturing (DFM) scheme. The benefits promise dramatically improved silicon-signoff verification, predictive performance and yield analysis, and more cost-effective application of RET.
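The back-annotation step described above can be sketched in miniature. The netlist representation and the map of simulated gate-length deviations below are hypothetical stand-ins, not the authors' tool flow:

```python
def apply_printed_dimensions(netlist, delta_l):
    """Substitute printed (as-fabricated) gate lengths into a netlist.

    netlist: {device_name: {"L": drawn_length, "W": width}} (toy format)
    delta_l: {device_name: simulated gate-length deviation}, same units.
    Returns a new netlist ready for re-running timing/power simulation;
    the drawn netlist is left untouched."""
    printed = {}
    for name, params in netlist.items():
        p = dict(params)
        p["L"] = params["L"] + delta_l.get(name, 0.0)
        printed[name] = p
    return printed

# Two 65nm devices; simulation predicts M1 prints 3nm short.
drawn = {"M1": {"L": 65e-9, "W": 200e-9}, "M2": {"L": 65e-9, "W": 400e-9}}
printed = apply_printed_dimensions(drawn, {"M1": -3e-9})
```

Running the circuit simulator on `printed` rather than `drawn` is what turns a purely geometric distortion into a timing and power number, which is the point of the methodology.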
Poster Session
icon_mobile_dropdown
Optimized cobalt silicide formation through etch process improvements
David S. Tucker, Richard Yang, Heather Maines
Cobalt Silicide (CoSi) is used to reduce contact resistance for sub-micron technology such as 0.25μm CMOS. At National Semiconductor, a thin oxide is used to protect areas that are not to be silicided. The selective removal of this oxide to form the silicide mask is critical, as it occurs in the highly sensitive cobalt silicide module where neither over- nor under-etch is acceptable. The final etch uses a low-power CHF2/Ar recipe with good across-wafer uniformity. In keeping with the advanced process control methodology at National, the etch is end-pointed to reduce wafer-to-wafer and etch chamber variability. This paper covers integration aspects of the cobalt silicide module as well as the specifics of the new etch process. The effects of power, pressure, and gas flows and details of the end point setup are reviewed, as well as cross-section analysis and electrical responses.
Design, Design Objectives, and Validation
icon_mobile_dropdown
Effective multicutline QUASAR illumination optimization for SRAM and logic
Lithographers face many hurdles to achieve the ever-shrinking process design rules (PDRs). Proximity effects are becoming more and more an issue requiring model-based Optical Proximity Correction (OPC), sub-resolution assist features, and properly tuned illumination settings in order to minimize these effects while providing enough contrast to maintain a viable process window. For any type of OPC application to be successful, a fundamental illumination optimization must first be completed. Unfortunately, the once trivial illumination optimization has evolved into a major task for ASIC houses that require a manufacturable process window for isolated logic structures as well as dense SRAM features. Since these features commonly appear on the same reticle, today’s illumination optimization must look at “common” process windows for multiple cutlines that include a variety of different feature types and pitches. This is a daunting task for the current single feature simulators and requires a considerable amount of simulation time, engineering time, and fab confirmation data in order to come up with an optimum illumination setting for such a wide variety of features. An internal Illumination Optimization (ILO) application has greatly simplified this process by allowing the user to optimize an illumination setting by simultaneously maximizing the “combined” DOF (depth of focus) over multiple cutlines (simulation sites). Cutlines can be placed on a variety of structures in an actual design as well as several key pitches. Any number of the cutlines can be constrained to the gds drawn CD (critical dimension) while others can be allowed to “float” with pseudo OPC allowing the co-optimization of the illumination setting for any OPC that may be applied in the final design. The automated illumination optimization is then run using a tuned model. Output data is a suggested illumination setting with supporting data used to formulate the recommendation. 
This paper will present the multi-cutline ILO process and compare it with the work involved to do the same optimization using a single feature simulator. Examples will be shown where multi-cutline ILO was able to resolve hard annular aberrations while maintaining the DOF.
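The heart of the multi-cutline approach is scoring each candidate illumination setting by the *common* process window, i.e. the worst DOF over all cutlines, rather than any single feature's DOF. A minimal sketch of that search follows; the scalar sigma parameter and the `toy_dof` model are invented for illustration and bear no relation to the internal ILO application:

```python
def optimize_illumination(settings, cutlines, dof_model):
    """Exhaustive search over candidate illumination settings.

    Each setting is scored by the common DOF: the minimum of
    dof_model(setting, cutline) over all cutlines, so the winner
    must work for every feature and pitch simultaneously."""
    best_setting, best_common_dof = None, -1.0
    for s in settings:
        common = min(dof_model(s, c) for c in cutlines)
        if common > best_common_dof:
            best_setting, best_common_dof = s, common
    return best_setting, best_common_dof

# Toy model: DOF peaks at a different sigma for each pitch, so no
# single pitch's optimum is the right answer for the whole set.
def toy_dof(sigma, pitch_nm):
    return max(0.0, 0.5 - abs(sigma - 0.004 * pitch_nm))

settings = [0.5, 0.6, 0.7, 0.8, 0.9]
cutlines = [130, 180, 220]  # pitches in nm
best, common_dof = optimize_illumination(settings, cutlines, toy_dof)
```

A real optimizer sweeps multi-dimensional illumination parameters (e.g. QUASAR inner/outer sigma and opening angle) with a calibrated lithography model, but the min-over-cutlines objective is the same idea.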
Poster Session
icon_mobile_dropdown
Modification of existing chip layout for yield and reliability improvement by computer-aided design tools
Mu-Jing Li, Suryanarayana Maturi, Pankaj Dixit
A CAD flow has been developed to modify an existing large-scale chip layout to enforce the redundant-via design rules, improving yield and reliability. The flow operates on each metal-via pair from the bottom up to correct redundant-via rule violations. It divides a large, complex design into cells so that multiple processes can work concurrently, as if each were working at the top level, to reach the goal in a reasonable time.
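The per-via check at the core of such a flow can be sketched as follows. The geometry representation, via pitch, and coverage test are simplifications invented for illustration (a real flow must check enclosure on both metal layers, spacing to neighbors, and many other rules):

```python
def add_redundant_vias(vias, metal_shapes, pitch):
    """For each single via at (x, y), try placing a second via at
    +pitch in x, then +pitch in y, keeping the first candidate that
    still lands inside the enclosing metal. metal_shapes is a list of
    (x0, y0, x1, y1) rectangles; a point is 'covered' if some
    rectangle contains it. Returns the list of added vias."""
    def covered(x, y):
        return any(x0 <= x <= x1 and y0 <= y <= y1
                   for (x0, y0, x1, y1) in metal_shapes)
    added = []
    for (x, y) in vias:
        for dx, dy in ((pitch, 0), (0, pitch)):
            if covered(x + dx, y + dy):
                added.append((x + dx, y + dy))
                break  # one redundant via per original is enough
    return added

metal = [(0, 0, 1.0, 0.2)]  # a single horizontal metal strap
# First via has room to its right; second is too close to the strap end.
new_vias = add_redundant_vias([(0.1, 0.1), (0.95, 0.1)], metal, 0.2)
```

Partitioning the design into cells and running this check per cell in parallel processes is what lets the flow scale to a full chip, as the abstract describes.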
In-Depth Seminar
icon_mobile_dropdown
Applications of image diagnostics to metrology quality assurance and process control
The purpose of this paper is to define standard methods for effective and efficient image-based dimensional metrology for microlithography applications in the manufacture of integrated circuits. This paper represents a consensual view of the co-authors, not necessarily in total agreement across all subjects, but in complete agreement on the fundamentals of dimensional metrology in this application. Fundamental expectations in the conventional comparison-based metrology of width are reviewed, with its reliance on calibration and standards, and how it differs from metrology of pitch and image placement. We discuss the wealth of a priori information in an image of a feature on a mask or a wafer. We define estimates of deviations from these expectations and their applications to effective detection and identification of measurement errors attributable to the measurement procedure or the metrology tool, as well as to the sample and the process of its manufacture. Although many individuals and organizations already use such efficient methods, industry-wide standard methods do not exist today. This group of professionals expects that, by placing de facto standard methodologies into the public domain, we can help reduce the waste and risks inherent in a "spontaneous" technology build-out, thereby enabling a seamless proliferation of these methods by equipment vendors and users of dimensional metrology. Progress in this key technology, with the new dimensional metrology capabilities enabled, leads to improved performance and yield of IC products, as well as increased automation and manufacturing efficiency, ensuring the long-term health of our industry.
Design, Design Objectives, and Validation
icon_mobile_dropdown
OPC methods to improve image slope and process window
In this paper, we use the gradient of the image slope and gradient of the edge placement error (EPE) in order to improve both slope and EPE during OPC. The EPE gradient taken with respect to edge position is normally called MEEF or more generally, the MEEF matrix. Use of the gradient of image slope with respect to change in edge position is a relatively new concept, introduced by Granik as the “contrast matrix”. Whereas traditional OPC techniques focus on EPE alone (pattern fidelity), we broaden the scope of OPC to maximize slope for improved image robustness and to maximize process window.
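The feedback structure of such an OPC loop can be sketched in one dimension. Here the scalar `gain` stands in for inverting the diagonal of the MEEF matrix, and the linear "process" model is a toy invented for illustration, not the paper's lithography simulator:

```python
def opc_iterate(edges, simulate_epe, steps=10, gain=0.5):
    """First-order OPC loop: at each step, simulate the edge placement
    error (EPE) for every edge, then move each edge against its error,
    scaled by gain. simulate_epe(edges) -> list of per-edge EPEs
    (printed position minus target position)."""
    edges = list(edges)
    for _ in range(steps):
        epes = simulate_epe(edges)
        edges = [e - gain * epe for e, epe in zip(edges, epes)]
    return edges

# Toy process: each printed edge lands at 0.8x its mask position, so
# the loop must bias the mask edges outward to hit the drawn targets.
targets = [100.0, 200.0]

def toy_epe(edges):
    return [0.8 * e - t for e, t in zip(edges, targets)]

corrected = opc_iterate(list(targets), toy_epe, steps=50, gain=1.0)
# Converges toward targets / 0.8, i.e. roughly [125.0, 250.0].
```

Extending the update to use the full MEEF matrix (cross-terms between neighboring edges) and a second objective on image slope, as the paper proposes, changes the step computation but not this basic fixed-point structure.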