Application of statistical metrology to reduce total uncertainty in the CD SEM measurement of across-chip linewidth variation
Author(s):
Kevin M. Monahan;
Randy A. Forcier;
Waiman Ng;
Suresh Kudallur;
Harry Sewell;
Herschel M. Marchman;
Jerry E. Schlesinger
Statistical metrology can be defined as a set of procedures to remove systematic and random gauge error from confounded measurement data for the purpose of reducing total uncertainty. We have applied these procedures to the determination of across-chip linewidth variation, a critical statistic in determining the speed binning and average selling price of advanced microprocessors, digital signal processors, and high-performance memory devices. The measurement data were obtained from two sources: a high-throughput CD-SEM and an atomic force microscope. We found that the high throughput of the CD-SEM permitted the additional measurements required for statistical metrology and heterogeneous gauge matching.
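As a rough editorial illustration of the variance-subtraction idea behind statistical metrology (a minimal sketch, not the authors' procedure; all data are synthetic), the gauge variance estimated from static repeats can be removed from the confounded across-chip variance:

import numpy as np

cd_measured = np.random.normal(250.0, 4.0, size=200)    # measured CDs across a chip (nm)
repeats = np.random.normal(250.0, 1.5, size=(30, 5))    # static repeats on one feature

var_total = np.var(cd_measured, ddof=1)                  # confounded variance
var_gauge = np.mean(np.var(repeats, axis=1, ddof=1))     # pooled gauge (precision) variance
var_process = max(var_total - var_gauge, 0.0)            # deconfounded ACLV estimate
print(f"ACLV (3-sigma) = {3 * np.sqrt(var_process):.1f} nm")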
Fourier transform feedback tool for scanning electron microscopes used in semiconductor metrology
Author(s):
Michael T. Postek Jr.;
Andras E. Vladar;
Mark P. Davidson
The utility of the sharpness concept for metrology scanning electron microscopes (SEM), as implemented through the Fourier transform technique, has been clearly demonstrated and documented. The original methods for sharpness analysis were labor-intensive and therefore not suited to industrial applications such as semiconductor integrated circuit production. We have integrated these techniques into an easy-to-use, stand-alone software package called SEM Monitor, which makes the analysis straightforward and, moreover, can serve as a prototype for integration into a production tool. This paper will present the general philosophy of the system and analytical data taken from both laboratory and production instruments to prove both the relevance of the sharpness concept and the usefulness of this tool for SEM performance monitoring.
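For illustration, a minimal FFT-based sharpness score in the spirit of this approach (the cutoff and the random test image are assumptions, not the SEM Monitor algorithm): a sharper image retains a larger fraction of its spectral power at high spatial frequencies.

import numpy as np

def fft_sharpness(image, cutoff=0.25):
    """Fraction of spectral power above a normalized radial frequency cutoff."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ny, nx = image.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    radius = np.hypot(fx, fy)                      # normalized spatial frequency
    return spectrum[radius > cutoff].sum() / spectrum.sum()

image = np.random.rand(512, 512)                   # stand-in for an acquired SEM frame
print(f"sharpness score = {fft_sharpness(image):.3f}")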
Survey of scanning electron microscopes using quantitative resolution evaluation
Author(s):
Gilles L. Fanget;
Herve M. Martin;
Brigitte Florin
Critical dimension scanning electron microscope (CD SEM) resolution impacts the quality of measurements. Previous works have proposed two-dimensional fast Fourier transform image analysis to obtain an operator-free evaluation of the resolution. This method, combined with a well-suited sample, leads to a powerful way to monitor the SEMs. Daily use enables us to quantify the effect of parameters that were previously hidden. It detects the need for service and is an unbiased test for the equipment.
Statistical verification of multiple CD-SEM matching
Author(s):
Doreen Erickson;
Neal T. Sullivan;
Richard C. Elliott
A method for monitoring and improving CD SEM system matching performance using Duncan's Multiple Range Test is presented using results obtained from KLA 8000 systems. The demonstrated benefits of this method include: eliminating the need for a 'mother' system to which all others are matched; providing the capability to analyze a large number of systems simultaneously; identifying poorly performing systems; and providing the statistical significance of the result. Sample plan considerations are discussed and methods to minimize the effect of both sample degradation and undesired sources of variation are presented. A graphical method for analyzing the output of Duncan's Multiple Range Test is developed and applied to process control. Use of a fileserver to ensure job recipe consistency across tools, and to enable the elimination of slope and offset correction factors, lays the groundwork for improvements in system monitoring capability. Improvements of greater than a factor of two in system matching are demonstrated, and long-term CD SEM matching of less than 10 nm on average across critical process layers is achieved.
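A minimal sketch of a Duncan-style multiple-range comparison of CD-SEM tool means (our simplified reading, using the studentized range with Duncan's protection levels; the tool data are synthetic, and the production implementation certainly differs in detail):

import numpy as np
from scipy import stats

def duncan_flag_pairs(samples, alpha=0.05):
    """Return index pairs of tools whose mean CDs differ by more than Duncan's critical range."""
    k = len(samples)
    n = min(len(s) for s in samples)                       # balanced-design assumption
    means = np.array([np.mean(s) for s in samples])
    df_err = sum(len(s) - 1 for s in samples)
    ms_err = sum((len(s) - 1) * np.var(s, ddof=1) for s in samples) / df_err
    order = np.argsort(means)
    flagged = []
    for i in range(k):
        for j in range(i + 1, k):
            p = j - i + 1                                  # number of ordered means spanned
            alpha_p = 1 - (1 - alpha) ** (p - 1)           # Duncan's protection level
            crit = stats.studentized_range.ppf(1 - alpha_p, p, df_err) * np.sqrt(ms_err / n)
            if means[order[j]] - means[order[i]] > crit:
                flagged.append((int(order[i]), int(order[j])))
    return flagged

tools = [np.random.normal(mu, 2.0, 10) for mu in (250.0, 251.0, 258.0)]   # three CD-SEMs, nm
print(duncan_flag_pairs(tools))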
Effect of processing on the overlay performance of a wafer stepper
Author(s):
Peter Dirksen;
Casper A. H. Juffermans;
A. Leeuwestein;
Kees A. H. Mutsaers;
Tom A. M. Nuijs;
Rudy J. M. Pellens;
Robert Wolters;
Jack Gemen
The effects of resist spinning, aluminum sputtering and chemical mechanical polishing on the observed alignment position in ASML wafer steppers are presented. Vector maps of the process induced alignment shifts are shown for various processing conditions. The deposition experiments are compared with simulations and a specially designed alignment system modeling program.
Improving metrology signal-to-noise on grainy overlay features
Author(s):
Arnold W. Yanof;
Woody Windsor;
Russ Elias;
John N. Helbert;
Cameron Harker
High-temperature metal deposition produces large grain size and a highly visible surface morphology due to grain boundaries. When an interconnect layer photoresist pattern is aligned, grainy metal results in noisy signals from optical metrology equipment. The overlay metrology tool hardware and software configuration and target design must be optimized to obtain the best possible signal-to-noise. A powerful metric is developed herein to single out the noise component due to overlay target image distortions. This methodology is suitable for a production environment. A variety of techniques based upon the target noise metric, including designed experiments, are employed to optimize the overlay measurement configuration.
Improving the accuracy of overlay measurements through reduction in tool- and wafer-induced shifts
Author(s):
Moshe E. Preil;
Bert F. Plambeck;
Yoram Uziel;
Hao Zhou;
Matthew W. Melvin
The accuracy of overlay measurements is negatively impacted by asymmetries in the wafer targets and in the metrology system optics. These asymmetries lead to spurious shifts in the registration data which are referred to as tool-induced shift (TIS) and wafer-induced shift (WIS). In practice, there is always some interaction between the optics and the wafer, making it difficult to separate the errors into specific TIS and WIS components. As a result, the tool- and wafer-induced errors are usually confounded together and simply referred to as TIS. Overlay metrology systems typically attempt to quantify the TIS by measuring sample wafers at 0 and 180 degree orientations. The observed mean difference between measurements taken in these orientations is taken as an estimate of the TIS error. These TIS calibration values are often applied to all subsequent wafers of the same type, with the assumptions that (1) the tool contribution remains constant and (2) all wafers of the same type have identical asymmetries, and therefore identical contributions to TIS errors. In most cases, the TIS error is further assumed to be constant across the entire wafer, even though processes such as chemical mechanical polishing are known to induce asymmetries which vary systematically across the wafer. One method to reduce measurement uncertainty would be to calibrate for TIS on each wafer and at every measurement site. This solution would drastically reduce the throughput of existing overlay metrology tools. We present two potential solutions to this problem, one involving modified system software, the other utilizing a unique new measurement methodology. The impact on throughput and the improvement in overlay accuracy for each approach will be discussed, and data will be presented showing the advantages and drawbacks of each technique.
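A minimal sketch of the 0/180-degree separation described here (sign conventions vary between tools and are an assumption):

def tis_separate(ovl_0_deg, ovl_180_deg):
    tis = (ovl_0_deg + ovl_180_deg) / 2.0        # tool-induced shift estimate
    corrected = (ovl_0_deg - ovl_180_deg) / 2.0  # TIS-corrected overlay
    return tis, corrected

print(tis_separate(12.0, -4.0))   # e.g. 4.0 nm TIS, 8.0 nm corrected overlay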
Improvement of alignment accuracy for scaled exposure field
Author(s):
Satoshi Nakajima;
Makoto Tanigawa;
Akira Ishihama;
Keizo Sakiyama
As design rules shrink, improvement of the overlay accuracy within the exposure field becomes important. We traced the transition of the exposure field size for each process step, and found that the LOCOS oxidation has a fairly large effect on the expansion of the exposure field. For LOCOS oxidation thicknesses in the range of 400-1000 nm, the expansion ratio was about 3.8-4.2 ppm and is proportional to the oxidation thickness within that range. For an exposure field with a side length of 20 mm, this value is equal to 0.08 micrometers, so the overlay error at the corners of the exposure field amounts to 0.04 micrometers. This amount is not negligible for the quarter-micron design rule. For the purpose of improving the overlay accuracy for the field expansion caused by LOCOS oxidation, we investigated the method of reducing the projection magnification for the isolation layer. With a magnification reduction of -3 ppm, the overlay accuracy was 0.065 micrometers as the 3 sigma value, improved to half that of the control wafers exposed with a reduction of 0 ppm.
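A quick check of the arithmetic quoted in this abstract (the numbers come from the abstract; the calculation itself is editorial):

expansion_ppm = 4.0                                       # ~3.8-4.2 ppm for 400-1000 nm LOCOS oxide
field_side_um = 20.0e3                                    # 20 mm exposure field side, in micrometers
growth_um = expansion_ppm * 1e-6 * field_side_um          # full-side field growth: ~0.08 um
corner_error_um = growth_um / 2.0                         # displacement at a field corner: ~0.04 um
print(growth_um, corner_error_um)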
Method to characterize overlay tool misalignments and distortions
Author(s):
Richard M. Silver;
James E. Potzick;
Fredric Scire;
Christopher J. Evans;
M. McGlauflin;
Edward Kornegay;
Robert D. Larrabee
A new optical alignment artifact under development at NIST is described. This artifact, referred to as a stepped microcone, is designed to assist users and manufacturers of overlay metrology tools in the reduction of tool-induced measurement errors. We outline the design criteria and diamond turning lathe techniques used for manufacturing this structure. The alignment methods using this artifact allow the separation of error components associated with the optical system or the mechanical positioning systems as encountered when performing measurements in different focal planes. Although some difficulties have been encountered in the actual diamond turning process, the data presented show some improvements with the more recent prototypes which indicate that this method of fabrication will be useful. Photometer scan data and CCD image acquisition hardware show a significant optical response at the step edges of these structures. Initial analysis of the optical response of these edges shows sensitivity to the material used and the details of the manufacturing processes.
Thinking small: challenges for metrology at century's end
Author(s):
William H. Arnold
This paper will consider the challenges facing linewidth metrology as devices shrink to the 100 nm level and below, forcing all of us to 'think small'. Significant improvements are needed in low-voltage SEM resolution and measurement reproducibility. The applications of electrical probe metrology should be expanded through clever construction of test devices. Atomic force microscopy offers a novel way to measure feature size, as well as wall profiles and material thicknesses, but suffers from slow scan rates and data acquisition cycles. Advances in AFM need to address more rapid CD measurements and real-time imaging.
Effect of reticle bias on isofocal process performance at subhalfmicron resolution
Author(s):
Brian Martin;
Graham G. Arthur
This paper investigates, by computer simulation, the effect of reticle bias on the isofocal behavior displayed by a range of feature types down to 0.4 micron resolution. All simulations are in the developed resist image and exploit resist and development parameters of a contemporary i-line positive resist. Results show that, with the exception of isolated lines, the application of reticle bias is ineffective in achieving a corresponding shift in the isofocal dimension.
Simulation of subhalfmicron-mask defect printability at 1X reticle magnification
Author(s):
Warren W. Flack;
Gary Newman;
Dan L. Schurz
There has been considerable attention given to the printability of reticle defects and their impact on wafer yields. Over the last year the printability risk from small defects increased due to the wider application of optical proximity correction structures and the inclusion of more phase-shifting reticles. There have been several simulation studies on the printability of sub-halfmicron defects using lens and illumination parameters of 5X reduction steppers. Since submicron 1X projection systems are being incorporated into numerous fabrication lines, there is a clear need to determine if these systems show similar sensitivity to sub-halfmicron defects as reduction steppers. Earlier experimental work examined the printability of several classes of sub-halfmicron (0.25 micrometer) defects on a submicron 1X stepper. To extend this work, a 3D optical lithography simulation tool has been employed to predict the printability of various reticle defect scenarios. Experimental data were used to validate the 3D simulator by comparing modeling data to SEM measurements of wafers exposed with a reticle containing programmed clear pinhole and opaque pindot defects. A statistically designed simulation study was performed to quantify the critical dimension variation resulting from defects of varying size, proximity to a feature edge, and variation in the pitch of the impacted line/space features. An additional statistically designed simulation was then used to predict the printability behavior of defects relative to different feature sizes over a range of numerical aperture and partial coherence settings applicable to a 1X lens design. Finally, the impact of defect length and width on printability was characterized for rectangular defects over a range of sizes. Overall, this analysis enhances the understanding of the relationship between reticle defects and 1X projection optics and allows for determination of optical reticle defect specifications for cost-effective lithography applications.
Simulating photomask edge roughness and corner rounding
Author(s):
Konstantinos Adam;
Robert John Socha;
Thomas V. Pistor;
Andrew R. Neureuther
Corner rounding and edge roughness of a rectangular opening in a glass-chrome mask are simulated with TEMPEST. The intensity patterns on the image plane are extracted and compared for these defects at several degrees of fabrication-induced imperfection. A 4X DUV lithography printing system is assumed with NA = 0.6 and sigma = 0.5. The prototypical geometry simulated was a 4 micrometer x 1 micrometer line on the mask. The results indicate that the rounding of the corners does not decrease the printed area by more than 2 percent for a 0.4 micrometer corner-rounding radius, and that roughness should not be a concern, at least in DUV, since it does not crucially affect the linewidth of the printed area.
Investigation of the effects of charging in SEM-based CD metrology
Author(s):
Mark P. Davidson;
Neal T. Sullivan
Scanning electron microscopes are considered the most likely tool for future CD metrology down to 0.1 micron linewidths and below. Charging effects on insulating materials are a long-standing problem for electron microscopes, and shrinking design rules are making the measurement errors caused by charging more significant. In this paper a model is proposed which incorporates charging effects into a Monte Carlo simulation model. The model stems from the notion of beam-induced conductivity, an established phenomenon whereby an insulator becomes conducting for a brief period of time after being hit by a primary electron. The insulator becomes conducting only within the interaction volume of the primary electron. After multiple scans of the primary beam have occurred, it can therefore be expected that, because of the transient beam-induced conductivity, the resulting charge distribution will be such as to create an equipotential surface wherever significant primary beam dose has accumulated. This concept is applied to resist by treating the top region of the resist as a negatively charged equipotential region. The substrate is given a different potential; in general, different materials can be expected to have different potentials. One important consequence is that the corners of the resist line, if they are sharp, have strong electric fields and repel the beam electrons. We calculate the electrostatic fields given the resist geometry, then calculate the beam deflection caused by this field, remap Monte Carlo simulation data to fold in this effect, and finally compare with experimental data to see if this charging effect can account for the apparent resolution degradation that occurs at the edges of resist lines with scanning electron microscopes.
High-precision calibration of a scanning probe microscope (SPM) for pitch and overlay measurements
Author(s):
Donald A. Chernoff;
Jason D. Lohr;
Douglas P. Hansen;
Michael Lines
A general-purpose SPM can function as a metrology SPM when used with a new type of calibration standard and new data analysis software. The calibration standard is a 288-nm pitch, 1D holographic grating. The holographic exposure process assures uniform feature spacing over the entire specimen area, with an expected accuracy of 0.1 percent. We developed new software for data analysis and used it to diagnose and correct the residual scan nonlinearity of a standard NanoScope SPM. We improved the differential non-linearity of a 10 micron scan from 6.7 percent to 1.1 percent, and we improved the integral non-linearity from 0.5 percent to 0.04 percent. We then applied the improved instrument to gauge feature spacings on magnetic disks, integrated circuits, and optical disks.
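As an illustration only (synthetic positions, and the authors' exact definitions may differ), differential and integral nonlinearity can be derived from measured line positions of the 288-nm pitch grating roughly as follows:

import numpy as np

pitch_nm = 288.0
nominal = np.arange(35) * pitch_nm                        # ~10 micron scan
measured = nominal + 5.0 * np.sin(nominal / 2000.0)       # synthetic scanner distortion

local_pitch = np.diff(measured)
dnl = np.max(np.abs(local_pitch - pitch_nm)) / pitch_nm                     # differential nonlinearity
inl = np.max(np.abs(measured - nominal)) / (nominal[-1] - nominal[0])       # integral nonlinearity
print(f"DNL = {dnl:.2%}, INL = {inl:.2%}")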
Dimensional metrology at the nanometer level: combined SEM and PPM
Author(s):
Michael T. Postek Jr.;
Huddee J. Ho;
Harrison L. Weese
The National Institute of Standards and Technology (NIST) is currently exploring the potential afforded by the incorporation of a commercial proximal probe microscope, operating in the scanning tunneling or atomic force mode, into a high-resolution field emission scanning electron microscope (SEM). This instrument will be used in the development of NIST-traceable standards for dimensional metrology at the nanometer level. The combination of the two microscopic techniques provides high-precision probe placement, the capability of measuring and monitoring the probe geometry, monitoring of the scanning of the probe across the feature of interest, and an ability for comparative microscopy. The integration of the commercial instrument is the first step in the development of a custom NIST integrated SEM/SxM metrology instrument. This paper presents early results regarding the integration of the two instruments and the application of these instruments to the development of SRM 2090 and the SEM sharpness standard.
Scatterometric process monitor for silylation
Author(s):
Shoaib H. Zaidi;
John Robert McNeil;
S. Sohail H. Naqvi
The silylation step in the top surface imaging process has been difficult to monitor and characterize for lack of appropriate metrology tools. Utilizing scatterometry to measure silylated wafers, we report successful monitoring of processing effects. Wafers were manufactured under nominally identical processing conditions. Applying scatterometry, we are able to discern location-dependent variations within wafers. In addition, wafer-to-wafer variations are also observed. Both of these variations are detrimental to yield. Variations in processing conditions cause modifications and perturbations in the gratings, and different gratings diffract light in a dissimilar manner. Processing conditions and their effects on the wafers are deduced from these measurements using computational analysis. This information is used to detect unwanted variations in processing conditions so that corrective responses can be implemented. This technique is rapid, non-destructive, and sensitive to changes introduced by the silylation process.
Importance of wafer flatness for CMP and lithography
Author(s):
Yuan Zhang;
Lucian Wagner;
Peter Golbutsov
Chemical mechanical planarization (CMP) is aimed at planarizing wafer surfaces in order to meet the tightening depth-of-focus requirements for advanced lithography. A simple method is introduced which uses the site flatness requirement from the 1994 National Technology Roadmap for Semiconductors as a criterion to qualify post-CMP wafer flatness. Wafer dimensional data measured on a capacitance gauge were converted into local flatness with different site sizes according to the roadmap. The resulting site flatness was then subtracted from the required flatness threshold. The results suggest that current CMP technology improves wafer flatness from a 0.35 micrometer technology point of view. As the design rules shrink, however, more than half of the sites do not meet the 0.25 micrometer lithographic requirements even though there are flatness improvements due to CMP. Thus, much flatter wafers and more effective planarization technologies are needed to meet the challenges of the next device generations.
Plasma antireflective coating optimization using enhanced reflectivity modeling
Author(s):
Kevin D. Lucas;
Jamie A. Vasquez;
Ajay Jain;
Stanley M. Filipiak;
Tam Vuong;
Charles Fredrick King;
Bernard J. Roman
An improved method is presented for the optimization of plasma-deposited bottom inorganic anti-reflective coatings (ARCs). These ARCs have shown the capability to improve photolithography process margins through reduction of substrate reflectivity while addressing integration issues. However, the ability to vary plasma ARC optical properties through deposition conditions has led to increased complexity of film stack optimization. We present simple but effective enhanced modeling methods for reducing the effort required to properly tune plasma ARC optical conditions and optimize complex film stacks incorporating these materials.
Characterization of real particle size for the process particle monitor using laser surface scanners
Author(s):
Yoko Miyazaki;
Toshiaki Mugibayashi;
Masahiko Ikeno
To prevent particle generation in a process and to reduce the number of killer-size particles, it is obviously important to obtain accurate information on particle size. The most popular tool for process particle monitoring is a laser surface scanner, whose detection sensitivity is defined by calibration using PSL standard particles. We report the difference between the real size of particles on process monitor wafers and the size measured by the laser surface scanner using a PSL calibration curve. For example, the real particle size is 14 micrometers, but machine A displays 164 micrometers and machine B displays 4 micrometers on an AlCu-deposited wafer. The size displayed by machine A is smaller than our measured size in the range 0.1 to 10 micrometers; beyond 10 micrometers, the displayed size is larger. To identify the source of particles, and to reduce and keep the number of killer-size particles below the upper control limit, a laser surface scanner should display a more accurate particle size corresponding to what is actually seen on the deposition layer.
Characterization of defect detection schemes using rigorous 3D EM field simulation
Author(s):
Aaron L. Swecker;
Andrzej J. Strojwas;
Ady Levy;
Bobby R. Bell
Through the use of a physically based electromagnetic field simulator (METRO), various defect detection schemes are investigated for the post-CMP inspection application. In particular, detection of the filled tungsten micro-score or residual, a critical CMP process defect, is evaluated for both open and densely patterned areas. Using METRO, high-resolution bright field is shown to have superior sensitivity to lower-resolution bright field. High-resolution bright field also demonstrates the ability to size the defect. Several illumination bandwidths and optical resolutions are studied in the presence of CMP color noise, or oxide thickness variation. Ultra-broadband bright field detection is shown to have reduced color noise compared to traditional bright field detection. Also, higher-resolution optics show lower color noise than lower-resolution optics. Experimental results are presented that illustrate the optical bandwidth enhancements identified through these simulations.
Application of rigorous topography simulation for modeling of defect propagation/growth in VLSI fabrication
Author(s):
Xiaolei Li;
Mahesh Reddy;
Andrzej J. Strojwas;
Linda Milor;
YungTao Lin
Particulate contamination deposited on silicon wafers is typically the dominant reason for yield loss in VLSI manufacturing. The transformation of contaminating particles into defects and then electrical faults is a very complex process which depends on the defect location, size, material, and the underlying IC topography. A rigorous 2D topography simulator, based on the photolithography simulator METROPOLE, has been developed to allow the prediction and correlation of the critical physical parameters of contamination in the manufacturing process to device defects. The results of a large number of defect samples simulated using the above approach were compared with data gathered from the AMD-Sunnyvale fab line. A good match was obtained, indicating the accuracy of this method, which provides a framework for developing contamination-to-defect propagation/growth macromodels.
Optimizing inline defect monitoring using correlation with electrical failures
Author(s):
Prashant A. Aji;
Arnaud Lanier
This paper describes the work done to optimize product wafer inline monitoring using the KLA 2132 and Tencor 7700 at the SGS-Thomson Rousset facility, using electrical bitmapping as the response. Emphasis was placed on understanding each system's capability and limitations with regard to detecting 'killer defects' as applied to different process steps. In addition, speed of detection as well as signal-to-noise ratio were used as criteria for selecting the monitoring equipment for certain critical process steps. This experiment was carried out using a high-volume product with a ROM code which made up 55 percent of the chip. The inspection was concentrated on detecting defects in this ROM area and then correlating these defects to bitmap failures.
Comparisons of six different intrafield control paradigms in an advanced mix-and-match environment
Author(s):
Joseph C. Pellegrini
The introduction of DUV step-and-scan exposure tools into a mix-and-match manufacturing environment with traditional i-line step-and-repeat systems has presented many unique challenges to lithographic process engineers. One of these challenges has been the development and selection of reliable methods for controlling intrafield pattern overlay registration. We examined a spectrum of overlay control methods and compared the benefits and costs of each. We analyzed six different intrafield overlay control approaches: (1) traditional static tuning to a fixed archive wafer; (2) dedicated stepper routing; (3) static LEMSYS matching; (4) static cluster sorting; (5) global feed-forward control; and (6) combined cluster sorting with feed-forward control. In traditional static tuning, each stepper is calibrated to an arbitrary fixed reference and wafers are allowed to flow freely within the entire stepper population. Dedicated stepper routing imposes a restriction that wafers must return to the same stepper during critical layer processing. With static LEMSYS matching, each stepper is calibrated to an ideal reference that is generated to minimize higher-order intrafield errors. Static cluster sorting uses LEMSYS data to divide steppers into clusters; critical layer exposures for a given wafer are then kept within a local cluster. Feed-forward control attempts to aggressively adapt magnification offsets based upon the known lens signatures of the steppers used to print previous layers. Our comparisons were based upon data from 12 actual exposure systems. The results showed that significant gains in overlay control can be achieved with incremental costs in dollars and complexity. Cost-benefit analysis showed that the more aggressive control techniques, involving feed-forward control, were best suited to large fabs operating near the physical limits of their steppers.
Characterizing overlay registration of concentric 5X and 1X stepper exposure fields using interfield data
Author(s):
Francis G. Goodwin;
Joseph C. Pellegrini
The cost advantages associated with implementing a mix-and-match photolithography process have led to a dramatic increase in the interest in and development of these manufacturing environments. This is especially true for older fabs with high-production lithography tools already in place but technology requirements that have advanced beyond the capability of the tools. For the process engineer, the challenge is to define a method of optimizing the exposure field registration between each of the different imaging systems. In this paper a procedure used to evaluate intrafield and interfield overlay errors between six ASML 5X steppers and sixteen Ultratech 1X steppers is described. With this technique, reticle data, stage registration, and a commercially available software analysis package are used to model pattern displacement of each stepper within this population. Wafers from each stepper are first patterned with nine fields, each consisting of a 9 by 9 array of ASML alignment marks. The X and Y stage coordinates of each alignment mark are then measured using a standard ASML 5500/60 intrafield analysis routine. By spreadsheeting the resulting stage registration data, subtracting the expected or 'ideal' stage positions, and correcting for any reticle pattern shifts, grid and intrafield data are obtained. Using this process a data sheet for each stepper was developed and, once formatted properly, loaded onto the software analysis package for registration modeling. Use of multiple exposure fields per wafer enabled the software to characterize both intrafield and interfield registration by first modeling the grid errors, subtracting these values, and then performing intrafield analysis of the remaining data. Further, by collapsing the intrafield data into a single field, a 'lens fingerprint' of each stepper lens was derived. Using vector subtraction, a direct comparison was made between the lenses of each stepper, and an indexed table of exposure field translation errors was created. The stepper lenses were also sorted from best to worst matches. This approach generated the required 231 paired data sets needed to match each stepper to all others while exposing and measuring only 44 wafers, and required no artifact wafers. Measured evaluation results will be reviewed, and expansion of this procedure to mapping 1X wide-field lenses and matching of non-concentric exposure fields will be discussed.
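For illustration, the grid-error step described above amounts to a linear least-squares fit of translation, magnification, and rotation to the measured-minus-ideal mark displacements, whose residuals then feed the intrafield analysis (a minimal sketch with synthetic data, not the commercial package's model):

import numpy as np

xy = np.array([[x, y] for x in (-60.0, 0.0, 60.0) for y in (-60.0, 0.0, 60.0)])   # ideal grid, mm
x, y = xy[:, 0], xy[:, 1]

tx, ty, mag, rot = 2e-5, -1e-5, 2e-6, 1e-6             # synthetic "true" grid terms
dx = tx + mag * x - rot * y                             # measured-minus-ideal displacements
dy = ty + rot * x + mag * y

A = np.vstack([np.column_stack([np.ones_like(x), np.zeros_like(x), x, -y]),
               np.column_stack([np.zeros_like(y), np.ones_like(y), y, x])])
params, *_ = np.linalg.lstsq(A, np.concatenate([dx, dy]), rcond=None)
print(params)                                           # recovers [tx, ty, mag, rot]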
Overlay measurements and edge detection methods
Author(s):
Alexander I. Zaslavsky
In order to perform an overlay measurement, one must use some edge detection method. Different edge detection methods have different sensitivity to noise. We define a large class of edge detection methods, and find in this class the methods least sensitive to noise. We consider continuous as well as discrete signals, and use different assumptions about the nature of noise.
Basic challenges of optical overlay measurements
Author(s):
Anatoly Shchemelinin;
Eugene Shifrin;
Alexander I. Zaslavsky
The basic challenges of optical overlay measurements are discussed. It is shown that overlay measurement precision is determined by optical resolution, the signal-to-noise ratio of the measurement system, and properties of the overlay target. Some tips for better overlay target design and hardware improvement are formulated. It is shown that an interferometer-based measurement system allows better accuracy than a brightfield one. It is also shown that, with current measurement techniques, 130 nm design rule requirements can be met.
Characteristics of overlay accuracy after metal CMP process
Author(s):
Young-Keun Kim;
Yong-Suk Lee;
Won-Kyu Lee;
Chul-Gi Ko
In this study, experiments have been performed to obtain the optimal condition of an alignment mark on a wafer which has a low step height after the metal chemical mechanical polishing (CMP) process. The pitch of the alignment mark was fixed, and the duty ratio was varied to obtain the optimum duty ratio of the alignment mark. We also tried to find the best type of alignment sensor and the optimum polarity of the alignment mark. The step height was varied by changing the amount of polishing of tungsten, which is used as a contact plug in these experiments. Coherence probe metrology was employed to measure the overlay accuracy. The alignment mark profile was observed using scanning electron microscopy and atomic force microscopy after metal CMP. The best overlay accuracy was obtained when the duty ratio of the alignment mark is between 0.6 and 1.67 for the three types of alignment sensor. It was also found that the polarity of the alignment mark was not the dominant factor in the overlay accuracy for the TTL-monochromatic method and the heterodyne method. A concave mark is better than a convex mark for the off-axis alignment sensor. As the step height of the alignment mark is lowered by the CMP process, the probability of alignment error increases for the off-axis alignment sensor because of the lowering and dishing of the alignment mark. The overlay accuracy does not change much with the W CMP target for the TTL-monochromatic sensor and the heterodyne alignment sensor at a 4:4 duty ratio.
Stability of glass probe tips for critical dimension measurement
Author(s):
Joseph E. Griffith;
Gabriel L. Miller;
Leslie C. Hopkins;
Charles E. Bryson III;
E. J. Snyder;
J. J. Plombon;
Leonid A. Vasilyev;
Jeffery B. Bindell
One of the fundamental requirements for reliable critical dimension measurement with a scanning probe microscope is stability of the stylus against flexing and against erosion. We report on the wear of an etched optical fiber when scanned across a variety of surfaces. The optical fiber probe tip was used in a novel scanning probe microscope employing a balance beam force sensor.
Unique approach to high-performance magnification calibration
Author(s):
Douglas P. Hansen;
Michael Lines;
Donald A. Chernoff;
Jason D. Lohr
Dimensional calibration standards are an important metrology tool for quality control, inspection, and fault analysis. Tools such as atomic force microscopes (AFM), scanning probe microscopes (SPM), and scanning electron microscopes (SEM) require regular calibration to meet the needs of current and projected production processes. Suitable calibration standards have been expensive, difficult to use, and of limited utility. These limitations were, to a large degree, a result of the fabrication process and the accompanying measurement calibration paradigm. Any new approach to microscope calibration should make calibration easier, less expensive, and more useful. An improved calibration standard would also be amenable to automation of the calibration process for use in production-line instruments. The necessary features include: (1) the ability to calibrate the entire viewing field instead of discrete points; (2) the ability to easily locate and use the calibrated region; (3) the ability to calibrate on the nanometer scale, where the most demanding applications push the state of the art; and (4) significantly reduced specimen costs. There is an alternative production method for calibration specimens which meets the above criteria. It is based on the concept of physically replicating a light interference pattern to provide the essence of an interferometer in a simple calibration specimen. Modern optics technology has reached the point where large-area, very accurate and regular interference patterns in 1 and 2 dimensions can be produced. The basic physics of the process enables the periodicity of these patterns to be specified and controlled to fractions of a nanometer over these very large areas. This large-area interference pattern can be captured in a physical record suitable for viewing under the microscope. The issues affecting the accuracy and utility of this physical record and its preparation for use as a magnification standard will be discussed. Experience in use in AFM applications indicates that calibration samples produced by this method can deliver repeatable accuracy of 1.5 nm if properly employed and analyzed. This methodology can be extended to other imaging microscope technologies.
Novel near-field optical probe for 100-nm critical dimension measurements
Author(s):
Brian R. Stallard;
Sumanth Kaushik
Although the theoretical resolution of a conventional optical microscope is about 300 nm, it is normally difficult to obtain satisfactory critical dimension (CD) measurements below about 600 nm. E-beam technology has been popular for sub-500 nm metrology but also has well-known limitations. Scanning probe and near-field optical methods have high spatial resolution, yet they are ill-suited for routine CD metrology of high-aspect-ratio features because of a combination of short working distances and large tips. In this paper we present the concept and initial modeling results for a novel near-field optical probe that has the potential of overcoming these limitations. The idea is to observe resonance shifts in a waveguide cavity that arise from the coupling of the evanescent field of the waveguide to perturbations beneath the waveguide plane. The change in resonance frequency is detected as a change in the transmission of a monochromatic probe beam through the waveguide. The transmitted intensity, together with the appropriate signal processing, gives the topography of the perturbation. Our model predicts that this probe is capable of determining the width of photoresist lines as small as 100 nm. The working distance, at about 100 to 250 nm, is much more practical than that of other probe techniques.
Statistical measure for the sharpness of SEM images
Author(s):
Nien-Fan Zhang;
Michael T. Postek Jr.;
Robert D. Larrabee;
Andras E. Vladar;
William J. Keery;
Samuel N. Jones
Fully automated or semi-automated scanning electron microscopes (SEM) are now commonly used in semiconductor production and other forms of manufacturing. Testing and proving that the instrument is performing at a satisfactory level of sharpness is an important aspect of quality control. The application of Fourier analysis techniques to the analysis of SEM images is a useful methodology for sharpness measurement. In this paper, a statistical measure known as multivariate kurtosis is proposed as a useful measure of the sharpness of SEM images. Kurtosis is designed to be a measure of the degree of departure of a probability distribution from the Gaussian distribution. It is a function of both the fourth and the second moments of a probability distribution. For selected SEM images, the two-dimensional spatial Fourier transforms were computed. Then the bivariate kurtosis of this Fourier transform was calculated as though it were a probability distribution, and that kurtosis was evaluated as a characterization tool. Kurtosis has the distinct advantage that it is a parametric measure and is sensitive to the presence of the high spatial frequencies necessary for acceptable levels of sharpness. The applications of this method to SEM metrology will be discussed.
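A minimal sketch of the measure described here, treating the normalized 2-D power spectrum as a probability distribution over spatial frequency and computing a Mardia-style multivariate kurtosis (the normalization details are our assumptions, not the authors' exact recipe):

import numpy as np

def spectral_kurtosis(image):
    p = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    p /= p.sum()                                               # treat the spectrum as probabilities
    ny, nx = image.shape
    fy, fx = np.meshgrid(np.fft.fftshift(np.fft.fftfreq(ny)),
                         np.fft.fftshift(np.fft.fftfreq(nx)), indexing="ij")
    coords = np.column_stack([fx.ravel(), fy.ravel()])
    w = p.ravel()
    mu = w @ coords
    d = coords - mu
    cov = (d * w[:, None]).T @ d                               # frequency-weighted covariance (2x2)
    m2 = np.einsum("ij,jk,ik->i", d, np.linalg.inv(cov), d)    # squared Mahalanobis distances
    return float(w @ m2 ** 2)                                  # bivariate (Mardia) kurtosis

print(spectral_kurtosis(np.random.rand(256, 256)))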
Automatic classification of spatial signatures on semiconductor wafer maps
Author(s):
Kenneth W. Tobin Jr.;
Shaun S. Gleason;
Thomas P. Karnowski;
Susan L. Cohen;
Fred Lakhani
This paper describes spatial signature analysis (SSA), a cooperative research project between SEMATECH and Oak Ridge National Laboratory for automatically analyzing and reducing semiconductor wafermap defect data to useful information. Trends towards larger wafer formats and smaller critical dimensions have caused an exponential increase in the volume of visual and parametric defect data which must be analyzed and stored, therefore necessitating the development of automated tools for wafer defect analysis. Contamination particles that did not create problems with 1 micron design rules can now be categorized as killer defects. SSA is an automated wafermap analysis procedure which performs a sophisticated defect clustering and signature classification of electronic wafermaps. This procedure has been realized in a software system that contains a signature classifier that is user-trainable. Known examples of historically problematic process signatures are added to a training database for the classifier. Once a suitable training set has been established, the software can automatically segment and classify multiple signatures from a standard electronic wafermap file into user-defined categories. It is anticipated that successful integration of this technology with other wafer monitoring strategies will result in reduced time-to-discovery and ultimately improved product yield.
Advanced inspection for 0.25-um-generation semiconductor manufacturing
Author(s):
Arye Shapiro;
Thomas James;
Brian M. Trafas
The goal of SEMATECH Joint Development Project J101 was to accelerate the development of a patterned wafer inspection tool to meet the sensitivity, throughput, and cost requirements for the 0.25 micrometer and 0.18 micrometer technology generations. To accomplish this goal, SEMATECH partnered with Tencor Instruments to develop the Tencor Surfscan Advanced Inspection Tool (AIT). This tool is capable of inspecting 100-200 mm wafers with random and repetitive patterns for both particulate contamination and pattern defects. This capability, combined with its high throughput, makes the Surfscan AIT useful as an in-line process monitor. In order to determine its performance on product wafers in a manufacturing environment, a beta version of the AIT system was evaluated at Advanced Micro Devices Fab 25 in Austin, Texas. The evaluation was conducted according to the SEMATECH qualification plan. The final tool development, IRONMAN testing, and beta-site evaluation will be described in this paper.
Improved defect detection performance at metal and contact etch levels using a new optical-comparison segmented-autothreshold technology
Author(s):
James F. Garvin Jr.;
Kevin Keefauver;
Mark Tinker
This report summarizes the results of a beta-site evaluation of KLA's Segmented Auto-Threshold technology, more commonly known as S.A.T., performed in TI's DP1 development facility from March to August 1996. This technique was primarily designed to eliminate the effect of nuisance defects at the metal etch levels and thereby allow the KLA machine to improve its sensitivity at these levels. Two S.A.T. recipe optimization techniques, basic and customized, were compared to the standard mean/range techniques being used today in manufacturing at contact etch, metal-1 etch, and metal-4 etch. The basic S.A.T. technique is a more simplified technique which is designed to be more easily implementable in a manufacturing environment but would probably not demonstrate as good a sensitivity as customized recipes. Customized recipes offer the promise of much better sensitivity, but optimization is extremely involved and not readily implementable in manufacturing at this point. Results indicated that customized S.A.T. recipes captured several times more defects than seen by the traditional mean/range recipes at the metal etch levels. Conversely, basic S.A.T. showed little improvement over mean/range recipes at these levels. However, for contact etch only, basic S.A.T. did show significant improvement, finding almost three times as many defects as mean/range. For the metal etch levels, the significant increase in the number of defects caught with the customized S.A.T. recipes was observed for almost every defect type, including a very important category called 'shorts', which are particles whose positions on the die indicate a high probability of causing a probe failure. In addition to comparison data for defects detected on wafers randomly sampled from production lots at each of the three process levels, repeatability data from dedicated product wafers are also presented for both customized S.A.T. and basic S.A.T. recipes. The standard deviations of the repeatability data for both the customized and the basic S.A.T. recipes were as good as or better than those seen with the mean/range recipes. Non-S.A.T. data are also shown for both before and after the S.A.T. hardware/software installation, which verify no loss in system sensitivity due to the changes.
Detecting lithography's variations: new types of defects for automatic inspection machines
Author(s):
Paul Gudeczauskas;
Erez Ravid
Photolithography for silicon semiconductor device manufacturing is a crucial technology in the race to denser and more highly integrated circuits. To achieve an acceptable wafer throughput, most steppers use a combination of global and site-to-site alignment. Focus and exposure are controlled based on a limited number of fields. Post-develop evaluation of the pattern quality is typically limited to a few fields on a few wafers. Focus and exposure shifts cause small variations in CDs that rapidly become critical yield limiters. Trends toward larger stepper fields and wafers make even very small variations in magnification, distortion, rotation, and translation of the pattern significant. Rapid closed-loop feedback of a photolithography problem prior to etch is critical for measuring and controlling stepper performance and reducing wafer scrap and yield loss. In this article we demonstrate how sub-micron variations can be quickly detected with a laser scanning tool combined with pixel-to-pixel image processing. The WF-720 automatic defect inspection tool, utilizing a unique PDI configuration, enables detection of minor changes in pattern shapes based on the global pixel population behavior of the distorted patterns on the wafer.
Optical characterization of attenuated phase shifters
Author(s):
Alessandro Callegari;
Katherina Babich
The optical properties of an amorphous hydrogenated carbon film utilized as an attenuated phase shifter were characterized using an n and k Analyzer. This novel instrument computes unambiguous values of the index of refraction n(λ), extinction coefficient k(λ), and film thickness from a reflectance or reflectance/transmittance single scan covering 190 nm to 900 nm. By fitting the scanned curves, values of k as a function of wavelength or energy can be calculated, and the index n is then computed using the Kramers-Kronig relations. Since the n and k Analyzer calculates n, k, and thickness from a single reflectance scan, phase angles can be easily calculated at any given wavelength between 190 and 900 nm. To test the accuracy of this instrument we compared phase angles obtained with the n and k Analyzer against laser interferometry at 257 nm. In this technique, direct phase measurements are obtained by comparing the difference in the optical path of the beams going through the quartz and the film/quartz structure. The agreement between the two techniques was very good, to within a few degrees, for eight of the nine samples analyzed. Interferometer phase errors are conservatively estimated to be around +/- 3 degrees; this includes noise levels as well as day-to-day variations. The agreement between the two techniques thus lies within the experimental errors. This analyzer can therefore give phase angle maps of a blank film on quartz substrates in a relatively short time and nondestructively.
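As a worked illustration of the phase-angle calculation enabled by knowing n and the thickness (standard thin-film relation; the numbers below are illustrative, not the paper's carbon-film data):

import numpy as np

wavelength_nm = 257.0
n_film = 1.9           # assumed real index of the shifter film at 257 nm
d_nm = 143.0           # assumed film thickness

phase_deg = np.degrees(2 * np.pi * d_nm * (n_film - 1.0) / wavelength_nm)
print(f"phase shift = {phase_deg:.0f} degrees")    # ~180 degrees for a pi phase shifter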
Optical diffraction tomography for latent image metrology
Author(s):
Ziad R. Hatab;
Nasir U. Ahmed;
S. Sohail H. Naqvi;
John Robert McNeil
Optical diffraction tomography (ODT) attempts to reconstruct the complex refractive index profile of an object by inverting its backscattered and/or transmitted fields. Owing to its integral formulation of the diffracted field, the inverse scattering problem in ODT, i.e., reconstructing the object from its diffracted field, can be linearized via the Born approximation. The validity range of the Born approximation is limited to weakly scattering objects, or objects whose refractive index distributions are slowly varying and comparable in magnitude to their background. Such constraints are easily met in microlithography when considering the area of latent image metrology. Indeed, latent images are generally characterized by their relatively small and slowly varying refractive indices. An algorithm is presented for reconstructing the refractive index distribution of latent images from their first (+1) and second (+2) reflected diffraction orders at the Bragg angle.
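For reference, a standard statement of the first Born linearization invoked here (our notation, not necessarily the authors'): the scattered field becomes a linear functional of the object function,

u_s(\mathbf{r}) \;\approx\; \int G(\mathbf{r}-\mathbf{r}')\, O(\mathbf{r}')\, u_0(\mathbf{r}')\, d\mathbf{r}', \qquad O(\mathbf{r}) = k_0^2\left[ n^2(\mathbf{r}) - n_b^2 \right],

where u_0 is the incident field, G the background Green's function, and n_b the background index; for the weak, slowly varying index perturbations of a latent image this linear relation can be inverted to recover n(r) from the measured diffraction orders.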
Monte Carlo simulation of charging effects on linewidth metrology
Author(s):
Yeong-Uk Ko
Charging effects have been investigated for the case where the linewidth of an insulator is measured by a scanning electron microscope in secondary electron detection mode at a low accelerating voltage of around 1 kV. The electron generation yield is near unity for most materials under low-voltage conditions, and differs slightly from unity depending on the material and the geometry of the pattern. For insulators, however, such a yield difference leads to locally different charge accumulation which influences the measured linewidth. In this paper the influence of charging effects on linewidth for a PMMA/Si wafer is analyzed by the Monte Carlo method as the operating conditions and geometrical shape are changed.
Resist and etched line profile characterization using scatterometry
Author(s):
Christopher J. Raymond;
S. Sohail H. Naqvi;
John Robert McNeil
In previous applications scatterometry has shown promise as a metrology for several process measurements. The linewidths of both resist and etched features, and the thicknesses of several underlying film layers, have been accurately characterized using the technique. Until recently these results have been obtained by assuming the features being measured possessed a nominally square profile. However, as metrology tolerances shrink in proportion to device dimensions, errors in the measurement technique due to non-square line profiles could become significant. To test the ability of the scatterometry technique to measure non-square profiles, two models have been developed. The first profile model assumes the top and bottom corners of a resist line can be approximated as a segment of some circle with a given radius. With the center of the circle fixed in space by the overall height of the resist and a nominal linewidth, the sidewall of the line is then modeled as the tangent line that connects the two circles. This particular model can accommodate both overhanging (>90 degree) and trapezoidal (<90 degree) sidewalls with just four parameters: the radius of the top and bottom corners, and the nominal top and bottom linewidths. Comparisons between cross-section SEM images and scatterometry profiles using this model will be presented. The second model, which we call the 'stovepipe' model, is a modified version of a simple trapezoid model and has applications to etched features. In this model an etched line is parameterized by assuming the trapezoidal portion of the sidewall starts at some distance below the top of the line, with the top portion of the line remaining square. In this manner an etched profile can be modeled with four parameters: the overall height of the etched line, the nominal etched linewidth, and the overall height and sidewall angle of the trapezoid layer. Once again, scatterometry profile results in comparison to cross-section SEM images will be presented. The use of both of these models has reduced the difference between scatterometry and SEM CD measurements. For example, the average difference of twelve resist CD measurements, when compared to cross-section SEM measurements, improves from 19.3 to 10.1 nm when the full profile model is incorporated.
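A minimal geometric sketch of the four-parameter 'stovepipe' parameterization described above (our reading of the model, for illustration only):

import numpy as np

def stovepipe_profile(height, cd, trap_height, sidewall_deg):
    """Return (x, z) corner points of one half of a symmetric etched line, in nm."""
    run = trap_height / np.tan(np.radians(sidewall_deg))   # lateral run of the sloped portion
    half = cd / 2.0
    x = np.array([0.0, half, half, half + run])            # square top, then trapezoidal base
    z = np.array([height, height, trap_height, 0.0])
    return x, z

x, z = stovepipe_profile(height=400.0, cd=250.0, trap_height=300.0, sidewall_deg=85.0)
print(np.column_stack([x, z]))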
Highly accurate CD measurement with a micro standard
Author(s):
Katsuhiro Sasada;
Nobuyoshi Hashimoto;
Hiroyoshi Mori;
Tadashi Ohtaka
Accurate measurement with CD-SEMs requires the use of a calibrated standard. A new standard, the micro-scale, was developed using laser interferometer lithography and anisotropic chemical etching of Si material, as reported previously. In this paper, we report on a method to control the measurement accuracy of CD-SEMs using the micro-scale. We have studied various factors contributing to measurement errors and have estimated the 95 percent confidence level. We have also carried out 3-pitch measurements of the micro-scale in a fully automated mode and estimated the 95 percent confidence level. We then compared the two 95 percent confidence levels and concluded that the estimation expected from the measurement errors was reasonable.
Advanced FTIR techniques for photoresist process characterization
Author(s):
Ronald A. Carpio;
Jeff D. Byers;
John S. Petersen;
Wolfgang Theiss
Several applications of Fourier transform infrared spectroscopy (FTIR) for the characterization of photoresist thin films are demonstrated. The applications are accurate resist thickness measurements, monitoring of solvent loss during the post-apply bake, determination of the glass transition temperature, and deprotection reaction kinetics. Model-based spectral analysis is applied for the determination of photoresist thickness from mid-IR FTIR spectra and is shown to have a linear correlation with measurements made by UV-visible spectroscopic ellipsometry. Using this capability in conjunction with an external reflection accessory and rapid data acquisition hardware and software, measurements are performed on Shipley SPR-510L photoresist during the post-apply bake step, deriving thickness and solvent loss information. The use of this approach is also explored for making glass transition measurements of an environmentally stable chemical amplification positive photoresist. Finally, in-situ PEB studies are illustrated for APEX-E photoresist. For off-line analysis, an in-sample-compartment mapping accessory is applied to the characterization of multiple open-frame exposure matrices on 200 mm double-side polished wafers.
CMP overlay metrology: robust performance through signal and noise improvements
Author(s):
John C. Podlesny;
Francis Cusack Jr.;
Susan Redmond
Historically, effective overlay registration measurement of chemical mechanical polish (CMP) processes has posed a challenge to optically based metrology systems. This is primarily due to the extreme planarization of CMP, which produces very small transition step heights between adjacent features, resulting in a very low signal differential. The repeatability and accuracy of overlay measurements are intimately related to the signal-to-noise ratio and the signal-to-interference ratio of the acquired data. Measurement performance on CMP overlay targets was improved through enhancement of the optical signal and electronic noise reduction. Hewlett-Packard provided a series of wafers having a range of tungsten deposition thicknesses and CMP processing. These were measured on the IVS-120 fully automated optical metrology system to quantify the effects of the improvements. The result was overlay registration measurement repeatability typically better than 3 nanometers (3 sigma) and tool-induced shift of less than 1 nanometer.
Alternative method for monitoring an in-line CD SEM
Author(s):
Pedro P. Herrera;
Susan A. Dick;
John A. Allgair
Semiconductor manufacturers must ensure that their in-line critical dimension scanning electron microscopes (CD-SEMs) are providing precise and reliable data on a daily basis. As with other process equipment, tool stability and production worthiness are determined by a daily qualification procedure that involves measuring the linewidth on a reference etched wafer and comparing those results to a set target mean. However, repeated exposure to the SEM creates an unacceptable increase in the measured feature's CD. This increase can be disruptive to tool qualification, requires the introduction of new reference wafers, and ultimately limits the tool's availability to production. A new method for daily qualification using a rotating daily job scheme has been developed and employed for monitoring multiple systems at Motorola MOS-13/APRDL. This new procedure allows for better statistical process control, increases the reference wafer's useful life, and provides an easier method of monitoring the tool throughout its lifetime.
E-beam-induced distortions on SiN x-ray mask membrane
Author(s):
Nikolai L. Krasnoperov;
Zheng Chen;
Franco Cerrina
In this work we characterized (1) the dependence of resist stress on exposure dose and (2) in-plane distortions of the mask caused by resist and substrate. A NIST standard ring with a SiN membrane window was used throughout this work. Resist stress was determined by measuring the resonant frequency of a membrane coated with resist. Stress was measured at several doses for SAL605, APEX-E, and PMMA resists. In-plane distortion was measured using an in-situ measurement approach. An array of standard alignment fiducials for the Leica-Cambridge EBMF 10.5 e-beam system was placed directly on the membrane. Also, an array of fiducials was placed on the NIST ring to provide a reference point for the measurements. The positions of the fiducials on the membrane were measured before and after exposure and compared to the position of a common reference point. The magnitude of the displacements agreed with theoretical values for the measured stress coefficients. The accuracy and limitations of the methods used to obtain the distortion data, as well as possible strategies for reducing the in-plane distortions, are also discussed.
Single-feature metrology by means of light scatter analysis
Author(s):
Joerg Bischoff;
Karl Hehl
Show Abstract
In recent years, the prospects of angle-resolved optical scatterometry as an alternative and supplementary measuring technique in micrometrology have been discussed. In a series of publications, the method was shown to be able to meet the challenges of quarter-micron technology and beyond. However, to our knowledge the applications have so far been confined mainly to periodic patterns. In this paper, the extension of the method to the characterization of single features is outlined by means of a theoretical simulation of the simple one-dimensional case. We assume that a laser beam focused down to about one micron in one dimension illuminates the target under investigation, e.g., a resist line on silicon. Because of the finite spot diameter of the light probe, the method may be termed focused-beam optical scatterometry (FBOS). Then, quite similarly to angle-resolved scatterometry (ARS), the diffracted far-field intensity is calculated as a function of angle. The paper shows that the basic tools of grating theory can be taken over; only the convolution of the calculated transfer functions, i.e., the reflection and transmission coefficients for plane-wave incidence, with the angular spectrum of the incident focused beam has to be carried out in addition. The assumed grating period, and accordingly the step width of the angular spectrum, can be chosen so as to prevent numerical artefacts. In this paper, the diffraction calculation is performed by means of the rigorous coupled-wave approach (RCWA). Based on this model, several one-dimensional measuring problems have been investigated by means of multivariate regression and cross validation, with the final goal of assessing the measurement accuracy. To emulate real conditions, the computed light-scatter distribution was degraded by Gaussian noise, and a certain amount of defocus was admitted by shifting the focus of the laser spot vertically. In conclusion, FBOS may be able to enhance both the resolution limit and the measurement accuracy in comparison to imaging optical microscopy.
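A minimal sketch of the convolution step described above is given below; the plane-wave reflection coefficients are a hypothetical placeholder standing in for an RCWA calculation, and the beam parameters are illustrative only.

```python
import numpy as np

# Hypothetical FBOS far-field calculation: convolve assumed plane-wave
# reflection coefficients (which would normally come from an RCWA solver)
# with the angular spectrum of a focused Gaussian beam.
wavelength_um = 0.633
spot_radius_um = 0.5                       # focused spot radius in one dimension
theta = np.linspace(-30.0, 30.0, 1201)     # far-field angles, degrees
dtheta = theta[1] - theta[0]

# Approximate angular amplitude spectrum of the focused beam (assumed Gaussian).
theta_div = np.degrees(wavelength_um / (np.pi * spot_radius_um))
spectrum = np.exp(-(theta / theta_div) ** 2)

# Placeholder plane-wave reflection coefficients of the single feature.
r_plane_wave = 0.3 * np.exp(1j * 5.0 * np.deg2rad(theta))

# Far-field amplitude = convolution of the plane-wave response with the
# incident angular spectrum; the detected intensity is its squared modulus.
field = np.convolve(r_plane_wave, spectrum, mode="same") * dtheta
intensity = np.abs(field) ** 2
print("peak at", theta[np.argmax(intensity)], "degrees")
```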
Novel approach for defect detection and reduction techniques for submicron lithography
Author(s):
Jonathan A. Orth;
Khoi A. Phan;
David Ashby Steele;
Roger Y. B. Young
Show Abstract
Accurate and reproducible microlithography processing is critical for developing smaller and more dimensionally accurate semiconductor structures. As modern microprocessors and memory devices scale down to deep submicron dimensions, defects originating in the microlithography processes become increasingly effective at reducing yield. Careful and efficient methods of measuring the variability of these defect levels using a short-loop monitoring process are essential in controlling the quality of the lithography process for these devices. During the conventional photo process, a defect can result from an external process variable (e.g., manual wafer handling) or an internal one from environmental sources (unclean equipment sets). Others may be related to the process parameters themselves, such as a pattern anomaly, marginal processing by the equipment, or a previous defect on the wafer creating a nucleation site for more defects. Since microlithography defects can arise from a variety of sources, adopting flexible and efficient methods of measuring their effects is essential in maximizing yield. This study discusses the methodologies used to characterize and monitor complete microlithography processing for two distinct cases: one in which the resist is mostly unexposed except for a pattern of contact holes, and one in which most of the resist is exposed, leaving behind a developed pattern of resist lines. These two strategies, when used in conjunction and properly sampled on a defect metrology tool, can provide timely in-line feedback about the nature of possible processing defects. Furthermore, the results of such a short loop may suggest continued short-loop processing involving fewer processing steps to narrow down the source.
Monitoring optical properties and thickness of PECVD SiON antireflective layer by spectroscopic ellipsometry
Author(s):
Carlos L. Ygartua;
Kathy Konjuh;
Shari Schuchmann;
Kenneth P. MacWilliams;
David Mordo
Show Abstract
Processes for PECVD SiON anti-reflective layer (ARL) films are currently being developed for application in DUV lithography. The shorter wavelength allows for higher pattern resolution. Anti-reflective films are needed to reduce thin-film interference effects and reflective notching, which limit the control of critical dimension (CD) variations. The refractive index (n) and extinction coefficient (k) at the exposure wavelength, in addition to the film thickness (t), are needed to predict the film's anti-reflective property. Broadband ultraviolet spectroscopic ellipsometry (SE) is uniquely capable of directly measuring the refractive index dispersion and thickness of single-layer thin films, especially in the critical UV region. The refractive index (RI) dispersion is physically determined by the compositional characteristics of the film. This paper evaluates the robustness of various RI dispersion models in relation to the compositional variations of the SiON films. The n and k values are correlated with the SiON stoichiometry, including the hydrogen concentration obtained from Rutherford backscattering spectrometry measurements. Moreover, it is found that the UV RI values track small stoichiometry variations better than the conventional RI at 633 nm.
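To illustrate why n, k, and t are the quantities needed to predict the anti-reflective behaviour, the sketch below evaluates the standard normal-incidence reflectance of a single absorbing film between resist and silicon; all index and thickness values are assumed for illustration, not measured values from this work.

```python
import numpy as np

def single_film_reflectance(n_ambient, n_film, n_sub, thickness_nm, wavelength_nm):
    """Normal-incidence reflectance of one absorbing film on a substrate.
    Complex indices are written n - i*k, so the phase factor exp(-2i*beta) decays."""
    r01 = (n_ambient - n_film) / (n_ambient + n_film)
    r12 = (n_film - n_sub) / (n_film + n_sub)
    beta = 2.0 * np.pi * n_film * thickness_nm / wavelength_nm
    r = (r01 + r12 * np.exp(-2j * beta)) / (1.0 + r01 * r12 * np.exp(-2j * beta))
    return abs(r) ** 2

# Assumed values: resist over a SiON ARL over silicon at 248 nm.
n_resist = 1.76                # assumed resist index at 248 nm
n_sion = 2.1 - 0.5j            # assumed n - i*k of the SiON ARL
n_si = 1.58 - 3.6j             # approximate silicon index near 248 nm
for t_nm in (20, 25, 30, 35):
    R = single_film_reflectance(n_resist, n_sion, n_si, t_nm, 248.0)
    print(t_nm, "nm ARL ->", round(R, 4))
```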
3D imaging of VLSI wafer surfaces using a multiple-detector SEM
Author(s):
Yaron I. Gold;
Radel Ben-Av;
Mark Wagner
Show Abstract
Measuring and inspecting the topography of very small features during the fabrication of VLSI devices plays a major role in process control and production yield enhancement. Optical and confocal microscopes lack the resolution needed for today's most advanced processes. Atomic force microscopy provides the necessary x, y, and z measurements, but it is relatively slow and is limited by the need to place a probe of finite size into circuit structures. Scanning electron microscopy (SEM) is today's preferred technology for this task. However, traditional SEM images have a 'flat' vertical appearance and do not present an accurate representation of the actual surface topography. Tilted SEMs provide a view of wafer topography, but they are slower and are limited to projected images of the device, which do not always show complete structures such as contact holes. A multiple-detector SEM enables the development of a method for producing high-resolution images of the sample's surface. These images, processed on Silicon Graphics workstations, produce 3D renditions of the surface from different points of view while maintaining the capability of visualizing complete surface features. In this paper we describe this method and provide results, including surface profiles and images obtained from actual wafer samples.
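The paper's reconstruction algorithm is not reproduced here; the sketch below is only a toy illustration of how opposing detector signals can, under a simple shape-from-shading-style assumption, be turned into a relative height profile along one scan line.

```python
import numpy as np

def height_profile(det_left, det_right, pixel_size_nm, gain=1.0):
    """Toy reconstruction along one scan line: assume the local surface slope is
    proportional to the normalized difference of two opposing secondary-electron
    detector signals, then integrate the slope to get a relative height (nm)."""
    det_left = np.asarray(det_left, dtype=float)
    det_right = np.asarray(det_right, dtype=float)
    slope = gain * (det_right - det_left) / (det_right + det_left + 1e-12)
    return np.cumsum(slope) * pixel_size_nm

# Hypothetical detector signals across a raised line.
left = [100, 100, 80, 60, 80, 100, 100]
right = [100, 100, 120, 140, 120, 100, 100]
print(height_profile(left, right, pixel_size_nm=5.0))
```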
Offline programming of CD-SEM systems enhances wafer fab productivity
Author(s):
Rudolf Schiessl
Show Abstract
The high cost of semiconductor manufacturing equipment and facilities has driven the industry to develop cost-of-ownership models to evaluate equipment productivity. However, traditional cost-of-ownership models do not accurately reflect the negative effect of system setup time on overall equipment productivity. The overall equipment effectiveness (OEE) model addresses this shortcoming by treating system setup time as nonproductive time. We have discovered an opportunity to decrease this nonproductive time through the off-line programming capabilities of the new Opal CD-SEM workstation. Off-line programming allows the process engineer to develop equipment setup recipes concurrently with system operation in production. The new system allows the process engineer to use the fabrication area's existing computer network to access the mask CAD database, stepper job data, and Opal's library of program elements to create a process recipe long before any wafers are actually processed. In practice, this reduces nonproductive equipment setup time and increases the overall production utilization of the CD-SEM. In fabs with relatively frequent changes in the production product mix, these improvements have increased productivity significantly.
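A simplified overall-equipment-effectiveness calculation of the kind referred to above is sketched below, with setup time counted as nonproductive; the weekly figures are hypothetical and serve only to show how reducing setup time raises the OEE.

```python
def overall_equipment_effectiveness(total_time_h, downtime_h, setup_time_h,
                                    ideal_rate_wph, wafers_out, wafers_good):
    """Simplified OEE = availability x performance x quality, with recipe setup
    counted as nonproductive time, as in the model referred to above."""
    productive_time = total_time_h - downtime_h - setup_time_h
    availability = productive_time / total_time_h
    performance = wafers_out / (ideal_rate_wph * productive_time)
    quality = wafers_good / wafers_out
    return availability * performance * quality

# Hypothetical week: offline recipe setup cuts setup time from 20 h to 4 h.
print(round(overall_equipment_effectiveness(168, 10, 20, 20, 2300, 2250), 3))
print(round(overall_equipment_effectiveness(168, 10, 4, 20, 2500, 2450), 3))
```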
Error estimation for lattice methods of stage self-calibration
Author(s):
Michael R. Raugh
Show Abstract
For more than a dozen years, stage self-calibration methods have been proposed for ameliorating overlay alignment problems. The idea is that a stage, calibrated to an accurate Cartesian standard, can reduce butting errors and, in mix-and-match, eliminate the need to force all line-of-duty lithography tools to conform to an identically distorted standard, with all of the re-adjustments and bookkeeping that entails. For all of their promise, there are important issues that must be resolved before self-calibration strategies can be trusted. The major problem is the determination of rigorously established error bounds: self-calibration procedures can be useful only if they assuredly improve the accuracy of the machines they are used to calibrate. However, in a previous paper the author showed that a basic operation in the so-called lattice methods of self-calibration, namely rotation of a symmetric artifact, leads to an unstable set of equations, raising the question of whether such procedures can possibly yield improved accuracy. This paper addresses that problem explicitly and thereby demonstrates a method for deriving the needed error bounds. The example used is a rotationally symmetric (regular) polygon. For the purpose of exposition, qualitative algebraic rather than numeric results are emphasized, but the method can be applied to produce the necessary numbers for general calibration lattices and hence to error estimation for practical stage self-calibration.
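The author's algebraic treatment is not reproduced here; the toy sketch below merely illustrates, on a small regular polygon, how stacking rotation-only measurement equations and inspecting their singular values exposes combinations of stage and artifact errors that the data cannot determine.

```python
import numpy as np

# Toy model (not the author's formulation): unknown stage errors g_i at the N
# vertices of a regular polygon and unknown artifact mark errors a_i. A
# measurement set taken with the artifact rotated by k vertex steps gives
#     m_i = g_i + a_{(i + k) mod N}.
# Stacking these equations and inspecting the singular values of the design
# matrix shows which combinations of unknowns the rotation data cannot fix.
N = 8                      # vertices of the regular polygon
rotations = [0, 1, 2]      # artifact rotated by 0, 1, and 2 vertex steps

rows = []
for k in rotations:
    for i in range(N):
        row = np.zeros(2 * N)
        row[i] = 1.0                    # coefficient of stage error g_i
        row[N + (i + k) % N] = 1.0      # coefficient of artifact error a_{i+k}
        rows.append(row)
design = np.array(rows)

singular_values = np.linalg.svd(design, compute_uv=False)
print("smallest singular values:", np.round(singular_values[-4:], 6))
# Near-zero singular values flag poorly determined modes, i.e. an unstable
# set of equations unless additional constraints or views are added.
```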
Innovations in monitoring for sub-half-micron production
Author(s):
Teresa L. Lauck;
Kristin Wiley
Show Abstract
As wafer substrates increase to 200 mm and critical dimensions decrease below a half micron, defect density control becomes more critical. Defects must be monitored and minimized in a timely manner in order to maintain profitability. In addition, as the number of masking layers increases, so do the cycles through the photolithography cells, which further highlights the need for improved imaging quality. The best-case scenario would be to monitor imaging integrity prior to committing production wafers. One of the greatest stimulators of innovation can be information. Such was the case for us when we received a new KLA 2132, which allowed us to inspect a higher percentage of the patterned surface within a wafer. The innovation discussed in this paper is the process of combining several individual non-production monitors into one comprehensive test monitor. This new monitor, inspected on the KLA, has provided better information on photo cell integrity than was previously available. The non-production monitor allows us to guarantee the quality of the photo imaging cell prior to committing production material. The use of this new comprehensive test monitor has enabled us to improve imaging quality and reduce rework rates without a significant impact on production uptime.
CMP-compatible alignment strategy
Author(s):
Eric Rouchouze;
Jean-Michel Darracq;
Jack Gemen
Show Abstract
As semiconductor technology continues its way toward smaller geometries, CMP has gained acceptance as the planarization technique for interconnect layers. Its benefits are well known, especially in terms of imaging. However, one of its major drawbacks is that it makes alignment of interconnect layers difficult, since a planarized alignment mark is less visible to the stepper's alignment system. Usual workarounds include clearing the process layers from the alignment mark before exposing the product layer. Although these workarounds provide a temporary solution, they are too costly to be viable in a mass-production environment. In this experiment, a non-zero alignment strategy using new mark designs was tested on the back-end layers of a 0.35 micrometer CMOS process. New mark designs were evaluated in which the space part of the gratings is filled with 'segments' of various widths, the purpose being to minimize the planarizing effect of the metallization process. Several criteria were taken into account in selecting the best mark design: the stepper's built-in alignment diagnostic software provides information on the quality of the alignment signal; the most important criterion is the product overlay measurement and its repeatability; and mark cross sections obtained with a FIB/SEM tool give an indication of the mark profile after metal deposition.
Complementary alignment metrology: a visual technique for alignment monitoring
Author(s):
David H. Ziger;
Pierre Leroux
Show Abstract
Complementary alignment metrology (CALM) is a new metrology technique for visually measuring stepper alignment correctable factors such as horizontal, vertical, and rotation offsets, as well as magnification errors. CALM is based on the concept that a line-and-space pattern exposed into resist will be completely cleared if, prior to development, it is exposed a second time with the grating shifted by exactly half its pitch. We have used this principle to fabricate test wafers that visually indicate the correctable factors. The estimated 3σ accuracy of CALM readings compared to box-in-box measurements is 0.03 micrometers. Linearity between CALM readings and box-in-box measurements is maintained for misalignments of +/- 0.13 micrometers. Using this technique allows baseline corrections to be performed more frequently.
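The clearing principle lends itself to a simple toy model, sketched below for a 50/50 line/space grating; the vernier-style readout from an array of deliberately offset gratings is a hypothetical construction, not necessarily the test-wafer layout used by the authors.

```python
import numpy as np

def residual_linewidth_um(misalignment_um, pitch_um):
    """Toy model of the double-exposure clearing principle for a 50/50 grating:
    after a second exposure shifted by exactly half a pitch plus a misalignment
    error, the resist left unexposed by both exposures has a width equal to the
    error (modulo the pitch); it vanishes only when the error is zero."""
    e = abs(misalignment_um) % pitch_um
    return min(e, pitch_um - e)

def read_misalignment(programmed_offsets_um, pitch_um, true_error_um):
    """Hypothetical vernier-style readout: among gratings given extra programmed
    offsets, the one that clears most completely indicates the alignment error."""
    residuals = [residual_linewidth_um(true_error_um + d, pitch_um)
                 for d in programmed_offsets_um]
    return -programmed_offsets_um[int(np.argmin(residuals))]

offsets = np.arange(-0.12, 0.121, 0.02)   # programmed offsets, micrometers
print(read_misalignment(offsets, pitch_um=1.0, true_error_um=0.06))
```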
Precise measurement of ARC optical indices in the deep-UV range by variable-angle spectroscopic ellipsometry
Author(s):
Pierre Boher;
Jean-Louis P. Stehle;
Jean-Philippe Piel;
Christophe Defranoux;
Louis Hennet
Show Abstract
Antireflective coatings and photoresists are characterized precisely by spectroscopic ellipsometry from the near IR to the deep UV (190 nm). A procedure based on a polynomial dispersion law is used to account for the optical indices of the films in the region where they are transparent. Thickness values provided by this technique are checked independently by grazing-incidence x-ray reflection at the cobalt Kα line. The procedure is valid for a range from very thin ARCs to very thick ones. Knowing the thickness of the film, a new method has been developed to extract the optical indices of the layer point by point over the entire wavelength range: at each wavelength, we use pairs of values measured at different incidence angles and adjust the optical indices using a Levenberg-Marquardt algorithm. Compared to the standard point-to-point procedure, this method has two main advantages. At the extrema of the interference fringes, the better stability of the algorithm leads to more accurate extraction even for very thick films, because the positions of the extrema depend only slightly on the angle of incidence. In addition, the optimum angle of incidence for a given wavelength is generally included in the interval. A further benefit is that standard deviations on the n and k values are also extracted directly, giving an evaluation of the accuracy of the procedure at each wavelength. The method is applied first to a transparent, thick SiO2 film on silicon to demonstrate the improved accuracy, and then to antireflective coatings of various thicknesses down to the deep-UV range. Simulations of normal-incidence reflectance can then be made easily using these optical indices.
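A minimal sketch of the point-by-point, multi-angle fit described above is given below, using a Levenberg-Marquardt least-squares solver on a single-film ellipsometry model; the film thickness, angles, and optical constants are assumed values, and the authors' exact model and sign conventions may differ.

```python
import numpy as np
from scipy.optimize import least_squares

WAVELENGTH_NM = 248.0
THICKNESS_NM = 60.0                # film thickness assumed known from a prior fit
N_AMBIENT = 1.0
N_SUBSTRATE = 1.58 - 3.6j          # approximate silicon index near 248 nm
ANGLES_DEG = np.array([65.0, 70.0, 75.0])

def fresnel_rho(n_film, angle_deg):
    """Ellipsometric ratio rho = r_p / r_s for an ambient/film/substrate stack
    at one incidence angle (indices written n - i*k)."""
    th0 = np.deg2rad(angle_deg)
    n0, n1, n2 = N_AMBIENT, n_film, N_SUBSTRATE
    cos0 = np.cos(th0)
    cos1 = np.sqrt(1 - (n0 * np.sin(th0) / n1) ** 2)
    cos2 = np.sqrt(1 - (n0 * np.sin(th0) / n2) ** 2)
    rs = lambda na, nb, ca, cb: (na * ca - nb * cb) / (na * ca + nb * cb)
    rp = lambda na, nb, ca, cb: (nb * ca - na * cb) / (nb * ca + na * cb)
    phase = np.exp(-2j * 2 * np.pi * n1 * cos1 * THICKNESS_NM / WAVELENGTH_NM)
    r_s = (rs(n0, n1, cos0, cos1) + rs(n1, n2, cos1, cos2) * phase) / \
          (1 + rs(n0, n1, cos0, cos1) * rs(n1, n2, cos1, cos2) * phase)
    r_p = (rp(n0, n1, cos0, cos1) + rp(n1, n2, cos1, cos2) * phase) / \
          (1 + rp(n0, n1, cos0, cos1) * rp(n1, n2, cos1, cos2) * phase)
    return r_p / r_s

def residuals(nk, rho_measured):
    """Real/imaginary misfit between modeled and measured rho at all angles."""
    n, k = nk
    model = np.array([fresnel_rho(n - 1j * k, a) for a in ANGLES_DEG])
    diff = model - rho_measured
    return np.concatenate([diff.real, diff.imag])

# Synthetic "measurement" generated from assumed indices, then fitted back.
rho_meas = np.array([fresnel_rho(1.9 - 0.35j, a) for a in ANGLES_DEG])
fit = least_squares(residuals, x0=[1.7, 0.2], args=(rho_meas,), method="lm")
print("fitted n, k at 248 nm:", np.round(fit.x, 3))
```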
Contact holes: a challenge for signal collection efficiency and measurement algorithms
Author(s):
Eric P. Solecky;
Charles N. Archie
Show Abstract
As critical features become smaller, metrology becomes more challenging; this is especially true of contact holes and trenches with diameters of less than 0.25 micrometers. Reduced collection efficiency of secondary electrons from such contact holes can produce false edge sharpening and may interfere with obtaining critical information from the bottom of the contact hole. Whether the contact hole is open or closed is a key concern and can be determined from the waveform in many cases. This paper introduces a measure of collection efficiency, obtained by comparing the signal strength from the contact hole with a blanked-beam signal, and a method for determining whether the contact hole is open or closed. The collection-efficiency measure can be used to determine whether information from the bottom of the contact hole is meaningful. When it is found to be meaningful, the information is then evaluated to flag the contact hole's status. Results are based on saved-image analysis and lead into a discussion of the value of these parameters. Reporting the collection efficiency and whether the contact hole is open or closed, along with the diameter measurement, can provide a measure of quality not yet achieved in CD metrology.
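The authors' metric is not reproduced here; the sketch below is a toy formulation of a collection-efficiency measure referenced to a blanked-beam baseline, with purely illustrative thresholds for flagging whether the bottom signal is meaningful and whether the contact appears open or closed.

```python
import numpy as np

def collection_efficiency(hole_bottom_signal, reference_signal, blanked_signal):
    """Toy metric: signal recovered from the contact-hole bottom relative to an
    open-area reference, after subtracting the blanked-beam (zero-signal) level."""
    return (np.mean(hole_bottom_signal) - blanked_signal) / \
           (np.mean(reference_signal) - blanked_signal)

def classify_contact(hole_bottom_signal, reference_signal, blanked_signal,
                     efficiency_floor=0.05, open_threshold=0.3):
    """Flag whether bottom information is meaningful and, if so, whether the
    contact appears open or closed (thresholds are illustrative only)."""
    eff = collection_efficiency(hole_bottom_signal, reference_signal, blanked_signal)
    if eff < efficiency_floor:
        return eff, "bottom signal too weak to judge"
    return eff, "open" if eff > open_threshold else "closed"

# Hypothetical grey-level data from a saved image.
print(classify_contact([12, 14, 13], [60, 62, 61], blanked_signal=5.0))
```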
Wavefront engineering from 500-nm to 100-nm CD
Author(s):
David Levenson
Show Abstract
'Wavefront engineering' is the discipline of producing an exposure pattern, adequate for delineating resist at high yield, in spite of the limitations of the imaging technology. Although this discipline relies on century-old optical insights, and despite decade-past experiments demonstrating dramatic improvements in resolution and process window, this field has languished in comparison to traditional approaches, such as exposing with shorter wavelength radiation or larger numerical apertures. Further progress in NA and λ will soon be limited by physical and materials considerations, necessitating other schemes for decreasing the critical dimensions of volume-production devices. Today, with 193 nm systems delayed and non-optical approaches confronting infrastructure and economic barriers, the semiconductor industry is trying to adopt such wavefront engineering techniques as off-axis illumination, optical proximity correction, and phase-shifting masks. CAD/CAM methods similar to those applied to optimize lenses and chips are now being applied to optimize the exposure-dose pattern itself.
Optical lithography--thirty years and three orders of magnitude: the evolution of optical lithography tools
Author(s):
John H. Bruning
Show Abstract
The evolution of optical lithography is traced back more than 30 years to its beginnings with contact printing. As the complexity of integrated circuits increased, the intolerance for defects drove the industry to projection printing. Projection printing was introduced in the early 1970's by imaging the full wafer at 1:1 magnification. The rapid increase in wafer sizes was accommodated by annular field scanning using 1:1 imaging mirror systems. Decreased linewidths and tighter overlay budgets combined with larger wafers created huge difficulties for the mask maker which were not relieved until the introduction of reduction step-and-repeat printing of small blocks of chips in the late 1970's. Further demands for smaller linewidths and larger chips have driven optical lithography to shorter wavelengths and to scanning the chip in a step-and-scan printing mode. Future advancements in lithography will likely combine novel scanning techniques with further reductions in wavelength.