Computation lithography—the use of mathematical modeling and computational techniques in photolithography-system design and analysis—has its roots in lithographic simulations in the 1970s. It has since grown to encompass many technologies that are indispensable for integrated-circuit development,^{1–3} such as model-based optical-proximity correction (OPC), post-OPC verification, hotspot fixing, inverse lithography, and double patterning.^{4} Much effort in lithography research has focused on ensuring that the virtual and physical manufacturing environments match each other as closely as possible while allowing for efficient numerical computation.

Real manufacturing settings usually contain variabilities, most commonly represented by dose and focus variations that exist during imaging. Designs based on nominal imaging parameters may not produce the expected, correct outcomes when the actual, physical system deviates from its predefined conditions. Proper analysis and quantification of manufacturing variabilities are therefore important: they dictate the required design margins which, in turn, affect the achievable circuit performance. Consequently, variation-aware approaches are increasingly important for ‘design for manufacturability’ (DFM). For instance, we have shown how to incorporate focus variations in inverse lithography, leading to more robust mask designs.^{5,6}

However, much less attention has been paid to the inherent variabilities of computation-lithography algorithms themselves, which must be analyzed so that fluctuations can be quantified, monitored, and minimized. We argue that it is also necessary to devise specifications that are realistic, practical, and useful, rather than imposing unattainable requirements or expectations.^{7,8}

Consider the example of identifying lithography hotspots (regions of a layout with relatively poor lithographic latitude or resolution) during lithography process checks. Their existence indicates a tendency toward yield loss. However, lithography-process-check algorithms exhibit variability, leading to fluctuations and mismatches in simulation results. Two identical layout configurations may, therefore, receive inconsistent hotspot classifications. We have identified several sources of error that account for this effect,^{7,8} including simulator-to-self and simulator-to-simulator variability.

First, simulator-to-self variability (or ‘auto-inconsistency’) can arise under a range of circumstances. For instance, when the same physical design (such as a standard cell) is instantiated twice but at different orientations (e.g., with a 90° difference: see Figure 1), their simulation results may still differ, even assuming horizontal-vertical illumination symmetry, if the lithography model exhibits asymmetries.
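This effect can be reproduced with a toy model. The sketch below (an invented separable Gaussian blur, not the actual lithography models of the cited work) images a rectangle and its 90°-rotated instance; whenever the horizontal and vertical blur widths differ, rotating the second image back does not reproduce the first:

```python
import numpy as np

def blur(profile, sigma):
    """Gaussian blur of a 1D profile (stand-in for an optical kernel)."""
    t = np.arange(-10, 11)
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(profile, k / k.sum(), mode="same")

def image(mask, sx, sy):
    """Separable toy 'aerial image': blur along x, then along y."""
    out = np.apply_along_axis(blur, 1, mask, sx)  # horizontal blur
    return np.apply_along_axis(blur, 0, out, sy)  # vertical blur

# A binary mask containing one rectangle (wider than tall).
mask = np.zeros((64, 64))
mask[28:36, 20:44] = 1.0

# Asymmetric model: different effective blur along x and y.
img_0 = image(mask, sx=2.0, sy=2.3)             # original orientation
img_90 = image(np.rot90(mask), sx=2.0, sy=2.3)  # 90-degree rotated instance

# Under a symmetric model, rotating the second image back reproduces
# the first exactly; with sx != sy it does not.
mismatch = np.abs(np.rot90(img_90, -1) - img_0).max()
print(f"max image mismatch: {mismatch:.4f}")  # nonzero because sx != sy
```

With `sx == sy` the mismatch collapses to floating-point noise, which is exactly the symmetry that a real, slightly asymmetric model fails to satisfy.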

Variability may also occur if the image is computed using the ‘dense’ approach.^{9} Here, image intensities are computed for a set of grid points (see Figure 2). Two representations of the same physical design with a relative translation of Δ*x*, Δ*y* (non-integer multiples of the spacing δ) would lead to images that are slightly different because of interpolation.
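A one-dimensional sketch makes the interpolation effect concrete. Here the aerial image is an invented smooth profile sampled on a grid of spacing δ; the same feature point is evaluated for two placements of the design that differ by a non-integer multiple of δ:

```python
import numpy as np

def aerial(x):
    """Hypothetical smooth aerial-image profile (1D test function)."""
    return 0.5 + 0.4 * np.sin(2 * np.pi * x / 7.0)

delta = 1.0                         # grid spacing
grid = np.arange(0.0, 40.0, delta)

def value_at_feature(shift, p=10.3):
    """Intensity at a fixed feature point of a layout placed at offset
    `shift`, reconstructed from grid samples by linear interpolation."""
    samples = aerial(grid - shift)  # 'dense' samples of the shifted layout
    return np.interp(p + shift, grid, samples)

v_a = value_at_feature(0.0)    # first instantiation of the design
v_b = value_at_feature(0.37)   # identical design, shifted by 0.37*delta
print(abs(v_a - v_b))          # nonzero: a pure interpolation artifact
```

Both evaluations target the same physical point of the same physical design; the discrepancy comes entirely from the sampling phase relative to the grid.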

**Figure 1. **Variability caused by orientation.

**Figure 2. **Variability caused by image interpolation.

Measures that improve computation throughput may also contribute to variability. In Figure 3, a long wire terminates at a region densely populated with shapes, containing our hotspot of interest. Through hierarchical operation or parallel processing, the long wire may be truncated to limit the area of computation. When two representations of the layout are truncated differently, the resulting edges of the truncated wire are segmented differently as well, leading to variations in the final (retargeted) layouts. Furthermore, some OPC algorithms adjust edges sequentially, so the resulting masks depend on the order in which the edges are moved. Starting the OPC iteration from opposite corners of a given layout can result in substantial simulated critical-dimension (CD) discrepancy (see Figure 4). (The CD is the dimension of the smallest features that the process must print reliably.)
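The order dependence of sequential edge adjustment can be illustrated with a toy model (an invented proximity coupling, not any particular OPC engine): each edge correction immediately changes the error seen by edges adjusted later, so sweeping from opposite ends of the layout yields different masks after a finite number of iterations:

```python
import numpy as np

def cd_error(pos, i, target=1.0, coupling=0.3):
    """Local CD error at edge i: deviation from target plus a proximity
    contribution from neighbouring edges (invented coupling model)."""
    left = pos[i - 1] if i > 0 else 0.0
    right = pos[i + 1] if i < len(pos) - 1 else 0.0
    return (pos[i] - target) + coupling * (left + right)

def opc_sweep(pos, order, step=0.5, iters=3):
    """Adjust edges sequentially in the given order (Gauss-Seidel style):
    each move immediately changes the error seen by later edges."""
    pos = pos.copy()
    for _ in range(iters):
        for i in order:
            pos[i] -= step * cd_error(pos, i)
    return pos

start = np.array([0.2, 0.8, 1.5, 0.6, 1.1])  # initial edge positions
fwd = opc_sweep(start, order=range(len(start)))              # one corner
rev = opc_sweep(start, order=range(len(start) - 1, -1, -1))  # the other
print(np.abs(fwd - rev).max())  # nonzero: the result depends on the order
```

If the iteration were run to full convergence the two orders would agree, but production OPC stops after a fixed budget of iterations, so the order-dependent residual survives into the final mask.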

**Figure 3. **Variability caused by image computation.^{7} The segmented edge lengths are between d_{min} and d_{max}.

**Figure 4. **Variability caused by the order of edge movement.^{7}

**Figure 5. **Example probability distribution of simulated critical dimension.^{7}

Second, it is often impractical to apply the same algorithm during the hotspot-analysis stage in physical DFM as for signoff verification. Using different algorithms leads to many more sources of inconsistency because algorithms that make disparate accuracy-throughput tradeoffs result in discrepant images. In addition to all sources of simulator-to-self variability, simulator-to-simulator variability may be due to model fitting, interpolation, contouring, numerical differentiation, and process robustness.
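Contouring alone is enough to produce simulator-to-simulator disagreement. In the sketch below (an invented sigmoid-edged intensity profile), the same sampled data yields different CDs depending on whether threshold crossings are snapped to grid points or located by linear interpolation:

```python
import numpy as np

# One sampled intensity profile of a line feature (invented sigmoid edges).
x = np.arange(0.0, 20.0, 1.0)
intensity = (1.0 / (1.0 + np.exp(-(x - 5.3)))
             / (1.0 + np.exp(x - 14.1)))

def cd_nearest(th=0.5):
    """CD from grid points above threshold (coarse contouring)."""
    above = np.where(intensity > th)[0]
    return float(above[-1] - above[0])

def cd_interp(th=0.5):
    """CD from linearly interpolated threshold crossings."""
    above = intensity > th
    i = np.argmax(above)                         # first sample above threshold
    j = len(above) - 1 - np.argmax(above[::-1])  # last sample above threshold
    xl = x[i - 1] + (th - intensity[i - 1]) / (intensity[i] - intensity[i - 1])
    xr = x[j] + (intensity[j] - th) / (intensity[j] - intensity[j + 1])
    return xr - xl

print(cd_nearest(), cd_interp())  # the two contouring choices disagree
```

Here the two answers differ by nearly a full grid spacing, which a downstream threshold-based hotspot classification can easily turn into a different verdict.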

Algorithm variability implies that the results are neither perfectly accurate nor absolutely precise. Consider a hotspot-analysis procedure in which we first identify candidate hotspots, compute their CDs, and classify each as a hotspot or non-hotspot on the basis of its CD^{7,10} (see Figure 5). Any algorithm will produce a noisy CD distribution (dotted line). Two independent analyses (a reference analysis and a second trial) therefore yield two noisy distributions. We define the hotspot-matching rate as the number of hotspots common to the two analyses divided by the number of hotspots in the reference. The ‘extra rate’ is the number of trial hotspots not present in the reference analysis, divided by the number of hotspots in the reference. We computed these rates for various scenarios,^{7,8} including 15% variation in both simulators, 5% variation in both, 5% variation in both with a 5% increase in the trial threshold, and 5% and 15% variation for the reference and trial simulators, respectively. For this set of experiments, we obtained matching and extra rates of, respectively, 43.9 and 56.1%, 73.4 and 26.6%, 89.4 and 63.5%, and 62.3 and 82.9%. Algorithm variability can thus substantially affect the quality of the results.
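The two rates defined above reduce to simple set arithmetic. A minimal sketch, on toy hotspot identifiers rather than real layout coordinates:

```python
# Matching and extra rates on toy hotspot lists (real analyses would
# compare hotspot locations on a layout, not string labels).
reference = {"h1", "h2", "h3", "h4", "h5"}  # hotspots from the reference run
trial = {"h2", "h3", "h5", "h9"}            # hotspots from a second trial

matching_rate = len(reference & trial) / len(reference)  # common / reference
extra_rate = len(trial - reference) / len(reference)     # trial-only / reference
print(matching_rate, extra_rate)  # -> 0.6 0.2
```

Note that both rates are normalized by the reference count, which is why the extra rate can exceed 100% when the trial reports many hotspots the reference does not.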

Variability must be addressed at both algorithm and application levels. For the former, we need to improve precision by controlling and eliminating its root causes, while for the latter we need to set specifications that are commensurate with both the limitations of the algorithms and the goals of the application.^{7} This summarizes our future research directions.

*This work was partially supported by grants from the Research Grants Council of Hong Kong (projects 713906 and 713408).*

Edmund Y. Lam

University of Hong Kong

Hong Kong, China

Edmund Lam received BS, MS, and PhD degrees in electrical engineering from Stanford University. He first worked for KLA-Tencor in San Jose, California, and is currently an associate professor of electrical and electronic engineering at the University of Hong Kong. He is the founding director of the Imaging Systems Laboratory and conducts research in computational optics and imaging, particularly their applications in semiconductor manufacturing.