
Proceedings Paper

Application of deep learning algorithms for Lithographic mask characterization
Author(s): Dereje S. Woldeamanual; Andreas Erdmann; Andreas Maier

Paper Abstract

The appearance of defects on the photomask is a key challenge in lithographic printing. Printable defects perturb both the phase and the magnitude of the light and thereby cause errors in the sizes and locations of the printed features. At present, 193 nm optical inspection tools are still the main tools for detecting pattern defects on EUV masks [1]. However, small pattern defects on EUV masks cannot be detected because of the resolution limit of these 193 nm inspection tools. We propose and investigate the application of Convolutional Neural Networks (CNNs) to characterize and classify defects on lithographic masks. This paper details the training and evaluation of CNNs that classify defects in simulated aerial images for an EUV setting. The simulation software Dr.LiTHO is used to simulate aerial images of defect-free masks and of masks with different types and locations of defects. Specifically, we compute images of regular arrays of squares imaged with typical EUV lithography settings (λ = 13.5 nm, NA = 0.33). We consider five types of absorber defects: extrusion, intrusion, oversize, undersize, and center spot. The CNN architecture contains four convolutional layers with mixed filter sizes of 3×3 and 5×5. The convolution stride and the spatial padding are 1 pixel for all convolutional layers. Spatial pooling is carried out by four max-pooling layers. Two separate networks are trained to detect the defect type and the defect location, and a third algorithm combines their results. When an image is presented to the implemented algorithm and the trained networks, it returns the defect type together with its location. An accuracy of 99.9% on the training set and 99.3% on the test set is achieved for detection of the defect type. The network trained for location detection achieves 98.7% training accuracy and 98.0% accuracy on the test set.
Given a sufficient number of training images, the trained CNNs classify the types of defects and their locations in the aerial image with high accuracy. The proposed method can also be applied to other defect types and simulation settings.
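The architectural parameters stated in the abstract (stride 1, spatial padding 1, mixed 3×3 and 5×5 filters, four max-pooling stages) fix the feature-map geometry of the network. The following is a minimal stdlib-Python sketch of that size arithmetic; the 64×64 input size and the (3, 5, 3, 5) filter ordering are illustrative assumptions, since the abstract does not state them:

```python
# Feature-map size arithmetic for the CNN described in the abstract:
# four convolutional layers (stride 1, padding 1, mixed 3x3/5x5 filters),
# each followed here by a 2x2 max-pooling layer.

def conv_out(size: int, kernel: int, stride: int = 1, pad: int = 1) -> int:
    """Spatial output size of a convolution layer."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size: int, window: int = 2) -> int:
    """Spatial output size of a non-overlapping max-pooling layer."""
    return size // window

def feature_map_sizes(input_size: int = 64, kernels=(3, 5, 3, 5)):
    """Trace the feature-map size through the four conv + pool stages."""
    sizes = [input_size]
    size = input_size
    for kernel in kernels:
        size = pool_out(conv_out(size, kernel))
        sizes.append(size)
    return sizes

print(feature_map_sizes())  # [64, 32, 15, 7, 2]
```

Note that with a padding of 1 pixel, the 3×3 layers preserve the spatial size while the 5×5 layers shrink it by 2 before each pooling stage, consistent with the settings given in the abstract.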

Paper Details

Date Published: 28 May 2018
PDF: 12 pages
Proc. SPIE 10694, Computational Optics II, 1069408 (28 May 2018); doi: 10.1117/12.2312478
Author Affiliations:
Dereje S. Woldeamanual, Friedrich-Alexander-Univ. Erlangen-Nürnberg (Germany)
Andreas Erdmann, Friedrich-Alexander-Univ. Erlangen-Nürnberg (Germany)
Fraunhofer-Institut für Integrierte Systeme und Bauelementetechnologie IISB (Germany)
Andreas Maier, Friedrich-Alexander-Univ. Erlangen-Nürnberg (Germany)


Published in SPIE Proceedings Vol. 10694:
Computational Optics II
Daniel G. Smith; Frank Wyrowski; Andreas Erdmann, Editor(s)

© SPIE.