Background: Optical lithography is a key technology for fabricating very-large-scale integrated circuits. As the critical dimension of integrated circuits approaches the diffraction resolution limit, thick-mask effects have begun to significantly influence lithography image quality.
Aim: We develop a computational lithography approach, dubbed source and polarization joint optimization (SPO), to compensate for image distortion in the thick-mask lithography process.
Approach: SPO manipulates the intensity distribution and polarization angles of the pixelated light source to modulate the light field diffracted off the photomask, thus improving the lithography image quality over varying process conditions. The thick-mask effects are accounted for in the imaging model using a rigorous three-dimensional diffraction simulator. The SPO framework considers the image errors on both the focal and defocus imaging planes under exposure variation. Two gradient-based optimization algorithms, namely simultaneous SPO (SiSPO) and sequential SPO (SeSPO), are developed.
Results: The superiority of the proposed methods is verified by a set of numerical experiments.
Conclusion: The SeSPO algorithm outperforms the SiSPO algorithm in terms of image fidelity, process window, and computational efficiency.
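The two update schemes can be contrasted in a few lines. The toy example below is our own illustration, not the paper's implementation: the `cost_grad` callback, learning rate, and intensity clipping are all assumptions standing in for gradients of a rigorous thick-mask imaging model.

```python
import numpy as np

def spo_step(source, pol, cost_grad, lr=0.1, mode="simultaneous"):
    """One optimization step over source intensities and polarization angles.

    `cost_grad(source, pol)` must return (cost, d_source, d_pol); here it is
    a placeholder for gradients of a rigorous 3-D thick-mask imaging model.
    """
    cost, gs, gp = cost_grad(source, pol)
    if mode == "simultaneous":                  # SiSPO-style: update both at once
        source = np.clip(source - lr * gs, 0.0, 1.0)
        pol = pol - lr * gp
    else:                                       # SeSPO-style: source first, then angles
        source = np.clip(source - lr * gs, 0.0, 1.0)
        _, _, gp = cost_grad(source, pol)       # re-evaluate after the source move
        pol = pol - lr * gp
    return source, pol, cost

# Toy quadratic cost pulling both variable sets toward fixed targets.
def toy_cost_grad(s, p, s_t=0.6, p_t=0.3):
    cost = np.sum((s - s_t) ** 2) + np.sum((p - p_t) ** 2)
    return cost, 2 * (s - s_t), 2 * (p - p_t)

s, p = np.full(4, 0.1), np.zeros(4)
for _ in range(100):
    s, p, c = spo_step(s, p, toy_cost_grad, mode="sequential")
```

On this separable toy cost the two schemes converge to the same point; the paper's finding that SeSPO outperforms SiSPO arises only under the coupled, non-convex lithography cost.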
The digital micromirror device (DMD) based lithography system, which generates the mask pattern via a spatial light modulator, is increasingly applied in micro/nano fabrication due to its high flexibility and low cost. However, the exposure image is subject to distortion because of the optical proximity effect and non-ideal system conditions. Correcting the mask pattern with a calibrated imaging model is an essential approach to improving the image fidelity of a DMD-based lithography system. This paper introduces an imaging model calibration method for the DMD-based lithography testbed established by our group. The error convolution kernel and the point spread function in the imaging model are optimized with the batch gradient descent algorithm to fit a set of training data, which represents the impact of the non-ideal imaging process of the testbed. Based on the calibrated imaging model, the steepest descent algorithm is used to correct the mask pattern, thus improving the image fidelity of the testbed. Experiments demonstrate the effectiveness of the proposed model calibration method. They also show that, within a certain range, the size of the error convolution kernel significantly influences the accuracy of the calibrated imaging model. Finally, the effectiveness of the mask correction method is confirmed by experimental results.
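As a rough illustration of the calibration idea, the sketch below fits only an error convolution kernel by batch gradient descent under a toy linear model with a squared-error loss. The function name, the 'valid' convolution mode, and the synthetic training data are our assumptions; the real testbed model also co-optimizes the point spread function.

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

def fit_error_kernel(masks, targets, k=5, lr=1e-3, epochs=200):
    """Fit a k-by-k error convolution kernel by batch gradient descent so
    that convolve2d(mask, kernel, 'valid') matches the measured images.
    (Toy linear model; the testbed model also includes a PSF term.)"""
    K = np.zeros((k, k))
    for _ in range(epochs):
        grad = np.zeros_like(K)
        for M, Y in zip(masks, targets):
            R = convolve2d(M, K, mode="valid") - Y       # residual image
            # exact SSE gradient for 'valid' convolution (note the flip)
            grad += 2.0 * correlate2d(M, R, mode="valid")[::-1, ::-1]
        K -= lr * grad / len(masks)
    return K

# Synthetic demo: recover a known kernel from two random "training" masks.
rng = np.random.default_rng(0)
K_true = rng.standard_normal((5, 5)) * 0.1
masks = [rng.standard_normal((16, 16)) for _ in range(2)]
targets = [convolve2d(M, K_true, mode="valid") for M in masks]
K_fit = fit_error_kernel(masks, targets)
```

The gradient here is exact for the 'valid' convolution model, which is why the flipped cross-correlation of the mask with the residual appears in the update.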
KEYWORDS: Matrices, Image classification, Lithography, Education and training, Convolution, Machine learning, Signal processing, Deep learning, Mathematical optimization, Data modeling
Background: Layout classification is an important step in computational lithography approaches such as source-mask joint optimization, in which representative samples are selected from each layout category to guide the source optimization. As an emerging machine learning method, the graph convolutional network (GCN) can effectively perform graph or image classification by defining a new propagation function to carry out convolution on a topological graph.
Aim: We propose a new GCN model combined with a graph attention mechanism, dubbed GAM-GCN, to classify lithography layout patterns quickly and accurately.
Approach: By adding a graph attention layer, the weight coefficients of each pair of neighboring nodes are learned adaptively to improve network performance. In addition, the model incorporates a skip-connection structure to alleviate the over-smoothing problem caused by deep GCN models.
Conclusions: Compared with several traditional deep learning methods and the plain GCN method, GAM-GCN obtains a significant improvement in classification accuracy while maintaining computational efficiency.
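The adaptive pair-wise weighting can be illustrated with a GAT-style attention layer. The sketch below is our simplification, with illustrative shapes and a dense double loop: it computes softmax-normalized attention coefficients over each node's neighborhood, including a self-loop.

```python
import numpy as np

def attention_coefficients(H, A, W, a, alpha=0.2):
    """GAT-style attention: score each neighbor pair with a LeakyReLU of a
    learned vector `a` applied to the concatenated transformed features,
    then softmax-normalize over each node's neighborhood (plus self-loop)."""
    Z = H @ W                                  # transformed node features
    n = Z.shape[0]
    e = np.zeros((n, n))
    for i in range(n):                         # dense O(n^2) loop for clarity
        for j in range(n):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else alpha * s   # LeakyReLU
    mask = (A + np.eye(n)) > 0                 # restrict to edges + self-loops
    e = np.where(mask, e, -np.inf)
    e = e - e.max(axis=1, keepdims=True)       # stabilize the softmax
    return np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)
```

Each row of the returned matrix sums to one, so the coefficients act as learned, data-dependent edge weights in the subsequent feature aggregation.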
Optical proximity correction (OPC) is regarded as one of the most important computational lithography approaches to improve the imaging performance of the sub-wavelength lithography process. Traditional OPC methods are computationally intensive, as they pre-warp the mask pattern based on inverse optimization models. This paper develops a new kind of pixelated OPC method based on an emerging machine learning technique, the graph convolutional network (GCN), to improve computational efficiency. In the proposed method, the target layout is raster-scanned into a pixelated image, and the GCN predicts its corresponding OPC solution pixel by pixel. For each layout pixel, we first sub-sample the surrounding geometrical features using an incremental concentric-circle sampling method, and these sampling points are converted into graph signals. The GCN model is then established to process the pre-defined graph signals and predict the central pixel of the sampling region on the OPC pattern. After that, the GCN moves on to predict the OPC solution of the next layout pixel. The proposed OPC method is validated and discussed on a set of simulations and compared with traditional OPC methods.
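A minimal sketch of the concentric-circle sub-sampling step, under our own simplifying assumptions (nearest-pixel rounding, border wrapping, and illustrative radii and point counts):

```python
import numpy as np

def concentric_circle_sample(layout, cy, cx, radii=(2, 4, 8), n_points=8):
    """Sample the pixelated layout at n_points nearest-pixel positions on
    each concentric circle around (cy, cx); the center pixel comes first.
    The returned vector is the per-pixel feature later arranged as a graph
    signal for the GCN."""
    h, w = layout.shape
    feats = [layout[cy, cx]]
    for r in radii:
        for k in range(n_points):
            t = 2.0 * np.pi * k / n_points
            y = int(round(cy + r * np.sin(t))) % h   # wrap at the borders
            x = int(round(cx + r * np.cos(t))) % w
            feats.append(layout[y, x])
    return np.asarray(feats)
```

Sliding this sampler across the layout yields one fixed-length feature vector per pixel, which is what lets a single trained network predict the OPC pattern pixel by pixel.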
Layout classification is an important task in lithography simulation approaches such as source optimization (SO) and source-mask joint optimization (SMO). To balance optimization performance against time consumption, it is necessary to classify a large number of layout clips that share the same key patterns. This paper proposes a new classification method for lithography layout patterns based on the graph convolutional network (GCN). The GCN is an emerging machine learning approach that achieves impressive performance in processing graph signals with non-Euclidean topological structures. The proposed method first transforms the layout patterns into graph signals, where the sum of several adjacent layout pixels is associated with one graph vertex. Next, adjacent graph vertices are connected by graph edges, whose weights are determined by the correlations between the vertices. The layout geometry can therefore be represented by the function values on the graph vertices together with the adjacency matrix. Subsequently, the GCN framework is established based on the graph Fourier transform, where the input is the graph signal of the layout and the output is its classification label. The network parameters of the GCN are trained in a supervised manner. The proposed method is compared with a simple convolutional neural network (CNN) with a few layers and with the VGG-16 network, respectively. Finally, the different methods are discussed in terms of classification accuracy and computational efficiency.
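The graph-convolution step motivated by the graph Fourier transform is commonly implemented with the renormalized propagation rule H' = sigma(D^{-1/2} (A + I) D^{-1/2} H W). The sketch below is a generic single layer under that assumption, not the paper's exact network:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: add self-loops, symmetrically normalize
    the adjacency matrix, aggregate neighbor features, then apply ReLU."""
    A_hat = A + np.eye(A.shape[0])                     # self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # D^{-1/2} diagonal
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)             # aggregate + ReLU
```

Stacking such layers and pooling the vertex features into a classification head gives a network whose input is the layout's graph signal and whose output is its class label, as described above.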