1. Introduction

Photolithography is a critical process in modern integrated circuit manufacturing. As the technology advances, the feature size of the design pattern shrinks. Because the feature size is now much smaller than the wavelength of the exposure light source, optical diffraction effects and the acid diffusion effects of the chemically amplified resist have become key factors in printing fidelity. Standing waves arise during exposure and, together with acid diffusion, degrade the edges of the patterned image. The standing wave effect, shown in Fig. 1, is generally mitigated by the postexposure bake (PEB) process. However, most simulations with resolution enhancement techniques (RETs) do not consider acid diffusion, and those that do typically apply only a Gaussian function to model it. Because the influence of acid diffusion during the PEB process grows as the linewidth scales down, this issue needs to be examined more seriously.

In this paper, we incorporate the effect of the PEB process into the RETs. Unlike traditional methods, the Sylvester equation is applied to simulate the PEB with the acid diffusion effect, achieving a faster runtime than convolution with negligible accuracy loss. In addition, our simulation flow uses Abbe-principal component analysis (Abbe-PCA) to obtain the aerial image through the optical lens system,1 which executes source mask optimization (SMO) more time-efficiently than Hopkins' method. We also propose an efficient cost function, the image intensity error with an error penalty coefficient (EPC-IIE), to calculate the edge intensity error and obtain a fast judgment for every candidate image.

The rest of this paper is organized as follows. In Sec. 2, a new method for PEB acid diffusion based on the Sylvester equation is proposed. Section 3 describes the RETs that take the proposed PEB acid diffusion solution into consideration. The results are shown in Sec. 4.

2. PEB Acid Diffusion Simulation

The PEB process, a thermally driven chemical reaction, is an effective method of reducing standing waves. After photoacid is generated in the photoresist, a chemical latent image forms through photoacid diffusion. For a faster and more accurate PEB process simulation, the Sylvester equation is applied to model the photoacid diffusion function.

2.1. Postexposure Bake Diffusion

In the exposure process, some portion of the light is not absorbed by the photoresist and reaches the substrate surface, where it is reflected. The incident and reflected light interfere constructively and destructively, forming standing waves. The most useful method of reducing the standing wave effect is the PEB,2 proposed by Walker.3 We use Fick's second law of diffusion to describe the molecular diffusion:

$$\frac{\partial C_A}{\partial t} = D_A \nabla^2 C_A, \quad (1)$$

where $C_A$ is the concentration of species $A$, $D_A$ is the diffusion coefficient of $A$ at the PEB temperature $T$, $t$ is the PEB time, and the Laplacian $\nabla^2$ collects the second-order derivatives along the different directions. In general cases, it is accurate to assume that the diffusivity is independent of the concentration, so we can solve the diffusion equation of species $A$ under general conditions and a given initial distribution. Consider one possible initial condition known as the impulse source, which is a delta function of concentration. For the boundary conditions, we assume that the concentration in Eq. (1) goes to zero as the distance $r$ from the source approaches infinity. The solution of Eq. (1) is then the Gaussian distribution

$$C_A(r) \propto \exp\!\left(-\frac{r^2}{2\sigma_D^2}\right),$$

where $\sigma_D = \sqrt{2 D_A t}$ is the diffusion length and $r$ denotes the distance from the reference point to the origin. In the general case, we can therefore simulate the diffusion function with a convolution method.
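To make the convolution baseline concrete, the following is a minimal sketch in Python (our simulator itself is implemented in MATLAB, so this is an illustration, not our code). Here `scipy.ndimage.gaussian_filter` applies the Gaussian kernel of the impulse solution above; all parameter values are placeholders rather than the settings of Table 1.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def peb_gaussian(acid, D, t_peb, dx):
    """Baseline PEB model: convolve the initial acid map with a Gaussian.

    acid  : 2-D array, acid concentration right after exposure
    D     : diffusion coefficient (nm^2/s), temperature dependent
    t_peb : PEB time (s)
    dx    : grid spacing (nm)
    """
    sigma_d = np.sqrt(2.0 * D * t_peb)                 # diffusion length (nm)
    # mode="constant" enforces the zero-concentration far-field boundary.
    return gaussian_filter(acid, sigma=sigma_d / dx, mode="constant")

# Illustrative values only (not the parameters of Table 1).
acid0 = np.zeros((256, 256)); acid0[128, 128] = 1.0    # impulse source
after = peb_gaussian(acid0, D=10.0, t_peb=50.0, dx=1.7)
```

Each PEB time step then costs a full-image convolution; avoiding this repeated cost is the motivation for the Sylvester-equation formulation developed in Secs. 2.2 to 2.4.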
2.2. Efficient Diffusion Equation Simulation Methods

Now consider the solution of the diffusion equation in one dimension,

$$\frac{\partial C}{\partial t} = D \frac{\partial^2 C}{\partial x^2},$$

with zero concentration imposed at the boundaries. We discretize it on an equally spaced grid, $x_j = j\,\Delta x$ and $t_n = n\,\Delta t$, and let $\alpha = D\,\Delta t/\Delta x^2$. We calculate the answer as in Eqs. (4) or (5) for each time step.

Stability is the most important issue for numerical methods: if we set the wrong constraints, the numerical solution grows toward infinity or otherwise goes out of control. Von Neumann stability analysis,4 also known as Fourier stability analysis, is a procedure used to check the stability of finite difference schemes applied to linear partial differential equations. Consider the time evolution of a single Fourier mode with wave number $k$:

$$C_j^n = \xi^n e^{\,i k j \Delta x}.$$

The factor $\xi$ is the amplification of the Fourier mode at each time step. From Eq. (7), we can clearly see that the Fourier mode is stable when $|\xi| \le 1$.

2.3. Crank–Nicolson Method

The Crank–Nicolson method5 is a finite difference method used for numerically solving the heat equation and similar partial differential equations. Revisiting the original diffusion equation, the scheme we use is

$$\frac{C_j^{n+1} - C_j^n}{\Delta t} = \frac{D}{2\,\Delta x^2}\left[\left(C_{j+1}^{n+1} - 2C_j^{n+1} + C_{j-1}^{n+1}\right) + \left(C_{j+1}^{n} - 2C_j^{n} + C_{j-1}^{n}\right)\right]. \quad (8)$$

Performing the von Neumann stability analysis on Eq. (8) gives the amplification factor

$$\xi = \frac{1 - 2\alpha \sin^2(k \Delta x/2)}{1 + 2\alpha \sin^2(k \Delta x/2)},$$

and $|\xi| \le 1$ for all values of $k$. It follows that the Crank–Nicolson scheme is unconditionally stable. The advantage of the Crank–Nicolson scheme is that not only is it unconditionally stable, but it can also be sped up using matrix algebra.

Now consider the solution in two dimensions. Adopting the Crank–Nicolson scheme yields the iteration form of Eq. (9). We set the numbers of grid points in the $x$ and $y$ directions as $m$ and $p$, respectively; note that these are not the same as the number of time steps. Rewriting the iteration in matrix form lets us calculate the solution for every grid point at once,

$$\left(\tfrac{1}{2} I_m - \tfrac{\alpha}{2} A_x\right) C^{n+1} + C^{n+1}\left(\tfrac{1}{2} I_p - \tfrac{\alpha}{2} A_y\right) = \left(\tfrac{1}{2} I_m + \tfrac{\alpha}{2} A_x\right) C^{n} + C^{n}\left(\tfrac{1}{2} I_p + \tfrac{\alpha}{2} A_y\right), \quad (10)$$

where $I_m$ and $I_p$ are the identity matrices carrying the coefficient $1/2$, and $A_x$ and $A_y$ are the tridiagonal second-difference matrices.

2.4. Sylvester Equation

Recalling Eq. (10), we rewrite the iteration as the matrix equation $AX + XB = Q$, where $A = \tfrac{1}{2} I_m - \tfrac{\alpha}{2} A_x$, $B = \tfrac{1}{2} I_p - \tfrac{\alpha}{2} A_y$, $Q$ is the right-hand side of Eq. (10), and we need to solve for $X = C^{n+1}$. This equation is known as the Sylvester equation. We have $mp$ equations in $mp$ unknowns, but this alone does not guarantee a unique solution. A proven fact is that the equation has a unique solution if and only if the eigenvalues $\alpha_i$ of $A$ and $\beta_j$ of $B$ satisfy $\alpha_i + \beta_j \neq 0$ for all $(i, j)$. A faster method6 to solve this equation rewrites it in the form

$$\left(I_p \otimes A + B^{\mathsf T} \otimes I_m\right) \mathrm{vec}(X) = \mathrm{vec}(Q),$$

where $I_m$ and $I_p$ are identity matrices, $\otimes$ denotes the Kronecker product, and $\mathrm{vec}$ is the vectorization operator; the Sylvester equation can thus be seen as a linear system of dimension $mp \times mp$. Recalling Eq. (10), matrices $A$ and $B$ are diagonally dominant, so their eigenvalues are always larger than zero; both matrices are positive definite and tridiagonal. Therefore, our diffusion equation satisfies the solvability condition of the Sylvester equation, and we can also use sparse matrix algorithms to speed up the performance or reduce the memory usage.
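As an illustration of Sec. 2.4, the sketch below advances one Crank–Nicolson step by solving the Sylvester equation $AX + XB = Q$ with the splitting of Eq. (10). Python's `scipy.linalg.solve_sylvester` (a Bartels–Stewart-type solver) stands in for our in-house routine, and the grid size and $\alpha$ are illustrative.

```python
import numpy as np
from scipy.linalg import solve_sylvester

def second_difference(n):
    """Tridiagonal second-difference matrix (zero-concentration boundaries)."""
    return -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

def cn_step_sylvester(C, alpha):
    """One 2-D Crank-Nicolson time step written as A X + X B = Q (Eq. (10)).

    C     : m-by-p acid concentration at time step n
    alpha : D * dt / dx^2
    """
    m, p = C.shape
    Ax, Ay = second_difference(m), second_difference(p)
    # A and B are tridiagonal and diagonally dominant, hence positive
    # definite, so alpha_i + beta_j never vanishes and the Sylvester
    # equation has a unique solution.
    A = 0.5 * np.eye(m) - 0.5 * alpha * Ax
    B = 0.5 * np.eye(p) - 0.5 * alpha * Ay
    Q = (0.5 * np.eye(m) + 0.5 * alpha * Ax) @ C \
        + C @ (0.5 * np.eye(p) + 0.5 * alpha * Ay)
    return solve_sylvester(A, B, Q)    # concentration at time step n + 1

acid = np.random.rand(64, 64)          # illustrative acid map
acid = cn_step_sylvester(acid, alpha=0.4)
```

In the 3-D formulation of Sec. 2.5, $A$ grows into the block-tridiagonal matrix of Fig. 3, and a sparse representation keeps the memory footprint low.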
2.5. 3-D Formulation with the Sylvester Equation

During the PEB process, the correlation between different layers needs to be considered: there are small variations between the layers caused not only by the exposure process but also by the PEB process itself. First, we cut the photoresist into layers, each discretized into an $m \times p$ grid. Equation (9) can then be written in three dimensions (3-D) as Eq. (11). We combine the computation along the $z$ direction into the $A$ and $X$ matrices while keeping the individual blocks small. As illustrated for the Sylvester equation in Fig. 2, the dimensions of $A$, $B$, $X$, and $C$ adjust accordingly, and the size of matrix $A$ grows with the number of layers. Each block on the main diagonal in Fig. 3 is a matrix of the form in Eq. (10); the upper and lower blocks have the same size as a layer block, and each of these off-diagonal blocks is a diagonal matrix holding the interlayer coupling value. By this definition, the new matrix is still diagonally dominant, hence still positive definite, so we can again solve it with the Sylvester equation. The matrix is symmetric and sparse, so its storage grows only with the number of nonzero diagonals. In a uniform system, where the diffusion coefficient is identical everywhere in the photoresist, the memory usage for the operator can be reduced to a constant.

3. Resolution Enhancement Techniques

With the feature size of devices shrinking and the number of transistors per chip increasing, image simulation on the photoresist faces the challenge of being both fast and accurate. Many resolution enhancement techniques (RETs) have therefore been proposed to improve the image resolution quality and ease manufacturing. In this work, we use SMO and subresolution assist features (SRAFs) based on Abbe-PCA to enhance the latent image quality quickly and accurately.

3.1. Source and Mask Optimization

Most of the RETs mentioned above improve the printing quality individually. There are two directions: one is to optimize the light source, the other is to optimize the mask. SMO is an RET that takes not only the mask but also the source into consideration for an overall optimization. SMO therefore provides a larger solution space and leads to better results than simpler RETs. In this work, we decide the source points in the first quadrant and then map them into the remaining three quadrants; consequently, we generate four lookup tables every time a source point is added to or removed from the first quadrant. Initially, the image system is illuminated by the existing Abbe point sources of a given illuminator. As shown in Fig. 4, during source optimization we first consider the candidate Abbe point sources outside the initial illuminator; every candidate Abbe point source is added if it improves the resulting cost function value. According to the properties of Abbe-PCA mentioned previously, Abbe-PCA kernel compaction can be performed repeatedly after we add or delete any Abbe sources during the source optimization.

The mask optimization we use in this work is a combination of pixel-based optical proximity correction (OPC) methods. For our pixel-model-based OPC process, we track the mask pattern through the iterative mask optimization. As shown in Fig. 5, the initial mask pattern is the target design pattern we expect; candidate external pixels may be added, and existing internal pixels may be deleted. In our pixel-model-based OPC, the order of the search sequence significantly affects the final result and must be decided carefully. In the essence of optical and process proximity, nonideal effects are usually dominated by neighboring features. Therefore, we propose a breadth-first search starting from the edges of the original mask pattern: the pixel-based OPC proceeds gradually from the initial edges toward the inside and the outside, so the optimized mask pixels are mostly chosen near the original mask pattern. As a result, we use conventional model-based OPC first for a rough optimization with large pixels, and then perform pixel-based OPC for a detailed optimization with small pixels on the model-based result, hierarchically, as shown in Fig. 6.
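The following sketch illustrates the edge-outward, breadth-first pixel search described above, assuming a binary pixel mask and a generic `cost` callback (for instance, the EPC-IIE of Sec. 3.6). The 4-neighborhood seeding and the single-pixel flip-and-test acceptance rule are simplifications of our actual flow, not its exact implementation.

```python
import numpy as np
from collections import deque

def pixel_opc(mask, cost, max_iter=1000):
    """Greedy pixel-based OPC: breadth-first search from the pattern edges.

    mask : 2-D 0/1 array, the initial (target) mask pattern
    cost : callable taking a mask and returning a scalar (e.g., EPC-IIE)
    """
    mask = mask.copy()
    best = cost(mask)
    H, W = mask.shape
    nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    # Seed the queue with edge pixels: pixels with a mixed 4-neighborhood.
    queue, seen = deque(), set()
    for y in range(H):
        for x in range(W):
            if any(0 <= y+dy < H and 0 <= x+dx < W
                   and mask[y+dy, x+dx] != mask[y, x] for dy, dx in nbrs):
                queue.append((y, x)); seen.add((y, x))
    for _ in range(max_iter):
        if not queue:
            break
        y, x = queue.popleft()
        mask[y, x] ^= 1                    # try flipping (add or delete pixel)
        trial = cost(mask)
        if trial < best:                   # keep the flip only if cost improves
            best = trial
            for dy, dx in nbrs:            # grow inward/outward from accepted flips
                ny, nx = y + dy, x + dx
                if 0 <= ny < H and 0 <= nx < W and (ny, nx) not in seen:
                    queue.append((ny, nx)); seen.add((ny, nx))
        else:
            mask[y, x] ^= 1                # revert the flip
    return mask
```

Because accepted flips enqueue only their neighbors, the search naturally stays near the original pattern edges, matching the proximity argument above.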
3.2. Subresolution Assist Features

SRAFs are another RET solution. Because of the interference effect, SRAF placement is closely tied to $\lambda/\mathrm{NA}$, where NA is the numerical aperture and $\lambda$ is the wavelength of the light source. The image resolution of an optical system is determined by its smallest lens, also called the exit pupil, whose diameter sets the numerical aperture. SRAFs realized as scattering bars have been proven effective in adjusting the critical dimension of isolated lines to match that of densely packed lines and, simultaneously, in improving the depth of focus of isolated lines to match the dense case. In this work, we place the assist features following the rules in Eqs. (12)–(14), where MaxSrafNum is the number of assist features we can add to every edge, SrafThick is the width of the assist features in our work,7 and SrafDistance is the distance from one assist feature to another assist feature or to the main feature. Based on our calculations and experiments, we fix SrafThick and SrafDistance at constant values.

We generate the assist-feature hit lists before adding the assist features. The hit lists record the other rectangles that may overlap with the assist features of the tested rectangle. Taking the top edge of a tested rectangle as an example, we inspect three blocks that may overlap with the assist features of the tested rectangle. First, we inspect the left block, whose $y$ coordinate runs from HitYLow to YLow and whose $x$ coordinate runs from HitXLow to XLow. Second, we inspect the right block, whose $y$ coordinate runs from HitYLow to YLow and whose $x$ coordinate runs from XHi to HitXHi. Finally, we inspect the central block, whose $y$ coordinate runs from HitYLow to YLow and whose $x$ coordinate runs from XLow to XHi. Here, XLow, XHi, and YLow are the boundaries of the tested rectangle, and HitYLow, HitXHi, and HitXLow are defined in Eqs. (12)–(14).

3.3. Partially Coherent Illumination

To describe partially coherent illumination, the mutual intensity is applied to the image formulation; it can be derived by Fourier transformation of the physical shape of the illuminator. The partial coherence factor $\sigma$ in Eq. (15) is defined as the ratio between the diameters of the illuminator and the exit pupil, which characterizes the optical lens system of the lithography tool.

3.4. Abbe's Image Formulation

For the partially coherent system, there are two approaches to imaging: one is Hopkins' theory8 and the other is Abbe's method. Hopkins' imaging requires the transmission cross coefficient (TCC) matrix of the illuminating pattern with the pupil and its complex conjugate, and generating the TCC matrix demands a large amount of memory. Therefore, we choose Abbe's method to formulate the image system. Abbe's image formulation is based on approximating the illuminator by a sum of discrete point sources. We rewrite Hopkins' formulation, shown in Eq. (16), into a squared integration of convolution results with respect to the physical shape of the illuminator. Equation (18) shows that the image intensity can be calculated by summing the squared magnitude of the light field caused by each Abbe source:

$$I(x, y) = \sum_{s} \left| (h_s \circledast m)(x, y) \right|^2, \quad (18)$$

where $h_s$ is the coherent point-spread function associated with Abbe source $s$ and $m$ is the mask transmission function.

3.5. Abbe-Principal Component Analysis Method

PCA is mathematically defined as an orthonormal linear transformation: it maps the data onto a coordinate system spanned by its eigenvectors, ordered by decreasing eigenvalue. PCA can therefore be used for dimensionality reduction in a data set by retaining the principal components, which carry the largest eigenvalues, and neglecting the nonprincipal ones, while most of the data's characteristics still remain. We modify Eq. (18) into Eq. (19) by weighting each Abbe source with its relative intensity $w_s$. Our goal is to rebuild the coherent systems by using the principal eigenvectors of the image kernel space to approximate the behavior of the sum of coherent systems built by Abbe's method.1 Suppose there are $N_s$ Abbe point sources, each generating a kernel of size $n \times n$; we stack the vectorized kernels into a matrix $K$ of dimension $n^2 \times N_s$. Applying the singular value decomposition, we derive $K = U S V^{\mathsf T}$, where the columns of $U$ are the eigenvectors of the image kernel space, $V$ holds the coefficients of the point-spread-function linear combination, and $S$ contains the singular values. We can then rewrite the image system as Eq. (20). Thus, the PCA transformation $K = U S V^{\mathsf T}$ means that the kernel space $K$ is spanned by a linear combination of the PCA kernels in $U$ with the weights of the singular values in $S$. As shown in Fig. 7, Abbe-PCA reduces the number of kernels from $N_s$ to a much smaller set of principal kernels.
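A minimal sketch of the kernel compaction of Eqs. (19) and (20), assuming the coherent kernel of each Abbe source, pre-weighted by the square root of its relative intensity $w_s$, has been vectorized into a column of $K$. Numpy's SVD stands in for the decomposition, and FFT-based convolution is one illustrative way to evaluate the compacted image.

```python
import numpy as np

def abbe_pca(K, n, keep):
    """Compact N_s Abbe kernels into `keep` principal kernels (Eq. (20)).

    K    : (n*n, N_s) matrix whose column s is the vectorized coherent
           kernel of Abbe source s, weighted by sqrt(w_s)
    n    : kernel side length
    keep : number of principal components retained
    """
    U, S, Vt = np.linalg.svd(K, full_matrices=False)   # K = U S V^T
    kernels = U[:, :keep].T.reshape(keep, n, n)        # principal PCA kernels
    return kernels, S[:keep]

def aerial_image(mask, kernels, sigmas):
    """I = sum_i sigma_i^2 |u_i (*) m|^2: a short sum of coherent systems."""
    mask_f = np.fft.fft2(mask)
    I = np.zeros(mask.shape)
    for u, s in zip(kernels, sigmas):
        field = np.fft.ifft2(mask_f * np.fft.fft2(u, mask.shape))
        I += (s ** 2) * np.abs(field) ** 2
    return I
```

By the orthonormality of the source-side singular vectors, the sum of squared fields over all Abbe sources equals the weighted sum over singular vectors, so truncating to the top few terms approximates Eq. (19) at a fraction of the cost; after any source point is added or deleted during SMO, only the SVD needs to be redone.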
3.6. Error Penalty Coefficient

Traditionally, the cost function used in RETs is measured by the edge placement error (EPE),9 which calculates the difference between the image contour and the expected design pattern. However, EPE is time-consuming, since it demands the full image simulation, and it is hard to handle with a mathematical expression. In this work, we propose another cost function, the EPC-IIE, described in Eq. (21), where the sampling function picks up the image intensity at the edges of the whole design pattern and a constant threshold defines the contour of the constant-threshold image model. EPC-IIE is a generalized edge intensity error with two extra contours and image thresholds: it optimizes not only the edge intensities but also the two adjacent contours with different thresholds. The error penalty coefficient behaves like a piecewise step function, taking a different constant coefficient in every image intensity error range, as illustrated in Fig. 8.
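The sketch below evaluates an EPC-IIE-style cost under the constant-threshold model, simplified to the edge contour only. The band edges and penalty weights are placeholders standing in for the coefficient of Fig. 8, not the values used in Sec. 4.

```python
import numpy as np

def epc_iie(image, edge_pts, th=0.3,
            bands=(0.05, 0.15), weights=(1.0, 4.0, 16.0)):
    """Edge intensity error scaled by a piecewise-constant penalty coefficient.

    image    : simulated resist (or aerial) image
    edge_pts : (N, 2) integer array of sample points on the design edges
    th       : contour threshold of the constant-threshold image model
    bands    : error magnitudes separating the penalty ranges (placeholders)
    weights  : penalty coefficient for each error range (placeholders)
    """
    err = np.abs(image[edge_pts[:, 0], edge_pts[:, 1]] - th)
    # Larger intensity errors fall into bands with larger penalty coefficients.
    coeff = np.select([err < bands[0], err < bands[1]],
                      [weights[0], weights[1]], default=weights[2])
    return float(np.mean(coeff * err ** 2))
```

Because it samples the image only at the edge points, each candidate mask or source change can be judged quickly, without a full-chip contour extraction.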
4. Simulation Results

We construct our system in MATLAB 2010, following the simulation flowchart shown in Fig. 9. Our hardware environment is an Intel Core i7-2600 3.1-GHz CPU with 16 GB of RAM, running Windows 7. We use typical 45-nm immersion lithography10 in our work. We also use up to 20 mesh points for the accuracy of the kernels; namely, the exit pupil in the spectrum domain is a circ function with a radius of up to 10. In addition, we downscale the mask and kernels with a scaling factor of 0.6, so that the simulation resolution is under 1.7 nm, which saves runtime; the diffusion coefficient is taken from Ref. 11. The detailed parameters are listed in Table 1, and the thickness of the photoresist is 100 nm.

Table 1. Optical parameters of the lithography system for 45-nm immersion.

Moreover, in the PEB process, our method demonstrates a substantial speedup over the convolution method for a single time step (2 s), as illustrated in Fig. 11(b). To simulate a 50-s flexible PEB diffusion process over the working area, our method takes only 250 s, compared with a convolution setup for the same area. Completing the resist image from the exposure process takes less than 120 s for each iteration, with a specified iteration cycle. Figure 10 shows the original existing Abbe sources, the candidate Abbe sources, and the final sources after optimizing the design pattern shown in Fig. 11(a). Figure 11(a) shows the original mask, and Fig. 11(g) shows the optimized mask produced by our flow. As shown in Figs. 11(d)–11(f), there are significant differences between the aerial image and the image after PEB: both their image intensities and contour shapes differ.

Figure 12 shows the acid concentration before and after PEB along the z direction, representing the acid concentration at the short edge of the polygon in Fig. 11(b). The scaling factor of the z direction is 0.4 for the photoresist thickness, so the thickness of the photoresist is 100 nm. As shown in Fig. 12(a), we can clearly see the standing wave. The variation of the acid concentration after the PEB is shown in Fig. 12(c); the acid diffuses from regions of higher concentration to regions of lower concentration. We set the image threshold to 0.3 to calculate the EPC-IIE with the acid diffusion effect. By Eq. (21), the latent image is more similar to the original mask when the EPC-IIE is small. The average EPC-IIE over all edge points for each of the implemented RETs is listed in Table 2.

Table 2. The average image intensity error with an error penalty coefficient (EPC-IIE) for all edge points in Fig. 11.
5. Conclusions

In this paper, we considered the acid diffusion effect in our in-house simulator, applying the Sylvester equation to simulate the PEB acid diffusion. When the PEB acid diffusion is simulated with the 3-D Sylvester equation, the simulation runs faster than the convolution method with negligible accuracy loss. Moreover, our results are closer to the real shape obtained after the processes.

References

J. H.-C. Chang, C. C.-P. Chen, and L. S. Melvin III, "Abbe-PCA-SMO: microlithography source and mask optimization based on Abbe-PCA," Proc. SPIE 7640, 764026 (2010). http://dx.doi.org/10.1117/12.846615

C. A. Mack, Fundamental Principles of Optical Lithography: The Science of Microfabrication, John Wiley & Sons, Hong Kong (2007).

E. J. Walker, "Reduction of photoresist standing-wave effects by post-exposure bake," IEEE Trans. Electron Devices 22, 464–466 (1975). http://dx.doi.org/10.1109/T-ED.1975.18162

J. G. Charney, R. Fjørtoft, and J. von Neumann, "Numerical integration of the barotropic vorticity equation," Tellus 2, 237–254 (1950). http://dx.doi.org/10.1111/tus.1950.2.issue-4

J. Crank and P. Nicolson, "A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type," Proc. Camb. Philos. Soc. 43, 207–226 (1947). http://dx.doi.org/10.1017/S0305004100023197

A. Jameson, "Solution of the equation AX + XB = C by inversion of an M×M or N×N matrix," SIAM J. Appl. Math. 16(5), 1020–1023 (1968). http://dx.doi.org/10.1137/0116083

H. H. Hopkins, "The concept of partial coherence in optics," Proc. R. Soc. Lond. A 208(1093), 263–277 (1952). http://dx.doi.org/10.1098/rspa.1951.0158

N. B. Cobb, "Fast optical and process proximity correction algorithms for integrated circuit manufacturing," PhD dissertation, University of California, Berkeley (1998).

A. K. K. Wong, Optical Imaging in Projection Microlithography, SPIE Press, Bellingham, WA (2005).

J.-B. Park et al., "Acid diffusion length corresponding to post exposure bake time and temperature," Jpn. J. Appl. Phys. 46(1), 28–30 (2007). http://dx.doi.org/10.1143/JJAP.46.28
Biography

Pei-Chun Lin is pursuing his PhD degree at National Taiwan University. He received his BS degree in mathematics from National Cheng Kung University in 2010 and his MS degree in electrical engineering from National Taiwan University in 2012. His research interests include next-generation lithography, image processing, design for manufacturing, data processing, and telecare systems.

Chun-Chang Yu is pursuing his PhD degree at National Taiwan University. He received his BS degree in mechanical engineering from National Chung Cheng University in 2004 and his MS degree in electrical engineering from National Cheng Kung University in 2006. He has also worked for Andes Technology.

Charlie Chung-Ping Chen received his BS degree in computer science and information engineering from National Chiao Tung University, Hsinchu, Taiwan, in 1990, and his MS and PhD degrees in computer science from the University of Texas at Austin in 1996 and 1998, respectively. He is a professor in the Department of Electrical Engineering at National Taiwan University, Taiwan. His research interests are in the areas of computer-aided design and microprocessor circuit design, with an emphasis on interconnect and circuit optimization, circuit simulation, design for manufacturing, and signal/power/thermal integrity analysis and optimization.