The trends in layout rules and the effect these rules have on design are described. The slow improvement in resolution has created a need for litho-friendly design (LFD). The problems caused by LFD and the increase in the number and complexity of rules are shown, and issues in implementing LFD are discussed.
In recent years, design for manufacturability (DfM) has become an important focus item for the semiconductor industry, and many new DfM applications have arisen. Most of these applications rely heavily on the ability to model process sensitivity, and here we explore the role of through-process modeling in DfM applications. Several different DfM applications are examined and their lithography model requirements analyzed. The complexities of creating through-process models are then explored, and methods to ensure their accuracy are presented.
The design of integrated circuits (ICs) has been made possible by a simple contract between design and manufacturing: manufacturing teams encapsulated their process capabilities into a set of design rules, such as minimum width, spacing, or overlap for each layer, and designers complied with these design rules to get a manufacturable IC. Since the advent of 130nm technology, however, designers have had to play by the new rules of sub-90nm technologies. The simple design rules have evolved into extremely complex, context-dependent rules, and minimum design rules have been augmented with many levels of yield-driven recommended guidelines. One of the main drivers behind these complex rules is the increase in optical proximity effects, which directly impact systematic and parametric yields for sub-90nm designs. A design's sensitivity to optical proximity effects increases as features get smaller; however, design engineers do not have visibility into the manufacturability of these features.
A genuine design for manufacturing (DFM) solution for designers should provide a fast, easy-to-use and cost-effective way to accurately predict a design's sensitivity to shape variations throughout the design process. It should identify and reduce design sensitivity by predicting and reducing shape variations. The interface between manufacturing and design must provide designers with the right information to let them maximize the manufacturability of their design while shielding them from the effects of resolution enhancement technologies (RET) and manufacturing complexity. This solution should also protect the manufacturing know-how in the case of a fabless foundry flow. Currently, the interface between manufacturing and design relies solely on design rules, which do not provide these capabilities.
A common proposition for giving design engineers the ability to predict shape variation is to move the entire RET/OPC/ORC flow into the hands of the designer. However, this approach has several major practicality issues that make it unfeasible, even as a "service" offered to designers:
1- Cost associated with replicating the flow on the designer's desktop.
2- The ability of designers to understand RET/OPC and make lithographic judgments.
3- Confidentiality of the recipes and lithographic settings, especially when working with a foundry.
4- The level of confidence the fab/foundry side has in accepting the resulting RET/OPC.
5- Runtime and data volume explosion.
6- The logistics of reflecting RET/OPC and manufacturing changes.
7- The ability to tie this capability to EDA optimization tools.
In this paper we present a new technique and methodology that overcomes these hurdles and meets both designer and manufacturing requirements by providing a genuine DFM solution to designers. We outline a new manufacturing-to-design interface that has evolved from rule-based to model-based and gives designers the required visibility into their design's manufacturability. This approach is similar to other EDA approaches that have successfully captured complex behavior by using a formulation at a higher level of abstraction (for example, SPICE for transistor behavior). We present how this unique approach uses the abstracted model to provide very accurate prediction of shape variations and, at the same time, meet the runtime requirements for smooth integration into the design flow at 90nm and below. This DFM technology enables designers to improve their design manufacturability, which reduces RET complexity, reduces mask cost and time to volume, and increases the process window and yield.
The annotation of electrical information or constraints is a well-established method to transfer information on design intent from the electrical to the physical designer. In this paper, we discuss the possibility of extending the concept of annotation as a vehicle to hand over critical information from the physical designer to the resolution enhancement technique (RET) engineer. Opportunities and implications of extending the existing optical proximity correction (OPC) methods from the current stage of "just print the layout on wafer" towards new approaches, in which the layout can be optimized during the RET/OPC step based on the designer's input, are discussed. In addition, the benefit of using process variation information for this layout optimization is compared to a conventional OPC approach that just tries to realize an overlapping process window at one point of the process window. The power of a combination of both approaches is shown, based on a small test case. The target of this work is to motivate further research and development in this direction, enhancing current OPC/RET capabilities towards a more integrated solution that enables annotated layout optimization as a link between design and manufacturing.
Design for Manufacturability (DFM) has become a major semiconductor topic that spans various issues, including those related to lithography hardware limitations and those related to variability. There is, however, an issue that crosses multiple DFM domains: the need to reuse designed Silicon IP blocks or "cores" across various manufacturing processes. Unfortunately, there are no standards to facilitate the reuse of circuit blocks while addressing the lithography- and variability-related issues. Specifically, there is no clear way for a user of a core to evaluate the "manufacturability" of that core for a set of foundry processes. We present a quantitative DFM standard for Silicon IP reuse, which addresses this problem. This work was done in conjunction with VSIA's DFM team.
Lately, "Design for Manufacturability" (DFM) can be found in almost any self-respecting EDA vendor's top-five list of most critical and urgent strategic topics. While the envisioned DFM activities cover a broad spectrum of topics, the exact definition of DFM continues to evade capture [1]. However, it appears self-evident that an important portion of DFM hinges upon the availability of models accurately describing the pattern transfer from the layout to the wafer, here called "pattern transfer models" (PTMs). In combination with a suitable design environment, PTMs will allow physical designers to optimize their layout, thus ensuring structural integrity over the process window upon transfer to the wafer. In this paper, we argue that PTMs have an importance comparable to that of the "electrical device models" (EDMs) widely used for circuit simulation. We point out some striking analogies between PTMs and EDMs as far as the basic concepts and use models are concerned. Furthermore, we highlight the significant differences in the EDA landscapes for the two model types, most importantly the fact that an industry standard exists only for EDMs. Based on the consequences of this situation for EDA vendors and users, as well as for manufacturing collaborations, we formulate the call for an industry standard for PTMs for use in "Optical Proximity Correction" (OPC) and DFM.
In this paper we give a brief overview of a heuristic method for approximately solving a statistical digital circuit sizing problem by reducing it to a related deterministic sizing problem that includes extra margins in each of the gate delays to account for the variation. Since the method is based on solving a deterministic sizing problem, it readily handles large-scale problems. Numerical experiments show that the resulting designs are often substantially better than those in which the variation in delay is ignored, and often quite close to the global optimum. Moreover, the designs seem to be good despite the simplicity of the statistical model (which ignores the shape of the delay distributions, correlations, and so on).
We illustrate the method on a 32-bit Ladner-Fischer adder, with a simple resistor-capacitor (RC) delay model and a Pelgrom model of delay variation.
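To make the margining idea concrete, here is a minimal, self-contained sketch with invented constants rather than the paper's adder benchmark: each gate delay d_i is replaced by d_i + κσ_i, with σ_i following a Pelgrom-style 1/√size law, and the resulting deterministic problem is solved (here by brute-force search over a three-stage chain).

```python
# Minimal sketch of the margining heuristic (constants are illustrative
# assumptions, not values from the paper): replace each gate delay d_i by
# d_i + kappa*sigma_i, then solve the resulting deterministic sizing
# problem -- here by brute-force search over a three-stage chain with a
# toy RC delay model and a Pelgrom-style sigma ~ 1/sqrt(size).
import itertools
import math

TAU, KAPPA, T_MAX = 1.0, 3.0, 9.0        # unit delay, margin factor, delay spec
LOADS = [4.0, 4.0, 4.0]                  # electrical effort (fanout) per stage

def delay(size, load):
    """Toy RC gate delay: parasitic term plus effort divided by drive."""
    return TAU * (1.0 + load / size)

def sigma(size):
    """Pelgrom-style mismatch: standard deviation shrinks as 1/sqrt(area)."""
    return 0.3 * TAU / math.sqrt(size)

def best_sizing(margined):
    """Smallest-area sizing meeting the (optionally margined) delay spec."""
    best = None
    for sizes in itertools.product([1, 2, 4, 8, 16], repeat=3):
        d = sum(delay(s, l) + (KAPPA * sigma(s) if margined else 0.0)
                for s, l in zip(sizes, LOADS))
        if d <= T_MAX and (best is None or sum(sizes) < sum(best)):
            best = sizes
    return best

print("nominal sizing: ", best_sizing(False))   # ignores variation
print("margined sizing:", best_sizing(True))    # upsizes where variation hurts
```

As in the paper, the margined problem upsizes gates where variation hurts most while remaining an ordinary deterministic sizing run.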
In the past, complying with design rules was sufficient to ensure acceptable yields for a design. For sub-100nm designs, however, this approach tends to create patterns that cannot be reliably printed for a given optical setup, leading to hot spots and systematic yield failures. The recent challenges faced by both the design and process communities call for a paradigm shift whereby circuits are constructed from a small set of lithography-friendly patterns that have previously been extensively characterized and ensured to print reliably. In this paper, we describe the use of a regular design fabric for defining the underlying silicon geometries of the circuit. While the direct application of this methodology to the current ASIC design flow would result in unnecessary area and performance overhead, we overcome these penalties via a unique design flow that ensures shape-level regularity by reducing the number of required logic functions as much as possible as part of the top-down design flow. We show that with a small set of Boolean functions and careful selection of lithography-friendly patterns, we not only mitigate but essentially eliminate such penalties. Additionally, we discuss the benefits of using extremely regular designs constructed from a limited set of lithography-friendly patterns not only to improve manufacturability but also to relax the pessimistic constraints defined by design rules. Specifically, we introduce the basis for the use of "pushed rules" for logic design, as is commonly done for SRAM designs. This in turn facilitates a common OPC methodology for logic and SRAM. Moreover, by taking advantage of this newfound manufacturability and predictability of regular circuits, we show that the performance of logic built upon regular fabrics can surpass that of seemingly more arbitrarily constructed logic.
A methodology for layout verification and optimization based on flexible design rules is provided. This methodology is based on image-parameter-determined flexible design rules (FDRs), in contrast with restrictive design rules (RDRs), and enables fine-grained optimization of designs in the yield-performance space. Conventional design rules are developed based on experimental data obtained from the design, fabrication and measurement of a set of test structures. They are generated at an early stage of process development and used as guidelines for later IC layouts. These design rules (DRs) serve to guarantee a high functional yield of the fabricated design. Since small areas are preferred in integrated circuit designs due to their corresponding higher speed and lower cost, most design rules focus on minimum resolvable dimensions.
Focus is one of the major sources of linewidth variation. CD variation caused by defocus is largely systematic after the layout is finished. In particular, dense lines "smile" through focus while isolated lines "frown" in typical Bossung plots. This well-defined systematic behavior of focus-dependent CD variation allows us to develop a self-compensating design methodology.
In this work, we propose a novel design methodology that allows explicit compensation of focus-dependent CD variation, either within a cell (self-compensated cells) or across cells in a critical path (self-compensated design). By creating iso and dense variants for each library cell, we can achieve designs that are more robust to focus variation. Optimization with a mixture of iso and dense cell variants is possible both for area and leakage power, with the latter providing an interesting complement to existing leakage reduction techniques such as dual-Vth. We implement both heuristic and Mixed-Integer Linear Programming (MILP) solution methods to address this optimization, and experimentally compare their results. Our results indicate that designing with a self-compensated cell library incurs ~12% area penalty and ~6% leakage increase over original layouts while compensating for focus-dependent CD variation (i.e., the design meets timing constraints across a large range of focus variation). We observe ~27% area penalty and ~7% leakage increase at the worst-case defocus condition using only single-pitch cells. The area penalty of circuits after using either the heuristic or MILP optimization approach is reduced to ~3% while maintaining timing. We also apply our optimizations to leakage, which traditionally shows very large variability due to its exponential relationship with gate CD. We conclude that a mixed iso/dense library combined with a sensitivity-based optimization approach yields much better area/timing/leakage tradeoffs than using a self-compensated cell library alone. Self-compensated design shows an average of 25% leakage reduction at the worst defocus condition for the benchmark designs that we have studied.
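As a concrete illustration of the self-compensation idea (with invented delay and sensitivity numbers, not the paper's library data): dense cells "smile" and iso cells "frown" through focus, so their delay sensitivities to defocus carry opposite signs, and a simple greedy pass can pick the variant at each stage of a path that drives the accumulated sensitivity toward zero. The paper's heuristic and MILP formulations optimize area and leakage under the same principle.

```python
# Illustrative sketch of self-compensation with a mixed iso/dense library.
# The (delay, dDelay/dFocus) numbers below are invented: dense cells "smile"
# (positive sensitivity here) and iso cells "frown" (negative), so a greedy
# pass can pick variants that cancel the path's net focus sensitivity.
VARIANTS = {"dense": (10.0, +4.0), "iso": (11.0, -5.0)}   # (ps, ps/um defocus)

def self_compensate(n_stages):
    picks, path_sens = [], 0.0
    for _ in range(n_stages):
        # choose the variant whose sensitivity pulls the running sum toward 0
        v = min(VARIANTS, key=lambda k: abs(path_sens + VARIANTS[k][1]))
        path_sens += VARIANTS[v][1]
        picks.append(v)
    delay = sum(VARIANTS[v][0] for v in picks)
    return picks, delay, path_sens

picks, delay, sens = self_compensate(8)
print(picks)                                   # alternating iso/dense mix
print(f"nominal delay {delay:.0f} ps, residual sensitivity {sens:+.0f} ps/um")
```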
With the advent of advanced process technologies such as 90 nm and below, design rules have become far more complicated than before. These complicated design rules can guarantee process margin for most layout environments. However, some layouts that are still within the design rules have narrow process windows. For example, line-end layouts in a dense environment generally have a narrower process window than the one-dimensional (1-D) dense-line environment, and the dense line-end spacing design rule must be larger than the 1-D dense-line spacing to compensate for this narrow-window effect. In this work, optical simulation software was used to examine the pre-OPC layout of an existing 90-nm FPGA product for its optical contrast, which correlates with the depth-of-focus (DOF) process window. Several back-end locations were identified as having possibly narrow DOF windows. From the evaluations of these low-contrast patterns, several design-for-manufacturing (DFM) rules and a DRC deck were developed. This deck effectively identified the narrow-process-window layout locations previously found with the simulation software. These locations were then optimized for improved DOF windows. Both simulation and in-line data showed that the DOF window improved after the layout optimization, and product data with optimized layouts also showed improved yield.
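A common contrast metric behind such checks is the normalized image log slope (NILS) at the target edge, a standard proxy for exposure latitude and DOF. The sketch below computes it for a synthetic sinusoidal aerial image standing in for the simulator output; the pitch, CD, modulation, and flagging threshold are all illustrative assumptions.

```python
# Sketch of a contrast check: the normalized image log slope (NILS) at the
# target edge, NILS = CD * d(ln I)/dx, is a standard proxy for exposure
# latitude and DOF.  The sinusoidal aerial image, pitch, CD, modulation,
# and flagging threshold are all illustrative assumptions.
import math

PITCH, CD = 260.0, 130.0        # nm, a dense line/space case
MODULATION = 0.6                # assumed image contrast

def intensity(x):
    """Idealized aerial image of a line/space grating."""
    return 0.5 * (1.0 + MODULATION * math.cos(2.0 * math.pi * x / PITCH))

def nils(x_edge, dx=0.1):
    d_i = (intensity(x_edge + dx) - intensity(x_edge - dx)) / (2.0 * dx)
    return abs(d_i / intensity(x_edge)) * CD

edge = CD / 2.0                 # nominal line-edge position
print(f"NILS at edge: {nils(edge):.2f} (flag as a hot spot below ~1.5)")
```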
SRAM stability has been an important topic for the high-performance microprocessor industry. There are several reasons why SRAMs are especially susceptible to both process-induced variations and electrical parameter variability: because cache cells use devices with minimum gate lengths and widths, process variations are more severe, and sense amplifiers employ matched transistor pairs that are very sensitive to any process variation. This paper focuses on the patterning accuracy of minimum cell devices and of transistors that are meant to be matched. We used and correlated inline CD data, electrical data and lithographic simulations to measure the patterning fidelity of matched pairs. A small cache with failing matched pairs was chosen for the inline CD measurements. The measurements were done on wafers exposed on several scanners to identify their impact on matched pairs. Electrical measurements on specially designed addressable structures were done to verify the inline data. We analyzed the effect of dummy poly and varying line pitches, as well as the impact of active width, on matched-pair performance. Based on simulations, a sensitivity analysis of the analyzed layout portion to individual Zernike terms was done. Simulation results are compared with experimental data, and conclusions for the future design of matched transistor pairs and for scanner lens specifications are given.
A Process/Device/Design framework called the Parametric Yield Simulator is proposed for predicting circuit variability based on circuit design and a set of characterized sources of variation. In this simulator, the aerial image of a layout is simulated across a predefined process window, and the resulting non-idealities in geometrical features are passed through to circuit simulators, where circuit robustness and yield can be evaluated in terms of leakage and delay variability. The purpose of this simulator is to identify problem areas in a layout and quantify them in terms of delay and leakage, in a manner that lets designers and process engineers collaborate on an optimal solution to the problem. The Parametric Yield Simulator will serve as a launch pad for collaborative efforts between groups in different disciplines that are looking at variability and yield. Universities such as Berkeley offer a great advantage in exploring innovative approaches, as different centers of key expertise exist under one roof. For example, a complementary set of characterization and validation experiments has also been designed and, in a collaborative study, is being executed at Cypress Semiconductor on a 65nm NMOS process flow. This unique opportunity of having access to a cutting-edge process flow with relatively high transparency has led to a new set of experiments with contributions from six different students in circuit design, process engineering, and device physics. Collaborative efforts with the device group have also led to a new electrical linewidth metrology methodology using enhanced transistors that could prove useful for process characterization.
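The litho-to-electrical chain the simulator implements can be caricatured in a few lines. The sketch below sweeps a (dose, focus) grid, produces a printed gate CD from a toy Bossung-style response, and maps CD to delay and leakage with first-order device expressions; every coefficient is an illustrative stand-in for the calibrated models such a framework would use.

```python
# Toy end-to-end pass: sweep a (dose, focus) grid, get a printed gate CD
# from a Bossung-like response, then map CD to delay and leakage with
# first-order device expressions.  Every coefficient is an illustrative
# stand-in for calibrated litho and device models.
import math

CD0 = 65.0                                        # nm, target gate length

def printed_cd(dose, focus):
    """Quadratic Bossung-style response; focus in um, dose normalized."""
    return CD0 * (1.0 / dose) * (1.0 + 0.008 * (focus / 0.1) ** 2)

def delay_ps(cd):
    """Delay roughly proportional to gate length (velocity saturation)."""
    return 10.0 * cd / CD0

def leakage(cd):
    """Leakage exponential in gate length, normalized to the target CD."""
    return math.exp(-(cd - CD0) / 4.0)

cds = [printed_cd(d, f) for d in (0.95, 1.0, 1.05) for f in (-0.1, 0.0, 0.1)]
print(f"CD range:      {min(cds):.1f} .. {max(cds):.1f} nm")
print(f"delay range:   {delay_ps(min(cds)):.2f} .. {delay_ps(max(cds)):.2f} ps")
print(f"leakage range: {leakage(max(cds)):.2f} .. {leakage(min(cds)):.2f} (norm.)")
```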
As technology nodes advance, we are forced to use lithography tools of relatively low resolution, and this situation degrades pattern fidelity. Hot spots, i.e., spots with no lithographic margin, appear frequently under conventional design rule methodology. We propose two design rule methodologies to manage hot spot appearance at the physical pattern determination stage. The first is a restricted design rule, under which pattern variation is very limited, so hot spot generation can be fully controlled. The second is a complex design rule combined with lithography compliance check (LCC) and hot spot fixing (HSF). A design rule, by itself, has a limited ability to reduce hot spot generation. To compensate, both LCC, which includes optical proximity correction and process simulation for detecting hot spots, and HSF, for fixing the detected hot spots, are required. By implementing these methodologies in the design environment, hot spot management can be done at an early stage of physical pattern determination. A newly developed tool is also introduced to help designers fix hot spots easily. Using this tool, a system of automatic LCC and HSF has been constructed; hot-spot-free physical patterns can be easily obtained through this system, and turn-back from manufacturing to design can be avoided.
We propose a design-friendly DFM rule intended to improve circuit performance. To reduce variations in the gate length, we applied active usage of preferred gate spaces and optimized the lithographic conditions. We selected the spaces to take into account the layouts that are used most frequently in actual design, so that designers concerned about chip area and performance can still follow the rule. The effect of our method was evaluated for 65-nm node technology. From the viewpoint of gate length, parallel usage of rule-following design and optimization led to an 8% decrease in variation and a 38% decrease in the mean difference from the targeted gate length. We also evaluated the effect on delays using an accurate method that can treat both statistical and systematic variation. The difference in the average delay from the targeted value was reduced from about 1% to less than 0.1%, and a 10% improvement in delay variation was observed.
A new design for manufacturability (DfM) scheme with a lithography compliance check (LCC) and hot spot fixing (HSF) flow has been developed to guarantee design compliance for OPC and RET by combining a lithography simulator, a hot spot detector and a layout modification tool. Hot spots highlighted by the LCC flow are removed by the HSF flow following modification rules consisting of "Line-Sizing" (LS) and "Space-Sizing" (SS), which are resize values of line width and space width for the original pattern. To meet layout modification requirements at the pre- and post-tape-out (T.O.) stages, priorities are individually set for the modification rules and the design rules, which provides the flexibility to achieve the modification scheme desirable at each stage. For handling large data at high speed, Layout Analyzer (LA) and Layout Optimizer (LO) engines were combined with the HSF flow. LA is used to reconstruct the original hierarchy structure; it clips small parts of the layout that include hot spots out of the original layout and sends them to LO in order to reduce computational time and resources. LO then optimizes the clipped layout following the prioritized modification and design rules. The new DfM scheme was found to be quite effective for hot spot cleaning at the 65nm node and beyond: the HSF flow improved the lithography margin for the metal layer of 65nm-node full-chip data by reducing the number of hot spots to below 0.1% of the original within about 12 hours, using one CPU of a commercially available workstation.
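The essence of the prioritized LS/SS repair step can be sketched as follows; the rule set, priorities, and design-rule floors are invented for illustration, whereas the real flow applies such rules to layout windows clipped out by the Layout Analyzer.

```python
# Sketch of the prioritized Line-Sizing / Space-Sizing repair step.  The
# rule set, priorities, and design-rule floors are invented; the real flow
# applies such rules to layout windows clipped out by the Layout Analyzer.
DR_MIN_WIDTH, DR_MIN_SPACE = 90, 90            # nm, assumed design-rule floors

# modification rules in priority order (highest first): (name, d_width, d_space)
RULES = [("LS+4", 4, 0), ("SS+4", 0, 4), ("LS+2", 2, 0), ("SS+2", 0, 2)]

def fix_hot_spot(width, space):
    """Apply the highest-priority rule that keeps the result DRC-clean."""
    for name, dw, ds in RULES:
        new_w = width + dw
        new_s = space - dw + ds                # widening a line eats its space
        if new_w >= DR_MIN_WIDTH and new_s >= DR_MIN_SPACE:
            return name, new_w, new_s
    return None, width, space                  # no legal fix: defer to manual repair

for w, s in [(100, 120), (100, 92)]:           # (line, space) hot spots in nm
    print((w, s), "->", fix_hot_spot(w, s))
```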
As minimum groundrules for chip manufacturing continue to shrink, the lithography process is pushed further and further into the low-k1 domain. One of the key characteristics of low-k1 lithography is that process variations are increasingly difficult to manage and the resulting CD variations are significant relative to the nominal dimensions. As a result, it is quite common for process engineers to define process budgets, mostly dose and focus budgets. These budgets summarize the effects of various exposure and process contributors and provide the range within which the process is expected to fluctuate. An important task of process design is to ensure that within these budgets no catastrophic patterning failures occur, and, even more importantly, that the CD variations remain within the allowed design tolerances. Various techniques have been developed to reduce the sensitivity of the lithography process to process variations; among the more prominent and quite widely adopted are subresolution enhancements. Traditionally, subresolution assist features are placed in the design using rules-based approaches. This work presents a model-based approach to assist feature placement, in which assist features are placed such that the resulting mask exhibits the minimum sensitivity to the specific process variations encountered. The type of process variation may be defined by the user as a series of worst-case conditions, for example in dose and focus; the technique, however, is general enough to allow a variety of process variations to be included. This work focuses on demonstrating the key concept and showing its validity. The approach demonstrated here is fully integrated with the process budget concept and therefore allows a "process-aware" mask optimization.
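A toy version of the model-based placement loop is sketched below: candidate assist-feature positions are swept, and the one whose simulated CD varies least over the user-specified dose/focus budget box is kept. The quadratic "response surface" stands in for real lithography simulation, and every number is an assumption.

```python
# Toy version of model-based assist feature placement: sweep candidate SRAF
# offsets and keep the one whose simulated CD varies least over the process
# budget box.  The quadratic "response surface" stands in for a real litho
# simulator, and every number here is an assumption.
FOCUS_CORNERS = (-0.15, 0.0, 0.15)             # um, assumed focus budget
DOSE_CORNERS = (0.97, 1.00, 1.03)              # assumed dose budget

def simulated_cd(sraf_offset, focus, dose):
    """Invented response: Bossung curvature flattens as the SRAF moves in."""
    curvature = 120.0 - 0.9 * sraf_offset      # nm per um^2 of defocus
    return (65.0 + curvature * focus ** 2) / dose

def cd_range(offset):
    """CD spread over all corners of the dose/focus budget."""
    cds = [simulated_cd(offset, f, d)
           for f in FOCUS_CORNERS for d in DOSE_CORNERS]
    return max(cds) - min(cds)

best = min(range(0, 140, 10), key=cd_range)    # candidate offsets in nm
print(f"best SRAF offset: {best} nm, CD range {cd_range(best):.2f} nm")
```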
As semiconductor manufacturing marches towards increasingly aggressive process nodes, the features that can be manufactured on a silicon wafer are becoming more and more constrained. These constraints arise from the need for manufacturing process margin, the result of which is improved yields and wafer throughput. For less aggressive process nodes, these constraints have been transferred between the design and manufacturing communities using tables of design rules. However, as process nodes march forward, these rules are becoming complex and unmanageable. A better methodology for communicating design rules is to build a model of the manufacturing process for use by the design team. This model can then be used to analyze a piece of layout for manufacturing robustness and allow the designer to make informed layout revisions. Design rules encompass effects due to many manufacturing processes, including exposure, registration, etch, reticle construction, electromigration, etc. In order to create useful design rules, all of these processes must be understood and combined into a set of process rules. In order to reduce the complexity of the design rules table, a process model may be applied in complex pattern configurations. This study seeks to understand the definition of complex configurations for photolithography design rules, and it attempts to demonstrate the usefulness of model-based design rules.
The need for accurate quantification of all aspects of design for manufacturability, using a mutually compatible set of quality metrics and units of measure, is reiterated and experimentally verified. A methodology to quantify the lithography component of manufacturability is proposed and its feasibility demonstrated. Three stages of lithography manufacturability assessment are described: process window analysis on realistic integrated circuits following layout manipulations for resolution enhancement and the application of optical proximity correction; failure sensitivity analysis on simulated achievable dimensional bounds (a.k.a. variability bands); and yield risk analysis on iso-probability bands. The importance and feasibility of this technique are demonstrated by quantifying the lithography manufacturability impact of redundant contact insertion and critical area optimization in units that can be used to drive an overall layout optimization. The need for extensive experimental calibration and improved simulation accuracy is also highlighted.
Design for Manufacturing (DFM) is widely accepted as one of the keywords in cutting-edge lithography and OPC technologies. Although DFM stems from designers' intentions to consider manufacturability and ultimately improve yield, it must first be well understood by lithographers, who have the responsibility of reliably printing a given design on a wafer. The current lithographer's understanding of DFM can be thought of as process-worthy design, and the requirements set forth from this understanding need to be well defined for the designer and fed forward as a necessary condition for a robust design. Provided that these rules are followed, a robust and process-worthy design can be achieved as a result of such a win-win feed-forward strategy. In this paper, we discuss a method for fully analyzing a given design and determining whether it is process worthy, in other words, DFM-worthy or not. Mask Error Enhancement Factor (MEEF), Through-Focus MEEF (TF-MEEF) and Mean-To-Target (MTT) values for an initial tentative design provide good metrics for obtaining a robust and process-worthy design. Two remedies can be chosen as DFM solutions according to the analysis results: modify the original design, or manipulate the layout within the design tolerance during OPC. We discuss how to visualize the analyzed results for robust and process-worthy OPC with some relevant examples. In our discussions, however, we assume that a robust model is being used for each design verification, and that a model derived with more physical parameters correlates better to real exposure behavior. DFM can then be viewed as flattening the TF-MEEF across the design.
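MEEF itself is simple arithmetic: perturb the mask CD (quoted at wafer scale) and measure the induced wafer-CD change; TF-MEEF repeats the calculation at defocus. In the sketch below, the transfer curve is an invented compressive polynomial, not a calibrated model.

```python
# MEEF by finite difference: perturb the mask CD (quoted at wafer scale)
# and measure the induced wafer-CD change.  TF-MEEF would repeat this at
# defocus.  The transfer curve is an invented polynomial, not a calibrated
# lithography model.
def wafer_cd(mask_cd):
    """Toy nonlinear mask-to-wafer transfer curve, CDs in nm."""
    return 65.0 + 1.8 * (mask_cd - 65.0) + 0.04 * (mask_cd - 65.0) ** 2

def meef(mask_cd, d=0.5):
    return (wafer_cd(mask_cd + d) - wafer_cd(mask_cd - d)) / (2.0 * d)

for m in (60.0, 65.0, 70.0):
    print(f"mask CD {m:.0f} nm -> MEEF {meef(m):.2f}")
```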
With the nominal gate length at the 65 nm node being only 35 nm, controlling the critical dimension (CD) in polysilicon to within a few nanometers is essential to achieve a competitive power-to-performance ratio. Gate linewidths must be controlled, not only at the chip level so that the chip performs as the circuit designers and device engineers had intended, but also at the wafer level so that more chips with the optimum power-to-performance ratio are manufactured. Achieving tight across-chip linewidth variation (ACLV) and chip mean variation (CMV) is possible only if the mask-making, lithography, and etching processes are all controlled to very tight specifications.
This paper identifies the various ACLV and CMV components, describes their root causes, and discusses a methodology to quantify them. For example, the site-to-site ACLV component is divided into systematic and random sub-components; the systematic component of the variation is attributed in part to pattern density variation across the field and to variation in exposure dose across the slit. The paper demonstrates our team's success in achieving the tight gate CD tolerances required for 65 nm technology. Certain key challenges faced, and the methods employed to overcome them, are described; for instance, the use of dose-compensation strategies to correct the small but systematic CD variations measured across the wafer. Finally, the impact of immersion lithography on both ACLV and CMV is briefly discussed.
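One way to perform the systematic/random split described above is to measure the same intra-field sites across many fields, take the per-site means as the systematic across-field signature, and treat the residuals as random. The sketch below does exactly this on synthetic CD data; all numbers are stand-ins for inline measurements.

```python
# Systematic/random split of site-to-site ACLV on synthetic data: per-site
# means across fields give the systematic across-field signature; the
# residuals give the random component.  All numbers stand in for inline
# CD measurements.
import math
import random
import statistics

random.seed(0)
SITES, FIELDS = 9, 25
true_sys = [1.5 * math.sin(2.0 * math.pi * i / SITES) for i in range(SITES)]
cd = [[35.0 + true_sys[i] + random.gauss(0.0, 0.8) for i in range(SITES)]
      for _ in range(FIELDS)]                     # cd[field][site], nm

site_mean = [statistics.mean(cd[f][i] for f in range(FIELDS))
             for i in range(SITES)]
residuals = [cd[f][i] - site_mean[i]
             for f in range(FIELDS) for i in range(SITES)]

print(f"systematic 3-sigma: {3 * statistics.pstdev(site_mean):.2f} nm")
print(f"random     3-sigma: {3 * statistics.pstdev(residuals):.2f} nm")
```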
We have constructed a hot spot management flow for LSI manufacturing in the ultra-low-k1 lithography era. This flow involves three main management steps: hot spot reduction, hot spot extraction and hot spot monitoring. Hot spot reduction relies on lithography-friendly restricted design rules (RDR) and a manufacturability check (MC). Hot spot extraction is carried out with attention to short turn-around time (TAT), accurate extraction and convenient functions such as hot spot extraction for inter-layer interactions. Hot spot monitoring is achieved with tolerance-based verification in the mask fabrication process and the wafer process (lithography and etching). These technology elements were integrated into the actual LSI fabrication flow. Applying this concept to LSI manufacturing can contribute to reduced total cost, quick TAT and a fast ramp to volume production.
The continued downscaling of feature sizes and pitches with each new process generation increases the challenge of obtaining sufficient process control. As the dimensions approach the limits of lithographic capabilities, new solutions for improving printability are required. Including the design in the optimization process significantly improves printability, and the use of litho-driven designs becomes increasingly important towards the 45 nm node. The litho-driven design is applied to the active, gate, contact and metal layers. It has been shown previously that the impact on the chip area is negligible, and simulations have indicated a significant improvement in controlling the critical dimensions of the gate layer. In this paper, we present our first results of an experimental validation of litho-driven designs printed on an immersion scanner. In our design we use a fixed-pitch approach that allows us to match the illumination conditions to those for the memory structures. The impact on the chip area and on CD control is discussed. The resulting improvement in CD control is demonstrated experimentally by comparing litho-driven and standard designs, and a comparison with simulations is presented.
Non-rectangular transistors in today's advanced processes pose a potential problem between manufacturing and design, as today's compact transistor models have only one length and one width parameter to describe the gate dimensions. The transistor model is the critical link between manufacturing and design and needs to account for across-gate CD variation as corner rounding, along with other 2D proximity effects, becomes more pronounced. This is a difficult problem because threshold voltage and leakage current have a very complex non-linear relationship with gate length. There have been efforts to model non-rectangular gates as transistors in parallel, but this approach suffers from the lack of accurate models for "slice transistors", which could potentially necessitate new circuit simulators with new sets of complex equations. This paper proposes a new approach that approximates a non-rectangular transistor with an equivalent rectangular transistor and hence does not require a new transistor model or significant changes to circuit simulators. Effective length extraction consists of breaking a non-rectangular transistor into rectangular slices and then taking a weighted average based on simulated slice currents in HSPICE. As long as a different effective length is used for delay and for static power analysis, simulation results show that the equivalent rectangular transistor behaves the same as the non-rectangular one.
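The slicing arithmetic is easy to sketch. Below, a toy ~1/L on-current expression stands in for the HSPICE slice simulations, and the slice lengths are invented to mimic corner rounding at the gate ends; L_eff is the length at which a uniform rectangular device of the same total width matches the summed slice current. A leakage-oriented L_eff would repeat the calculation with an exponential-in-L off-current model, which is why separate effective lengths are used for delay and static power.

```python
# Slicing arithmetic for an equivalent rectangular gate.  The ~1/L on-
# current stands in for the paper's HSPICE slice simulations, and the
# slice lengths are invented to mimic corner rounding at the gate ends.
W_SLICE = 5.0                                   # nm, slice width along the gate
slice_lengths = [58, 61, 64, 65, 65, 65, 65, 64, 61, 58]   # nm per slice

def i_on(length_nm):
    """Toy on-current per unit width, roughly ~1/L (velocity saturation)."""
    return 1.0 / length_nm

i_total = sum(W_SLICE * i_on(l) for l in slice_lengths)
width = W_SLICE * len(slice_lengths)

# pick L_eff so a uniform device of the same width carries the same current:
# width * i_on(L_eff) = i_total  =>  L_eff = width / i_total for i_on = 1/L
l_eff = width / i_total
print(f"L_eff = {l_eff:.2f} nm (drawn 65 nm)")  # shorter: rounded ends conduct more
```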
The latest improvements in process-aware lithography modeling have resulted in improved simulation accuracy through the dose and focus process window. This, coupled with advancements in high-speed, full-chip grid-based simulation, provides a powerful combination for accurate process window simulation. At the 65nm node, gate CD control becomes ever more critical, so understanding the amount of CD variation through the full process window is crucial. This paper uses the aforementioned simulation capability to assess the impact of process variation on ACLV (across-chip linewidth variation) and critical failures at the 65nm node. The impact of focus, exposure, and misalignment errors in manufacturing is explored to quantify both CD control and catastrophic printing failure. Good correlation between predicted and experimental results is shown.
In the last two years, the semiconductor industry has recognized the critical importance of verification for optical proximity correction (OPC) and reticle/resolution enhancement technology (RET). Consequently, RET verification usage has increased and improved dramatically. These changes are due to the arrival of new verification tools, new companies, new requirements and new awareness by product groups of the necessity of RET verification. As the 65nm device generation comes into full production and the 45nm generation starts full development, companies now have the tools and experience (i.e., long lists of previous errors to avoid) needed to perform a detailed analysis of what is required for 45nm and 65nm RET verification. In previous work [1] we performed a theoretical analysis of OPC and RET verification requirements for the 65nm and 45nm device generations and drew conclusions for the ideal verification strategy. In this paper, we extend the previous work to include actual observed verification issues and experimental results. We analyze the historical experimental issues with regard to cause, impact and optimum verification detection strategy. The results of this experimental analysis are compared to the theoretical results, with differences and agreement noted. Finally, we use the theoretical and experimental results to propose an optimized RET verification strategy that meets the user requirements of 45nm development and the differing requirements of 65nm volume production.
In this work we present a predictive model for the edge placement error (EPE) distribution of devices in standard library cells, based on lithography simulations of selected test patterns. Poly-silicon linewidth variation in the sub-100nm technology nodes is a major source of transistor performance variation (e.g., Ion and Ioff) and of circuit parametric yield loss. It has been reported that a significant part of the observed variation is systematically impacted by the neighboring layout pattern within optical proximity. Design optimization should account for this variation in order to maximize the performance and manufacturability of chip designs. We focus our analysis on standard library cells. In the past, EPE characterization was done on simple line-array structures; real circuit contexts, however, are much more complex, and standard library cells offer a good balance between usability by designers and modeling complexity. We first construct a set of canonical test structures and perform lithography simulations using various OPC parameters and under various focus and exposure conditions. We then analyze the simulated printed image and capture the layout-dependent characteristics of the EPE distribution. Subsequently, our model estimates the EPE distribution of library cells based on their layout. In contrast to a straightforward simulation of the library cells themselves, this approach is computationally less expensive. In addition, the model can be used to predict the EPE distribution of any library cell, not just those that were simulated. Also, since the model encapsulates the details of lithography, it is easier for designers to integrate into the design flow.
Today's design flows sign off performance and power prior to the application of resolution enhancement techniques (RETs). Together with process variations, RETs can lead to a substantial difference between post-layout and on-silicon performance and power. Lithography simulation enables estimation of on-silicon feature sizes at different process conditions. However, current lithography simulation tools are completely shape-based and not connected to the design in any way. This prevents designers from estimating on-silicon performance and power, and consequently most chips are designed for pessimistic worst cases. In this paper we present a novel methodology that uses the result of lithography simulation to estimate the performance and power of a design using standard device- and chip-level analysis tools. The key challenge addressed by our methodology is to transform shapes generated by lithography simulation into a form that is acceptable to standard analysis tools such that electrical properties are preserved. Our approach is sufficiently fast to be run full-chip on all layers of a large design. We observe that while the difference in power and performance estimates at post-layout and on-silicon is small at ideal process conditions, it increases substantially at non-ideal process conditions. With our RET recipes, linewidths tend to decrease with defocus for most patterns. According to the proposed analyses of layouts litho-simulated at 100nm defocus, leakage increases by up to 68%, setup time improves by up to 14%, and dynamic power reduces by up to 2%.
Current ORC and LRC tools are not connected to design in any way; they are purely shape-based. A wafer-shape-based power and performance sign-off is desirable for RET validation as well as for "closest-to-silicon" analysis. The printed images generated by lithography simulation are not restricted to simple rectilinear geometries, and there are other sources of such irregularities, such as line edge roughness (LER). For instance, the silicon image of a transistor may not be the perfect rectangle assumed by all current circuit analysis tools, and existing tools and device models cannot handle complicated non-rectilinear geometries.
In this paper, we present a novel technique to model non-uniform, non-rectilinear gates as equivalent perfect rectangle gates so that they can be analyzed by SPICE-like circuit analysis tools. The effect of threshold voltage variation along the width of the device is shown to be significant and is modeled accurately. Taking this effect into account, we find the current density at every point along the device and integrate it to obtain the total current. The current thus calculated is used to obtain the effective length for the equivalent rectangular device. We show that this method is much more accurate than previously proposed approaches which neglect the location dependence of the threshold voltage.
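In compact form (our notation, a restatement rather than the paper's exact equations), the construction is:

```latex
% Integrate the local current density J along the width, with both L and
% V_th functions of position y, then choose L_eff so a uniform rectangular
% device matches the total current.
\[
  I_{\mathrm{total}} = \int_{0}^{W} J\bigl(L(y),\, V_{th}(y)\bigr)\, dy ,
  \qquad
  I_{\mathrm{rect}}\bigl(W, L_{\mathrm{eff}}\bigr) = I_{\mathrm{total}}
  \;\Longrightarrow\;
  L_{\mathrm{eff}} = I_{\mathrm{rect}}^{-1}\bigl(I_{\mathrm{total}}\bigr).
\]
```

The location dependence of the threshold voltage enters through J, which is what distinguishes this from a plain geometric average of slice lengths.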
In this paper, we demonstrate a novel approach to improve process window prediction capability. The new method, the Lithography Manufacturability Check (LMC), is shown to be capable of predicting wafer-level CDs across an entire chip, and the lithography process window, with a CD accuracy of better than 10nm. The impact of reticle CD error on the weak points is also discussed. The advantages of LMC for full-chip process window analysis, as well as of the MEEF check for catching process weak points, are shown, and the application to real designs is demonstrated. The LMC and MEEF checks are based on a new lithography model referred to as the Focus Exposure Matrix Model (FEM Model). Using this approach, a single model capable of simulating a complete range of focus and exposure conditions can be generated with minimal effort. Such models are shown to achieve a predictive error of less than 5nm for device patterns at nominal conditions and less than 10nm across the entire range of process conditions that define the nominal process window. Based on the inspection results of the full-chip LMC check, we identify process weak points (with limited process window or excessive sensitivity to mask error) and provide feedback to the front-end design stage for pattern correction, to maximize the overall process window and increase production manufacturability. The performance and full functionality of LMC are also described.
Lithography Rule Check (LRC) has become a necessary post-OPC procedure at 0.15μm LV and below technology nodes in order to guarantee mask layout correctness. LRC uses a process model to simulate the mask pattern and compare its performance to the desired layout; when the results are out of specified tolerances, LRC generates error flags as weak points to trigger further checks. This paper introduces LRC to detect weak points even in circuit layouts where no OPC is employed, such as 0.18μm to 0.15μm processes. LRC is all the more important for a semiconductor foundry, since the diverse design layouts and shrinks in production raise the possibility of problematic structures reaching the reticle. In this work, LRC was added as a necessary step in the tape-out procedure for sub-0.18μm process nodes. LRC detected weak points such as low- or excessive-contrast sites, high-MEEF areas and small-process-window features, and the layout was then modified according to the check results. Our work showed that some mask-related potential problems can be avoided by LRC even in a non-model-based-OPC process, thereby guaranteeing improved product yield.
Design For Manufacturability (DFM) has emerged as a major driver as the semiconductor industry continues on its historic scaling trend. The International Technology Roadmap for Semiconductors (ITRS) Design Group has engaged in a major overhaul of the Design Technology Roadmap, including a completely new section focused on DFM. As part of that overhaul, it was observed that quantifying and road-mapping DFM requires effective yet simple models that can relate broad technology characteristics to specific circuit performances such as delay and power. In this article, we discuss the general topic of DFM roadmaps, and show a simple performance model built upon a canonical circuit and analytical solution that is parameterized such that it can address the DFM roadmap problem. We also show that for important model parameters such as threshold voltage, it may be necessary to apportion the various spaces of variability.
No matter how it is defined, nanotechnology can already be seen to have an impact on IC processing. Nanomanipulators can be used for a variety of tasks in investigating nanostructures; an emerging application is the probing of individual transistors at the contact level anywhere on a die. As downscaling continues its inexorable march and optical and other processing proximity effects grow increasingly strong, the ability to collect IV data from individual transistors anywhere in the circuit is becoming a valuable tool for failure analysis, yield enhancement, reliability, process integration, and time to market. The talk will discuss current capabilities and a roadmap to improve the productivity and capabilities of nanoprobing technology. In the longer term, nanotechnology's impact will not be on characterization and testing, but on processing itself. The real promise of nanotechnology is unprecedented process control in all phases of fabrication. An approach to atomically precise manufacturing will be presented that could enable the fabrication of Si or Si/Ge devices in which dopant atoms can be precisely placed and the dimensions, and the control of those dimensions, are limited only by the crystal lattice and its reconstruction due to surface or lattice strain. This fabrication technology could be used to produce ultra-scaled CMOS or advanced device technology.
We present our recent work on using diblock copolymer directed self-assembly for the fabrication of silicon MOSFETs. Rather than assembling the entire device by self-assembly, we plan to use self-assembly to perform one critical step of the complex MOSFET process flow as a starting point. Initial results using PS-b-PMMA to define a hexagonal array of 20nm-diameter pores for contact-hole patterning will be described. Potential integration issues for making MOSFETs will also be addressed.
Vertical double-gate (FinFET) devices with a high Si-fin aspect ratio of height/width (H/W) = 87nm/11nm have been successfully fabricated on SOI wafers. First, a 50nm-thick capping oxide layer was thermally grown on the crystalline silicon layer of the SOI wafer. Next, 105nm-thick BARC and 265nm-thick photoresist were coated, and the Si-fin layout was patterned on a 193nm ASML scanner at high exposure energy. A deep-submicron plasma etcher was then used to aggressively trim down the photoresist and BARC, and the Si-fin capping oxide layer was subsequently plasma etched in another chamber without breaking the etcher's loadlock vacuum. The photoresist and BARC were then removed by plasma ashing and an RCA clean. The patterned Si-fin capping oxide was further trimmed with an additional DHF clean, and the remaining ~22nm-thick capping oxide was still thick enough to act as a robust hard mask for the subsequent Si-fin plasma etch. Finally, ultra-thin Si fins 11nm wide and 87nm tall were successfully fabricated in the last silicon plasma etch.
We present a full-chip implementation of model-based process and proximity compensation. Etch corrections are applied according to a two-dimensional model. Lithography is compensated by optimizing a cost function that expresses the design intent. The cost function penalizes edge placement errors at best dose and defocus as well as displacement of the edges in response to a specified change in a process parameter. This increases immunity to bridging in low contrast areas.
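A minimal sketch of such a cost function, under assumed names and weights (the paper's exact formulation is not shown): it sums squared edge placement errors at nominal conditions with a penalty on the edge displacement induced by a specified process-parameter change.

```python
# Sketch of a design-intent cost function: penalize EPE at nominal
# dose/focus plus the edge displacement caused by a specified process
# change, discouraging corrections that print well nominally but drift
# toward bridging when the process moves. Weights are illustrative.
import numpy as np

def opc_cost(epe_nominal, epe_perturbed, w_nominal=1.0, w_sensitivity=0.5):
    """epe_nominal: EPE per fragment at best dose/focus (nm).
    epe_perturbed: EPE per fragment after the specified parameter change."""
    epe_nominal = np.asarray(epe_nominal)
    displacement = np.asarray(epe_perturbed) - epe_nominal
    return (w_nominal * np.sum(epe_nominal**2)
            + w_sensitivity * np.sum(displacement**2))

print(opc_cost([1.0, -2.0], [1.5, -4.0]))   # 5.0 + 0.5*(0.25 + 4.0) = 7.125
```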
In this paper we present a method that optimizes the OPC model generation process. The elements in this optimized flow include: an automated test structure layout engine; automated SEM recipe creation and data collection; and OPC model anchoring/validation software. The flow is streamlined by standardizing and automating these steps and their inputs and outputs. A major benefit of this methodology is the ability to perform multiple OPC "screening" refinement loops in a short time before embarking on final model generation. Each step of the flow is discussed in detail, as well as our multi-pass experimental design for converging on a final OPC data set. Implementation of this streamlined process flow drastically reduces the time to complete OPC modeling, and allows generation of multiple complex OPC models in a short time, resulting in faster release and transfer of a next-generation product to manufacturing.
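The sketch below illustrates the screening-refinement idea in schematic form, with placeholder calibrate/validate steps; it is not the authors' flow, only a hedged outline of a loop that refines model parameters until a validation RMS target is met.

```python
# Schematic of a multi-pass OPC model "screening" loop: fit a candidate
# model, validate against held-out measurements, refine, repeat.
# fit_model/validate are stand-ins for the automated flow steps.
import numpy as np

def fit_model(cd_meas, params):
    # Placeholder for the calibration step; a real flow would fit
    # optical and resist parameters to the measured CDs.
    return {"bias": float(np.mean(cd_meas)) * params["damping"]}

def validate(model, cd_holdout):
    # RMS error of the candidate model against held-out measurements.
    return float(np.sqrt(np.mean((np.asarray(cd_holdout) - model["bias"]) ** 2)))

def screening_loop(cd_meas, cd_holdout, max_passes=5, target_rms=1.0):
    params = {"damping": 0.5}                # hypothetical starting guess
    model = None
    for n in range(max_passes):
        model = fit_model(cd_meas, params)
        rms = validate(model, cd_holdout)
        print(f"pass {n}: rms = {rms:.2f} nm")
        if rms <= target_rms:                # converged: go to final model
            break
        params["damping"] = min(1.0, params["damping"] + 0.25)  # refine
    return model

screening_loop([50.0, 52.0, 48.0], [49.0, 51.0])
```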
The OPC treatment of aerial image ripples (local variations in the aerial contour relative to constant target edges) is a growing issue in very low-k1 lithography employing hard off-axis illumination. If the maxima and minima of the aerial image are not treated optimally within existing model-based OPC methodologies, they can induce severe necking or bridging in the printed layout. Current fragmentation schemes and the subsequent site simulations are rule-based, and hence not optimized according to the aerial image profile at key points. We explore more automated software methods to detect the locations of the ripple peaks, together with a simplified, less costly implementation strategy, which we define as an adaptive site placement methodology based on aerial image ripples. Recently, aerial image ripples have been considered in the analysis of the lithography process for cutting-edge technologies such as chromeless phase-shifting masks and strong off-axis illumination approaches [3,4]. During process development for conventional model-based OPC, considerable effort is spent merely locating these troublesome points; this lengthens development cycles, and so far only partial success in suppressing the ripples has been reported (the causes of ripple occurrence have not yet been fully explored). We present here a more flexible model-based OPC solution that dynamically locates ripples from the local aerial image profile near the feature edges. This model-based dynamic tracking of ripples shortens the OPC code development phase and avoids specifying rule-based recipes. Our implementation includes classification of the ripple bumps along an edge and the allocation of different weights in the OPC solution. The result is a new strategy that adapts the site locations and OPC shifts of edge fragments to avoid aggressive corrections that could amplify the ripples or propagate them to new locations. A more advanced adaptation will be ripple-aware fragmentation as a second control knob beside the automated site placement.
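To make the site-placement idea concrete, here is a hedged sketch (with a synthetic intensity profile, not the authors' data): local extrema of a sampled aerial-image profile are taken as ripple bumps, simulation sites are placed at them, and deeper ripples receive larger OPC weights.

```python
# Sketch of adaptive site placement: find local maxima/minima (ripple
# bumps) along a sampled edge-intensity profile and weight sites by
# ripple depth. The profile below is synthetic.
import numpy as np

x = np.linspace(0, 1, 201)
intensity = 0.3 + 0.05 * np.sin(14 * np.pi * x)    # rippled edge profile

d = np.diff(intensity)
extrema = np.where(np.sign(d[:-1]) != np.sign(d[1:]))[0] + 1  # bump indices
depth = np.abs(intensity[extrema] - np.median(intensity))
weights = depth / depth.max()                 # heavier OPC weight on big ripples

for i, w in zip(extrema, weights):
    print(f"site at x={x[i]:.3f}, intensity={intensity[i]:.3f}, weight={w:.2f}")
```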
Perhaps the most challenging layer to print beyond the 65nm node for logic devices is the contact hole. Achieving dense-to-isolated pitches simultaneously in a single mask print requires high NA with novel low-k1 imaging techniques. To achieve the desired dense resolution, off-axis illumination (OAI) techniques such as annular and quasar are necessary, along with sub-resolution assist features for improved semi-dense to isolated contact margin. We have previously discussed design issues surrounding asymmetric contact-hole printing and the misplacement associated with extreme off-axis illumination. While these techniques offer the dense margin needed, there are regions of severe asymmetric printing that cannot be solved by optical proximity correction (OPC) and are impossible to avoid unless design rule restrictions or new illumination schemes are implemented. We continue this work with a discussion of illumination choices that alleviate these regions without losing too much dense margin.
Due to complex interconnect wiring schemes and constraints from process rules, systematic defects such as pattern necking and bridging are a major concern for metal layers. These systematic defects, or "weak spots," can be major yield detractors in IC manufacturing if not properly addressed; they can occur even where model-based OPC and a variety of process rules for margin assurance have been implemented. Determining how to improve these marginalities is therefore a key factor in enhancing product yield. This paper addresses several root causes of pattern-induced defects and presents solutions to a variety of weak spots, including "T-shape," "H-shape," "Thin-Line," and "Bowling Pin" defects encountered during 65nm product development at TI. Through case studies, we demonstrate how to successfully provide design for manufacturing (DFM) by using resolution enhancement technique (RET) tools to avoid and minimize the weak spots. Furthermore, process techniques that improve the printability of some weak spots, as applied to 65nm reticle sets, are discussed, and an integrated scheme aimed at optimizing design rules and process rules is proposed.
In 90nm technology and beyond, process variations must be considered so that the design is robust with respect to them. Focus error and exposure dose variation are the two most important lithography process variations. To a first approximation, the critical dimension (CD) is roughly linear in exposure dose variation and quadratic in focus variation; other kinds of variation can effectively be reduced to these two as long as they are small. As a metric for the effect of exposure dose variation, the normalized image log-slope (NILS) is fast to compute once the aerial image is available, and OPC software has used it as an optimization objective. Focus variation, however, has not been commonly considered in current OPC software. One approach is to compute several aerial images at different defocus conditions, but this is very time consuming. In this paper, we derive an analytical formula for computing the aerial image under any defocus condition. The method works for any illumination scheme and is applicable to both binary and phase-shift masks (PSM); a model calibration method is also provided. We demonstrate only about a 2-3x runtime increase for our fast focus-variational lithography simulation compared to current single-focus simulation. To confirm its accuracy, our model is compared with PROLITH™. This ultra-fast simulator can enable better and faster process-variation-aware OPC that makes layouts more robust under process variations and directly guides litho-aware layout optimization.
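As a hedged illustration of the variational picture (the paper's analytical defocus formula is not reproduced), the sketch below combines an assumed linear-in-dose, quadratic-in-focus CD model with a NILS computation from a sampled aerial image; all coefficients and the image profile are illustrative.

```python
# Sketch of the dose/focus variational model: CD ~ linear in dose error,
# quadratic in defocus; NILS computed from a sampled aerial image at the
# nominal edge. Coefficients and the image profile are assumptions.
import numpy as np

def cd_model(cd0, dose_err, defocus, a=30.0, b=-400.0):
    """CD(nm) ~ CD0 + a*dose_err + b*defocus^2 (dose_err fractional, defocus in um)."""
    return cd0 + a * dose_err + b * defocus**2

def nils(x, intensity, edge_x, cd):
    """Normalized image log-slope at the nominal edge: NILS = CD * |d(ln I)/dx|."""
    slope = np.gradient(np.log(intensity), x)
    i_edge = np.argmin(np.abs(x - edge_x))
    return cd * abs(slope[i_edge])

x = np.linspace(-100, 100, 401)                        # nm
intensity = 0.5 * (1 + np.tanh(-(x - 45.0) / 20.0)) + 0.05
print(cd_model(90.0, 0.02, 0.05))                      # 90 + 0.6 - 1.0 = 89.6 nm
print(round(nils(x, intensity, edge_x=45.0, cd=90.0), 2))
```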
Besides the models describing the exposure tool's optical system, lumped-parameter resist models are the other important model used during OPC; the combination delivers the speed and accuracy OPC requires. Lumped-parameter resist models are created by fitting a polynomial to empirical data, with the polynomial's parameters usually being image parameters (maximum and minimum intensity, slope, curvature) taken from the optical simulation for each measured structure. During calibration of such models, it is very important to pay attention to the parameter space covered by the calibration patterns used. We analyze parameter-space coverage for standard calibration patterns and for real layouts both before and after OPC correction. Taking this a step further, we also study the influence of parameter-space coverage during model calibration on OPC convergence.
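A minimal sketch of this calibration step, with synthetic data standing in for SEM measurements: a polynomial in the image parameters is fitted to "measured" thresholds by least squares, and the known generating coefficients are recovered. Nothing here is the authors' model form; it only illustrates the fitting idea.

```python
# Sketch of lumped-parameter resist model calibration: least-squares fit
# of a polynomial in image parameters (Imax, Imin, slope, curvature) to
# measured thresholds. The data arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 40
imax, imin = rng.uniform(0.6, 1.0, n), rng.uniform(0.0, 0.2, n)
slope, curv = rng.uniform(1.0, 4.0, n), rng.uniform(-2.0, 2.0, n)
thresh_meas = (0.30 + 0.10 * imax - 0.20 * imin + 0.02 * slope
               + 0.01 * curv + rng.normal(0, 0.002, n))   # "measured" data

A = np.column_stack([np.ones(n), imax, imin, slope, curv])
coeffs, *_ = np.linalg.lstsq(A, thresh_meas, rcond=None)
print(np.round(coeffs, 3))   # recovers ~[0.30, 0.10, -0.20, 0.02, 0.01]
```

Note that the fit is only trustworthy where the calibration patterns actually span the (Imax, Imin, slope, curvature) space, which is the coverage question the paper analyzes.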
As the design rule shrinks, so does the CD tolerance, and the required accuracy of simulation and OPC increases accordingly. In the past, when pattern sizes were large, rule-based OPC was acceptable, but as design rules shrank the industry turned to model-based OPC, which almost all devices now use. Because model-based OPC relies on parameter fitting, it carries a model residual error (MRE) that limits the accuracy of the model. A variable-threshold or vector model is usually applied to reduce the MRE, but its magnitude remains too large compared to the CD tolerance. Further development of model-based OPC has produced a combined model- and rule-based approach, called hybrid OPC, which starts from model-based OPC but lowers the MRE by retargeting the design data with a rule bias. This method, however, makes retargeting the design data difficult, because the result of rule biasing is hard to predict after the model-based OPC operation.
In this paper, we propose a new hybrid OPC method that feeds the MRE-calibrated data set back into the model-based OPC flow, yielding a better OPC model. We present results from applying this method to a sub-60nm device and discuss its capability.
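A hedged sketch of the feedback idea (not the authors' implementation): the MRE measured per pattern class after an OPC pass is fed back as a damped per-class retarget for the next pass, making the effect of the correction predictable, unlike a post-hoc rule bias. Class names, values, and the gain are illustrative.

```python
# Sketch of MRE feedback: shift each pattern class's target opposite to
# its measured residual, damped so the loop converges. All values made up.
mre_by_class = {"dense_line": +2.1, "iso_line": -1.4, "line_end": +3.0}   # nm

def retarget(targets, mre, gain=0.7):
    """Shift each class target against its residual, damped by `gain`
    so the feedback converges rather than oscillates."""
    return {k: targets[k] - gain * mre.get(k, 0.0) for k in targets}

targets = {"dense_line": 90.0, "iso_line": 90.0, "line_end": 90.0}
print(retarget(targets, mre_by_class))
# {'dense_line': 88.53, 'iso_line': 90.98, 'line_end': 87.9}
```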
One of the consequences of low-k1 lithography is the discrepancy between the intended and the printed pattern, particularly in 2-D structures. Two recent technical developments offer new tools to improve manufacturing predictability, yield and control. The first enabling development provides the ability to identify the exact locations of lithography manufacturing "hot spots" using rigorous full-chip simulation. The second enabling development provides the ability to efficiently measure and characterize these critical locations on the wafer. In this study, hot spots were identified on four critical patterned layers of a 90nm-node production process using the Brion Tachyon 1100 system by comparing the design intent GDS-II database to simulated resist contours. After review and selection, the detected critical locations were sent to the Applied Materials OPC Check system. The OPC Check system created the recipes necessary to automatically drive a VeritySEM CD SEM tool to the hot spot locations on the wafer for measurements and analysis. Using the model-predicted hot spots combined with accurate wafer metrology of critical features enabled an efficient determination of the actual process window, including process-limiting features and manufacturing lithography conditions, for qualification and control of each layer.
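As a hedged illustration of the hand-off between simulation and metrology (no vendor API is used or implied), the sketch below ranks detected hot spots by severity and keeps one representative per pattern signature, producing a tractable CD-SEM sampling plan; the records and fields are hypothetical.

```python
# Sketch of hot-spot triage between full-chip simulation and CD-SEM
# metrology: rank by severity, deduplicate by pattern signature, cap
# the sampling plan size. Hot-spot records are hypothetical.
hotspots = [
    {"xy": (12.0, 3.4), "signature": "T_junction", "severity": 0.82},
    {"xy": (55.1, 9.9), "signature": "T_junction", "severity": 0.61},
    {"xy": (7.7, 40.2), "signature": "line_end_gap", "severity": 0.74},
]

def sampling_plan(spots, max_sites=10):
    best = {}
    for s in sorted(spots, key=lambda s: -s["severity"]):
        best.setdefault(s["signature"], s)    # keep the worst case per signature
    return sorted(best.values(), key=lambda s: -s["severity"])[:max_sites]

for s in sampling_plan(hotspots):
    print(s["signature"], s["xy"], s["severity"])
```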
In a world where sub-100nm lithography tools are everyday equipment for device makers, devices are shrinking at a rate no one ever imagined, and the demands placed on optical proximity correction (OPC) are greater than ever. Meeting these demands requires increasingly aggressive OPC tactics, which in turn leave greater room for OPC error and increase the complexity of the OPC data. Until now, optical rule check (ORC) or design rule check (DRC) was used to verify this complex OPC data, and each method has its pros and cons: ORC verification of OPC data is accurate with respect to the process, but inspecting a full-chip device demands substantial compute and software cost and long run times, whereas DRC has no such disadvantage but is far less accurate with respect to the process. In this study, we created a new OPC data verification method that combines the best of both ORC and DRC: it inspects the biasing of the OPC data with respect to the illumination condition of the process involved. Applied to the 80nm-technology isolation and gate layers of a 512M DRAM device, the new method showed accuracy equivalent to ORC inspection with the runtime of DRC verification.
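A minimal sketch of the idea of checking OPC bias against illumination-dependent expectations (the paper's actual rule deck is not reproduced): a precomputed band of acceptable bias per pitch, reflecting the illumination condition, is applied as a fast rule-style check; the table values are illustrative.

```python
# Sketch of a rule-speed OPC bias check: verify that the applied bias at
# each pitch falls inside a band precomputed for the illumination
# condition. The band table is an illustrative stand-in.
expected_bias = {                  # pitch (nm) -> (min, max) allowed bias (nm)
    160: (8.0, 14.0),              # dense: strong OAI support
    320: (2.0, 6.0),               # semi-dense: forbidden-pitch region
    640: (4.0, 9.0),               # isolated: assist-feature regime
}

def check_bias(features):
    """features: list of (pitch, applied_bias); returns out-of-band flags."""
    errors = []
    for pitch, bias in features:
        lo, hi = expected_bias[pitch]
        if not (lo <= bias <= hi):
            errors.append((pitch, bias))
    return errors

print(check_bias([(160, 10.0), (320, 7.5), (640, 5.0)]))   # flags (320, 7.5)
```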
As the minimum feature sizes of memory devices shrink, model-based OPC accuracy requirements call for highly accurate process modeling and modeling strategies. The model-based OPC verification (MBV) process must likewise be highly accurate to catch unexpected errors in a low-k1 process scheme.
The MBV model has to be accurate enough to detect potential hot spots and human errors, including physical design rule violations, mask fabrication rule violations, and database-handling errors, while also providing feedback to the OPC and design sides fast enough to be useful for DFM.
Model-based OPC tools have recently made steady progress in modeling. Nevertheless, because we apply extreme off-axis illumination at sub-70nm gate levels, the models cannot exactly predict wafer results and show low accuracy.
In this paper, we evaluate several commercial MBV tools for a sub-70nm memory device and compare their review results with real wafer results. From these results, we analyze and discuss the major factors behind poor OPC and MBV model accuracy in a low-k1 process. We also discuss achieving suitably fast feedback to the OPC and design sides through methods for analyzing and categorizing the huge number of reported errors.
We focus on these two goals for MBV, discuss the major factors to consider, and finally suggest an optimized OPC verification procedure using calibrated models for sub-70nm memory devices.
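As a hedged sketch of the error-triage step discussed above (field names and values are hypothetical): MBV errors are bucketed by type, each bucket is sorted by severity, and buckets are ordered by their worst member, so each owner receives a short, prioritized list.

```python
# Sketch of MBV error triage for fast feedback: bucket errors by type,
# sort within buckets by severity, order buckets by worst member.
from collections import defaultdict

errors = [
    {"type": "pinch", "severity": 0.9, "xy": (1.0, 2.0)},
    {"type": "bridge", "severity": 0.7, "xy": (4.0, 4.0)},
    {"type": "pinch", "severity": 0.4, "xy": (9.0, 1.0)},
]

def categorize(errs):
    buckets = defaultdict(list)
    for e in errs:
        buckets[e["type"]].append(e)
    for t in buckets:
        buckets[t].sort(key=lambda e: -e["severity"])
    return dict(sorted(buckets.items(),
                       key=lambda kv: -kv[1][0]["severity"]))

for etype, items in categorize(errors).items():
    print(etype, [e["xy"] for e in items])
```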
A typical wiring layer of SanDisk 3-dimensional memory device includes a dense array of lines. Every other line terminates in an enlarged contact pad at the edge of the array. The pitch of the pads is twice the pitch of the dense array. When process conditions are optimized for the dense array, the gap between the pads becomes a weak point. The gap has a smaller depth of focus. As defocus increases, the space between the pads diminishes and bridges. We present a method of significantly increasing the depth of focus of the pads at the end of the dense array. By placing sub-resolution cutouts in the pads, we equalize the dominant pitch of the pads and the dense array.
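A hedged sketch of the pitch-equalization geometry (dimensions are illustrative, not SanDisk's): with pads at twice the array pitch, sizing each pad to twice the array pitch minus the cutout width and centering one sub-resolution cutout in it places the resulting pad halves on a uniform grid at the array pitch.

```python
# Sketch of pitch equalization with sub-resolution cutouts: pad halves
# land on a uniform grid at the dense-array pitch. Dimensions are
# illustrative; the cutout must stay below the resolution limit.
array_pitch = 200.0             # nm, dense array pitch
pad_pitch = 2 * array_pitch     # pads terminate every other line
cutout_width = 60.0             # nm, sub-resolution so it never prints

def pad_with_cutout(pad_left, pad_width=2 * array_pitch - cutout_width):
    """Split a pad with one centered sub-resolution cutout; return the
    centers of the two remaining pad halves."""
    cut_left = pad_left + (pad_width - cutout_width) / 2.0
    halves = [(pad_left, cut_left),
              (cut_left + cutout_width, pad_left + pad_width)]
    return [(a + b) / 2.0 for a, b in halves]

centers = [c for k in range(3) for c in pad_with_cutout(k * pad_pitch)]
print(centers)   # 70, 270, 470, 670, 870, 1070 -> uniform 200 nm spacing
```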
Semiconductor manufacturing technologies typically include a number of processes involving complex physical and chemical interactions. Since it is almost impossible to fully control these interactions, processes exhibit variations that can cause significant deviations in the properties of the printed integrated circuit. However, if a process variation is predictable and systematic, OPC techniques can compensate for it by modifying the layout.
One such variation relates to topographic variation on the wafer surface, which causes defocus during optical lithography. The nominal-focus aerial image of the layout should ideally coincide with the wafer surface; in reality, topographic variation causes portions of the surface to deviate from the nominal focal plane. The resulting defocused aerial image produces line-width variation in transistor gates during manufacturing. This problem can be reduced by using anti-reflective coatings (ARC) and by differentially biasing the n-type, p-type, and field polysilicon. Even after applying both techniques, however, some residual error remains because the ARC layers are not fully absorbent, and the biasing approach introduces process problems at the transitions between biased and unbiased gate regions. Indeed, the required biases become increasingly difficult to manage as technology nodes migrate.
This paper presents a system that accurately determines the critical dimensions of a layout by compensating for the effects of topography variation on the optical lithography process. A model form and its empirical calibration process are presented.
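As a hedged illustration of the compensation principle (the paper's model form is not reproduced), the sketch below treats local topography height as a defocus term, predicts the induced CD error with an assumed quadratic coefficient, and pre-biases the drawn gate accordingly.

```python
# Sketch of topography-aware gate biasing: local height acts as defocus,
# CD error is modeled as quadratic in that defocus, and the drawn gate
# is pre-biased by the predicted error. Coefficients are illustrative.
def cd_error_from_topography(height_nm, c2=-2.5e-4):
    """Quadratic defocus-to-CD model: dCD(nm) ~ c2 * dz^2 (dz in nm)."""
    return c2 * height_nm**2

def topography_bias(target_cd, height_nm):
    """Pre-bias the drawn gate so the printed CD lands back on target."""
    return target_cd - cd_error_from_topography(height_nm)

for dz in (0.0, 40.0, 80.0):     # step heights between field/active regions
    print(dz, round(topography_bias(90.0, dz), 2))
# 0.0 -> 90.0, 40.0 -> 90.4, 80.0 -> 91.6
```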
In optical proximity correction, the edges of polygons are segmented, and the segments are moved independently to meet line-width or edge placement goals. The purpose of segmenting edges is to increase the degrees of freedom in proximity correction. Segmentation is usually performed according to predetermined geometric rules, and heuristic model-based segmentation algorithms have been presented in the literature. We show that there is an optimal and unique way of segmenting polygon edges.
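For contrast with the optimal approach the abstract announces, here is a minimal sketch of the conventional rule-based baseline: an edge is split at fixed corner-zone offsets and its interior divided into segments no longer than a maximum length; the parameter values are illustrative.

```python
# Sketch of conventional rule-based edge segmentation: fixed corner
# zones plus interior cuts capped at a maximum segment length.
def segment_edge(length, corner_zone=40.0, max_len=120.0):
    """Return breakpoints (nm) along an edge of the given length."""
    cuts = [0.0, corner_zone, length - corner_zone, length]
    interior = length - 2 * corner_zone
    n = max(1, int(interior // max_len) + (interior % max_len > 0))
    step = interior / n
    cuts[2:2] = [corner_zone + i * step for i in range(1, n)]
    return cuts

print(segment_edge(500.0))
# [0.0, 40.0, 145.0, 250.0, 355.0, 460.0, 500.0]: corner segments
# plus four interior segments of 105 nm each
```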
To meet the demands of high-density, cost-effective production, extreme illumination conditions (maximum sigma with off-axis illumination) are currently used in low-k1 processes. Under these conditions, a minimal change in the optical configuration produces a large difference in patterning; specifically, blurring, intensity asymmetry, and telecentricity error in the illumination source deform patterns at some pitches and cause CD asymmetry in semi-isolated patterns. In conventional modeling using an idealized source model, such as a top-hat shape or profile, these effects are treated as noise terms because they are difficult to fit well, and the resulting model inaccuracy produces OPC error. This paper presents OPC results obtained using a real source optical model measured from a scanner. The measured source image was filtered and normalized for easy handling. We show that this improved the model accuracy while significantly reducing the number of parameters, and as a result increased the process margin for a sub-60nm device.
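A minimal sketch of the source-conditioning step (the "measured" source here is synthetic): low-level noise in a pupil-fill image is clipped and the map is normalized to unit total intensity, producing a model-ready illumination map.

```python
# Sketch of conditioning a measured source map: clip the noise floor and
# normalize total energy. The "measured" source is synthetic (an
# imperfect annulus with asymmetry and a noise floor).
import numpy as np

s = np.linspace(-1, 1, 129)
sx, sy = np.meshgrid(s, s)
r = np.hypot(sx, sy)
source = np.exp(-((r - 0.8) / 0.08) ** 2) * (1 + 0.1 * sx)   # imperfect annulus
source += 0.01 * np.random.default_rng(1).random(source.shape)  # noise floor

def condition_source(src, clip_frac=0.05):
    src = np.where(src < clip_frac * src.max(), 0.0, src)   # drop noise floor
    return src / src.sum()                                  # unit total intensity

pupil = condition_source(source)
print(pupil.sum(), pupil.max())   # 1.0 and the normalized peak value
```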
Optical Proximity Correction (OPC) has become an indispensable tool used in deep sub-wavelength lithographic processes. OPC has been quite successful at reducing the linewidth dispersion across a product die, and also improving the overlapping process window of all printed features. This is achieved solely by biasing the mask features such that all print on target at the same dose. Recent advances in process window modeling, combined with highly customizable simulation and correction engines, have enabled process-aware OPC corrections. Building on these advances, the authors will describe a fast Process Window OPC (PWOPC) technique. This technique results in layouts with reduced sensitivity to defocus variations, less susceptibility to bridging and pinching failures, and greater coverage of over/underlying features (such as contact coverage by metal).
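As a hedged illustration of the overlapping-process-window objective (not the authors' engine), the sketch below intersects per-feature rectangular dose/focus windows; PWOPC-style corrections aim to enlarge this common window rather than only zeroing nominal EPE. The window values are illustrative.

```python
# Sketch of the overlapping process window: intersect per-feature
# rectangular dose/focus windows; an empty intersection means some
# feature fails everywhere in the shared process space.
def overlap(windows):
    """windows: list of (dose_lo, dose_hi, focus_lo, focus_hi)."""
    d_lo = max(w[0] for w in windows)
    d_hi = min(w[1] for w in windows)
    f_lo = max(w[2] for w in windows)
    f_hi = min(w[3] for w in windows)
    if d_lo >= d_hi or f_lo >= f_hi:
        return None                    # no common window: a PW failure
    return (d_lo, d_hi, f_lo, f_hi)

windows = [(-0.05, 0.06, -0.15, 0.12),   # dense feature
           (-0.08, 0.04, -0.10, 0.20)]   # isolated feature
print(overlap(windows))                  # (-0.05, 0.04, -0.1, 0.12)
```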
Double exposure technology (DET) is one of the main candidates for extending the resolution limit of current lithography tools, but it has bottlenecks such as controlling the CD uniformity and overlay of both masks involved in the lithography process. One way to solve these problems while retaining the resolution advantage of DET is to use spacers: patterning with a spacer not only extends the resolution limit but also avoids the problems of DET. Our method realizes the interconnection between the cell and peripheral regions with a "space spacer" instead of the usual "line spacer." The spacer process consists sequentially of top hard-mask etch, nitride spacer formation, oxide deposition, CMP, and nitride strip steps; a peripheral mask was additionally used to define the interconnection region. Using spacers, the NAND flash memory gate pattern was realized at features below 50nm using only 0.85NA ArF lithography.