This PDF file contains the front matter associated with SPIE
Proceedings Volume 6925, including the Title Page, Copyright
information, Table of Contents, Introduction (if any), and the
Conference Committee listing.
To quantify current DfM value and future DfM potential, a mathematical framework to compare the DfM opportunity of
present and future technology nodes is derived. Parallels are drawn between the evolution of DfM and the transition
from 'soft RET' to 'hard RET'. DfM accomplishments in the current 'soft DfM' era are presented as compiled from a
DfM workshop held by IBM's technology development partners. IBM's vision of profitable CMOS scaling in the era of
'hard DfM' is presented and its core computational technology elements are discussed. Feasibility demonstrations of key
technical elements are reviewed. The paper shows that current technology nodes benefit from the emergence of
integrated DfM solutions that provide incremental yet hard-to-quantify yield and performance benefits, but it also argues
that DfM's role will continue to grow as computational scaling replaces physical scaling in the not too distant future.
The difficult issues in continuing Moore's law in the absence of improvements in lithography resolution are well
known [1-3]. Design rules have to change and DFM methodology has to continue to improve to enable Moore's-law
scaling. This paper discusses our approach to DFM through co-optimization across design and process. The poly
layer is used to show how rules have changed to meet patterning requirements and how co-optimization has been used
to define the poly design rules.
With the introduction and ramp of several products on our 45nm technology, we have shown our ability to meet the
goals of Moore's law scaling at high yields in volume manufacturing on a two year cycle.
This paper proposes a new design check system that works in three steps. First, hotspots such as pinching/bridging are
recognized in a product layout based on thorough process simulations. Small layout snippets centered on hotspots are
clipped from the layout and similarities between these snippets are calculated by computing their overlapping areas. This
is accomplished using an efficient, rectangle-based algorithm. The snippet overlapping areas can be weighted by a
function derived from the optical parameters of the lithography process. Second, these hotspots are clustered using a
hierarchical clustering algorithm. Finally, each cluster is analyzed in order to identify the common cause of failure for all
the hotspots in that cluster, and its representative pattern is fed to a pattern-matching tool for detecting similar hotspots
in new design layouts. Thus, the long list of hotspots is reduced to a small number of meaningful clusters and a library of
characterized hotspot types is produced. This could lead to automated hotspot corrections that exploit the similarities of
hotspots occupying the same cluster. Such an application will be the subject of a future publication.
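As an illustration of the clustering step just described, here is a minimal Python sketch, assuming the hotspot snippets have already been clipped and rasterized to equal-sized binary arrays; the overlap-area similarity and the average-linkage clustering stand in for the paper's rectangle-based algorithm and optical weighting function, and the helper names (overlap_distance, cluster_hotspots) are hypothetical.

```python
# Hypothetical sketch: cluster layout snippets by overlap-area similarity,
# then pick one representative per cluster (not the authors' actual code).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def overlap_distance(a, b):
    """Distance = 1 - (overlapping area / union area) for two binary rasters."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return 1.0 - inter / union if union else 0.0

def cluster_hotspots(snippets, threshold=0.3):
    """snippets: list of equal-sized 0/1 arrays centered on each hotspot."""
    n = len(snippets)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = overlap_distance(snippets[i], snippets[j])
    # Agglomerative (average-linkage) clustering on the condensed distance matrix.
    labels = fcluster(linkage(squareform(dist), method="average"),
                      t=threshold, criterion="distance")
    representatives = {c: int(np.where(labels == c)[0][0]) for c in set(labels)}
    return labels, representatives
```

A representative snippet per cluster could then be handed to a pattern-matching tool, as the abstract describes.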
As design rules continue to shrink beyond the lithography wavelength, pattern printability becomes a significant
challenge in fabrication for 45nm and beyond. Model-based OPC and DRC checkers have been deployed using
metrology data such as CD to fine-tune the model, and to predict and identify potential structures that may fail in a
manufacturing environment. For advanced technology nodes with tighter process windows, it is increasingly important
to validate the models with empirical data from both product and FEM wafers instead of relying solely on traditional
metrology and simulations. Furthermore, feeding the information back to designers can significantly reduce the
development efforts.
We present a chip-scale CMP simulator for layer uniformity analysis within the Calibre DFM framework. The CMP
simulator is intended to be used during smart fill optimizations, accurate parasitic extractions, defocus variability
compensations, and other DFM applications. It is tightly integrated with Mentor Graphics DFM components for yield
analysis and optimization. The paper discusses the key concepts of the electro-chemical copper deposition and slurry
CMP models that are used in the simulation. The data flow is described, including the use of mask information from
design layout data. Application examples, including the process flow and the simulated results, are presented. Both the
electroplating and the CMP models include empirical parameters that describe the width and space dependence. Fast
and accurate global optimization search algorithms are implemented to find optimum modeling parameter values.
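The fitting of the empirical width- and space-dependent parameters could look something like the following sketch; the thickness model form, the parameter names and the use of SciPy's differential evolution are assumptions for illustration, not the actual Calibre CMP model.

```python
# Illustrative sketch only: fitting width/space-dependent empirical parameters of a
# post-CMP thickness model to measured data with a global optimizer. The model form
# and parameter names are assumptions, not Mentor Graphics' actual equations.
import numpy as np
from scipy.optimize import differential_evolution

def thickness_model(params, width, space):
    t0, a, b = params
    density = width / (width + space)               # local pattern density proxy
    return t0 - a * density - b / np.sqrt(width)    # assumed width/space dependence

def fit_cmp_params(width, space, measured):
    def cost(params):
        return np.mean((thickness_model(params, width, space) - measured) ** 2)
    bounds = [(0.0, 2000.0), (0.0, 500.0), (0.0, 500.0)]   # nm-scale bounds (assumed)
    result = differential_evolution(cost, bounds, seed=0)
    return result.x, result.fun

# Example with synthetic data standing in for test-chip measurements:
w = np.array([100., 200., 400., 800.]); s = np.array([100., 200., 200., 400.])
meas = thickness_model([1000., 80., 40.], w, s)
params, err = fit_cmp_params(w, s, meas)
```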
For accurate analysis of circuit performance, an understanding of on-chip gate length variation (OCLV) is required.
Non-systematic OCLV was measured by SEM and the results were analyzed after being divided into local and global
factors. Simple empirical models of global and local variations were proposed and fitted. In the fitting, measured mask
variation was used, and on-chip variation of focus, dose, and LWR were the fitting parameters. The fit of our model was
very consistent with the experimental results. Prediction of global and local variation using lithographic characteristics
of patterns, such as EL, DOF, and MEEF, was thereby enabled.
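A minimal sketch of the kind of empirical variance budget the abstract implies, assuming independent global contributors (focus, dose, and mask error scaled by MEEF) and an independent local LWR term; the sensitivity values in the example are made up.

```python
# Assumed-form OCLV model: global CD variation driven by focus, dose and mask errors
# (scaled by DOF-, EL- and MEEF-like sensitivities) plus an independent local LWR term.
import math

def global_cd_sigma(sigma_focus_nm, sigma_dose_pct, sigma_mask_nm,
                    dcd_dfocus, dcd_ddose, meef):
    """RSS of independent global contributors to on-chip gate CD variation."""
    return math.sqrt((dcd_dfocus * sigma_focus_nm) ** 2 +
                     (dcd_ddose * sigma_dose_pct) ** 2 +
                     (meef * sigma_mask_nm) ** 2)

def total_cd_sigma(sigma_global, sigma_lwr):
    """Local (LWR) and global components assumed independent."""
    return math.sqrt(sigma_global ** 2 + sigma_lwr ** 2)

# Example with made-up sensitivities (nm per nm of defocus, nm per % dose, MEEF):
sg = global_cd_sigma(20.0, 0.5, 1.0, dcd_dfocus=0.05, dcd_ddose=1.5, meef=2.0)
print(total_cd_sigma(sg, sigma_lwr=1.2))
```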
While several approaches are being pursued to address runtime expectations in model-based physical
verification with sufficient accuracy against current manufacturing processes [1], there is also a need to create
models that embed a contract with the designers as to the realistic process control limits in a given
technology for a particular layout. This is of special importance when the process is still in development, so
that both design and process development can progress in parallel with a minimum risk of finding that the design
does not yield because of poor imaging control caused by sub-optimal layout configurations.
Several ideas are presented as to how target process variability bands can be generated, along with the limitations of
actual process variability bands in meeting such constraints. The main problem this work tries to answer is that while
Optical Proximity Correction changes may be modified at a future time, the main source of uncertainty is determined by
the choice of the selected resolution enhancement technique. To illustrate this point, a constant layout is analyzed under
different resolution enhancement techniques. Single exposure and double patterning results, along with
their corresponding process variability bands, are shown for illustration. This provides an outlook as to how feasible it
is to provide target bands and whether solutions may or may not exist in the future.
The impact of lithography-induced systematic variations on the parametric behavior of cells and chips designed on a TI
65nm process has been studied using software tools for silicon contour prediction, and design analysis from contours.
Using model-based litho and etch simulation at different process conditions, contours were generated for the poly and
active layers of standard cells in multiple contexts. Next, the extracted transistor-level SPICE netlists (with annotated
changes in CD) were simulated for cell delay and leakage. The silicon contours predicted by the model-based litho tools
were validated by comparing CDs of the simulated contours with SEM images. A comparative analysis of standard cells
with relaxed design rules and restricted pitch design rules showed that restrictive design rules help reduce the variation
from instance to instance of a given cell by as much as 15%, but at the expense of an area penalty. A full-chip variability
analysis flow, including model-based lithography and etch simulation, captures the systematic variability effects on
timing-critical paths and cells and allows for comparison of the variability of different cells and paths in the context of a
real design.
Dimensions for 32nm generation logic are expected to be ~45nm. Even with high NA scanners, the k1 factor is below 0.32. Gridded-design-rules (GDR) are a form of restricted design rules (RDR) and have a number of benefits from design through fabrication. The combination of rules and topologies can be verified during logic technology development, much as is done with memories. Topologies which have been preverified can be used to implement random logic functions with "hotspot" prevention that is virtually context-independent. Mask data preparation is simplified with less aggressive OPC, resulting in shorter fracturing, writing, and inspection times. In the wafer fab, photolithography, etch, and CMP are more controllable because of the grating-like patterns. Tela Canvas™ GDR layout was found to give smaller area cells than a conventional 2D layout style. Variability and context independence were also improved.
The design challenges associated with alternating phase shifted mask lithography are discussed, solutions which had
been developed to address these challenges are reviewed, and parallels to current design for manufacturability
implementation issues are identified. Leveraging these insights, the positive attributes of a well integrated design for
manufacturability enhanced design flow are proposed. Specific topics covered in the paper are: the need to complement
error-detection with streamlined layout-correction, the risk of providing too much unstable information too early in the
design optimization flow, the efficiencies of prescriptive 'correct-by-construction' solutions, and the need for seamless
integration into existing design flows. For the benefit of the non-lithographer, the discussion of these detailed topics is
preceded by a brief review of alternating phase shifted mask lithography principles and benefits.
If minimum die area is the main objective of an ASIC application, then each critical layer will have a bi-directional
mask layout, and advanced litho technology is required to print the layers with single-exposure lithography. If, however,
yield, electrical robustness and variability have higher priority than minimum die area, then unidirectional patterning can
be a good alternative. In that case the bi-directional layout, especially of the active-area and gate layers, must be
redesigned as a unidirectional layout (at the expense of a larger cell area). Moreover, if the design can be split into two
orthogonal unidirectional layouts, then the so-called cut-mask technology can be used: this is a (well-known) double
patterning technology. This paper discusses three different cut-mask-compatible redesigns of the gate layer of a complex
flip-flop cell, to be used in robust, low-cost, low-power CMOS-logic applications with 45 nm ground rules and 180 nm
device pitches. The analogue circuit simulator from Cadence has been used. The results obtained with ASML's
lithography simulator, "Litho Cruiser", show that cut-mask patterning gives superior CD and end-of-line control and
enables design rules with less gate overlap. This in turn gives the circuit designer more freedom in choosing
the transistor width. Furthermore, the cut-mask-compatible layouts can even be processed with high-NA dry KrF
lithography instead of advanced single-exposure ArFi lithography. The designs are compared with a reference design,
which is a traditional minimum-area design with bi-directional layout.
Design rule (DR) development strategies were fairly straightforward at earlier technology nodes when node-on-node
scaling could be accommodated easily by reduction of λ/NA. For more advanced nodes, resolution enhancement
technologies such as off-axis illumination and sub-resolution assist features have become essential for achieving full
shrink entitlement, and DR restrictions must be implemented to comprehend the inherent limitations of these techniques
(e.g., forbidden pitches) and the complex and unanticipated 2D interactions that arise from having a large number of
random geometric patterns within the optical ambit.
To date, several factors have limited the extent to which 2D simulations could be used in the DR development cycle,
including exceedingly poor cycle time for optimizing OPC and SRAF placement recipes per illumination condition,
prohibitively long simulation time for characterizing the lithographic process window on large 2D layouts, and difficulty
in detecting marginal lithographic sites using simulations based on discrete cut planes. We demonstrate the utility of the
inverse lithography technology technique [1] to address these limitations in the novel context of restrictive DR
development and design for manufacturability for the 32nm node. Using this technique, the theoretically optimum OPC
and SRAF treatment for each layout is quickly and automatically generated for each candidate illumination condition,
thereby eliminating the need for complex correction and placement recipes. "Ideal" masks are generated to explore
physical limits and subsequently "Manhattanized" in accordance with mask rules to explore realistic process limits.
Lithography process window calculations are distributed across multiple compute cores, enabling rapid full-chip-level
simulation. Finally, pixel-based image evaluation enables hot-spot detection at arbitrary levels of resolution, unlike the
'cut line' approach.
We have employed the ILT technique to explore forbidden-pitch contact hole printing in random logic. Simulations
from cells placed in random context are used to evaluate the effectiveness of restricting pitches in contact hole design
rules. We demonstrate how this simulation approach may not only accelerate the design rule development cycle, but
also may enable more flexibility in design by revealing overly restrictive rules, or reduce the amount of hot-spot fixing
required later in the design phase by revealing where restrictions are needed.
The growing impact of process variation on circuit performance requires statistical design approaches in which
circuits are designed and optimized subject to an estimated variation. Previous work [1] has explicitly accounted for
variation and spatial correlations by including extra margins in each gate delay and in the correlation factor between path
delays. However, as has recently been shown, what is often referred to as "spatial correlation" is an artifact of un-modeled
residuals left after the decomposition of deterministic variation components across the wafer and across the die [2].
Consequently, a more accurate representation of process variability is to introduce these deterministic variability
components in the model, and therefore generate any apparent spatial correlation as the artifact of those deterministic
components, just like in the actual process. This approach is used to re-size an 8-bit Ladner-Fischer adder. The optimized
circuit delay distribution is obtained from Monte Carlo simulations. A layout generation tool is also being constructed to
incorporate the optimization procedure into the standard design flow. Custom circuit layouts are first subjected to design
rules to extract constraints that specify the margins allowed for each transistor active area edge movement. Sizing
optimization is then performed with design rule constraints taken into account. A new circuit layout is generated based
on the optimization results and checked to ensure DRC cleanliness. The optimized layout can be subjected to further
verification, such as hotspot detection, to account for any additional layout-dependent effects.
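The variability model argued for above can be sketched as follows: gate lengths are built from deterministic across-wafer and across-die components plus independent random residuals, so any apparent spatial correlation emerges from the deterministic terms rather than an explicit correlation model; the coefficients and the linearized delay model are illustrative assumptions.

```python
# Hedged sketch of the variability decomposition: deterministic across-wafer and
# across-die components plus purely random residuals. Coefficients are invented.
import numpy as np

rng = np.random.default_rng(0)

def gate_lengths(xy_um, die_center_mm, n_samples=10000,
                 wafer_slope=0.002, die_bow=0.0005, sigma_rand=0.5, L_nom=45.0):
    """xy_um: (n_gates, 2) on-die coordinates. Returns (n_samples, n_gates) in nm."""
    across_wafer = wafer_slope * (die_center_mm[0] + die_center_mm[1])    # per-die offset
    across_die = die_bow * (xy_um[:, 0] ** 2 + xy_um[:, 1] ** 2) / 1e3    # systematic bowl
    random_part = rng.normal(0.0, sigma_rand, size=(n_samples, len(xy_um)))
    return L_nom + across_wafer + across_die + random_part

xy = rng.uniform(0, 1000, size=(8, 2))                  # 8 gates of a toy path
L = gate_lengths(xy, die_center_mm=(30.0, -10.0))
path_delay = (1.0 + 0.02 * (L - 45.0)).sum(axis=1)      # linearized delay model (assumed)
print(path_delay.mean(), path_delay.std())
```

A Monte Carlo loop of this kind over the re-sized adder would yield the optimized delay distribution mentioned in the abstract.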
Disconnection between design and manufacturing has become a prevalent issue in modern VLSI processes. As
manufacturability becomes a major concern, uncertainties from process variation and complicated rules have increased
the design cost exponentially. Numerous design methodologies for manufacturability have been proposed to improve
the yield. In deep submicron designs, optical proximity correction (OPC) and fill insertion have become indispensable
for chip fabrication. In this paper, we propose a novel method to use these manufacturing techniques to optimize the
design. We can effectively implement non-uniform wire sizing and achieve substantial performance and power
improvement with very low costs on both design and manufacturing sides. The proposed method can reduce up to 42%
power consumption without any delay penalty. It brings minor changes to the current design flow and no extra cost for
fabrication.
With the increased need for low power applications, designers are being forced to employ circuit optimization
methods that make tradeoffs between performance and power. In this paper, we propose a novel transistor-level
optimization method. Instead of drawing the transistor channel as a perfect rectangle, this method involves
reshaping the channel to create an optimized device that is superior in both delay and leakage to the original
device. The method exploits the unequal drive and leakage current distributions across the transistor channel to find
an optimal non-rectangular shape for the channel. In this work we apply this technique to circuit-level leakage
reduction. By replacing every transistor in a circuit with its optimally shaped counterpart, we achieve 5% savings in
leakage on average for a set of benchmark circuits, with no delay penalty. This improvement is achieved without
any additional circuit optimization iterations, and is well suited to fit into existing design flows.
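A toy version of the slice-based evaluation behind such channel shaping is sketched below; the per-slice drive and leakage models are crude placeholders, not the calibrated device models used in the paper.

```python
# Illustrative slice-based evaluation of a non-rectangular gate: the channel is cut
# into narrow width-slices, each contributing drive and leakage according to its
# local gate length. The current models are toy placeholders.
import numpy as np

def slice_currents(lengths_nm, slice_width_nm=5.0, L_nom=45.0):
    """Per-slice Ion ~ 1/L, Ioff ~ exp(-k*(L-Lnom)); totals are width-weighted sums."""
    ion = slice_width_nm * (L_nom / lengths_nm)                    # relative drive
    ioff = slice_width_nm * np.exp(-0.15 * (lengths_nm - L_nom))   # relative leakage
    return ion.sum(), ioff.sum()

rect = np.full(40, 45.0)                        # 200 nm wide rectangular channel
shaped = np.concatenate([np.full(10, 47.0),     # longer L near the channel edges
                         np.full(20, 44.0),     # slightly shorter L in the middle
                         np.full(10, 47.0)])
print("rect   Ion, Ioff:", slice_currents(rect))
print("shaped Ion, Ioff:", slice_currents(shaped))
```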
Particle-induced defects are still one of the major sources of yield loss in semiconductor manufacturing. In addition, optical distortion of shapes cannot be ignored in modern technologies and requires increasing design effort in order to avoid yield loss and minimize manufacturing costs. Although suppliers of automated routing tools are increasingly addressing these issues, we still see significant improvement potential even in layouts produced by routers labeled as DfM-aware. We propose a post-routing clean-up step to address both defect- and lithography-related yield loss in the routing layers. In contrast to a "find and fix" approach, this methodology creates lithography-friendly layout "by construction", based on the general concept of shape simplification and standardization.
Yield loss due to process variations can be classified as catastrophic or parametric. Parametric variations can further
be random or systematic in nature. Systematic parametric variations are being projected as a major yield limiter in sub-
65nm technologies. Though several models exist to describe process-induced parametric effects in layouts, there is no
existing design methodology to study the variational (across process window) impact of all these effects simultaneously.
In this paper, we present a methodology for analyzing multiple process-induced systematic and statistical layout
dependent effects on circuit performance. We describe physical design models used to describe four major sources of
parametric variability - lithography, stress, etch and contact resistance - and their impact on device properties. We then
develop a methodology to determine variability in circuit performance based on integrating the above device models
with a circuit simulator like SPICE. A circuit simulation engine for 45nm SOI devices is implemented, which shows the
extent of the impact of layout-dependent systematic variations on circuit parameters like delay and power. Based on the
analysis, we demonstrate that all systematic effects need to be simultaneously included to match the hardware data. We
believe a flow that is capable of understanding process-induced parametric variability will have major advantages in
terms of improving physical design and yield, in addition to reducing design-to-hardware miscorrelation and aiding
diagnosis and silicon debug.
Increasing memory array sizes and low operating voltages in modern ICs demand extremely low failure rates for
single memory cells. The failure probability is affected by variations in the IC fabrication process, which yield varying
transistor parameters. This may cause erratic certification of designs that may in fact have a low production yield.
VARAN relies on analytical methods that yield controlled precision calculations involving a minimum of circuit
simulation. Furthermore, VARAN is equipped with a built-in sensitivity analysis mechanism that can guide the designer
as to which parameters are significant for robust design. The flow starts by setting up a circuit simulation infrastructure.
Each simulation returns a value of fail or pass for a given set of circuit parameters (i.e. transistor size) and environmental
parameters. The probability of failure is calculated by integration over the design parameter space.
VARAN uses a novel response surface modeling (RSM) approach to reduce the number of simulations needed for
low-probability calculations. The RSM relies on an adaptive fit that can model, with a minimal number of terms, intricate
behaviors that would require many parameters under ordinary polynomial modeling. VARAN was tested on
synthetic and real circuit data, yielding extremely low failure probabilities.
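A minimal sketch of this style of flow follows, with a stand-in "simulation" returning a pass/fail margin, a low-order polynomial response surface chosen adaptively, and the failure probability integrated by sampling the parameter distribution; the margin function, parameter range and distribution are assumptions, not VARAN's actual models.

```python
# Hedged sketch: cheap response-surface surrogate wrapped around a few "expensive"
# simulations, then failure probability integrated over the parameter distribution.
import numpy as np

rng = np.random.default_rng(1)

def simulate_margin(dvth_mv):
    """Stand-in for an expensive circuit simulation; the cell fails when margin <= 0."""
    return 120.0 - 0.9 * dvth_mv - 0.004 * dvth_mv ** 2

# 1) Few expensive simulations on a coarse grid of the varying parameter (Vth shift, mV).
train_x = np.linspace(-150, 150, 31)
train_y = np.array([simulate_margin(x) for x in train_x])

# 2) Adaptive-order polynomial response surface: pick the lowest degree that fits.
for degree in range(1, 6):
    coeffs = np.polyfit(train_x, train_y, degree)
    if np.max(np.abs(np.polyval(coeffs, train_x) - train_y)) < 1e-6:
        break

# 3) Integrate the failure probability over the parameter distribution, Vth ~ N(0, 25 mV).
samples = rng.normal(0.0, 25.0, size=2_000_000)
p_fail = np.mean(np.polyval(coeffs, samples) <= 0.0)
print(f"surrogate degree={degree}, estimated failure probability ~ {p_fail:.2e}")
```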
Leveraging silicon validation, a model-based variability analysis has been implemented to detect sensitivity to systematic
variations in standard cell libraries, with the goal of reducing performance spread at the cell level and chip
level. First, a simulation methodology to predict changes in circuit characteristics due to systematic lithography and etch
effects is described and validated in silicon. This methodology relies on two foundations: 1) a physical shape
model predicts contours from drawn layout; 2) an electrical device model, which captures narrow width effects,
accurately reproduces drive currents of transistors based on silicon contours. The electrical model, combined with
accurate lithographic contour simulation, is used to account for systematic variations due to optical proximity effects and
to update an existing circuit netlist to give accurate delay and leakage calculations.
After a thorough validation, the contour-based simulation is used at the cell level to analyze and reduce the sensitivity of
standard cells to their layout context. Using a random context generation, the contour-based simulation is applied to each
cell of the library across multiple contexts and litho process conditions, identifying systematic shape variations due to
proximity effects and process variations and determining their impact on cell delay.
This methodology is used in the flow of cell library design to identify cells with high sensitivity to proximity effects and
consequently, large variation in delay and leakage. The contour-based circuit netlist can also be used to perform accurate
contour-based cell characterization and provide more silicon-accurate timing in the chip-design flow. A cell-variability
index (CVI) can also be derived from the cell-level analysis to provide valuable information to chip-level design
optimization tools to reduce overall variability and performance spread of integrated circuits at 65nm and below.
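One plausible form of such a cell-variability index is sketched below, assuming it is defined as the relative spread of simulated cell delay across contexts and litho conditions; the abstract does not give the exact definition, so this is illustrative only.

```python
# Hedged sketch of a cell-variability index: relative spread of a cell's simulated
# delay across random layout contexts and litho process corners (assumed definition).
import numpy as np

def cell_variability_index(delays_ps):
    """delays_ps: array of shape (n_contexts, n_process_conditions)."""
    d = np.asarray(delays_ps, dtype=float)
    return (d.max() - d.min()) / d.mean()       # assumed: worst-case spread / nominal

# Toy data: 5 random contexts x 3 litho conditions for two hypothetical library cells.
nand2 = [[21.0, 21.4, 22.1], [20.8, 21.2, 21.9], [21.1, 21.6, 22.3],
         [20.9, 21.3, 22.0], [21.2, 21.7, 22.4]]
aoi22 = [[33.0, 35.1, 37.9], [32.4, 34.8, 38.5], [33.5, 35.9, 39.1],
         [32.8, 35.3, 38.7], [33.2, 35.6, 38.9]]
print("CVI nand2:", round(cell_variability_index(nand2), 3))
print("CVI aoi22:", round(cell_variability_index(aoi22), 3))
```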
With increasing chip sizes and shrinking device dimensions, on-chip semiconductor process variation can no longer be
ignored in the design and signoff static timing analysis of integrated circuits. An important parameter affecting CMOS
technologies is the gate length (Lgate) of a transistor. In modern technologies, significant spatial intra-chip variability of
transistor gate lengths, which is systematic as opposed to random, can lead to relatively large variations in circuit path
delays. Spatial variations in Lgate affect circuit timing properties, which can lead to timing errors and performance loss.
To maximize performance and process utilization in microprocessor designs, we have developed and validated a timing
analysis methodology based on accurate silicon contour prediction from drawn layout and contour-based extraction of
our designs. This allows for signoff timing without unnecessarily large margins, thereby reducing chip area and
maximizing performance while ensuring chip functionality, improved process utilization and yield. In this paper, we
describe the chip timing methodology, its validation and implementation in microprocessor design.
This paper applies process and circuit simulation to examine plausible explanations for measured differences in ring
oscillator frequencies and to develop layout and electronic circuit concepts that have increased sensitivity to
lithographic parameters. Existing 90nm ring oscillator test chip measurements are leveraged, and the performance
of the ring oscillator circuit is simulated across the process parameter variation space using HSPICE and the Parametric
Yield Simulator in the Collaborative Platform for DfM. These simulation results are then correlated with measured
ring oscillator frequencies to directly extract the variation in the underlying parameter. Hypersensitive gate layouts
are created by exploiting the physical principles by which the effects of illumination, focus, and pattern geometry
interact. Using these principles and parametric yield simulations, structures that magnify focus effects have been
found. For example, by using a 90° phase-shift probe, parameter-specific layout monitors are shown to be five times
more sensitive to focus than an isolated line. On the design side, NMOS- or PMOS-specific electrical
circuits are designed, implemented, and simulated in HSPICE.
In this paper, we propose a novel method for quantifying the impact of lithography hot spots on chip yield using
lithography simulation. Our method consists of three steps. First, lithography simulation is performed under several
conditions that include process variations, for example in exposure dose and focus. Hot spots are recognized from the
simulation results and their critical dimensions (CDs) are derived. Second, a failure rate is calculated at each hot spot
for each process parameter value. Assuming a distribution of wafer CD around the simulated CD, a differential
failure rate for a process parameter value is calculated by integrating the probability that the wafer CD falls below a
lower limit, and the probability that the process condition equals that parameter value is defined from the distribution
of the process parameter. Finally, the individual failure rate of each hot spot is calculated by summing the products of
the differential failure rates and the probabilities of the process parameter values. The systematic yield is calculated by
multiplying together the complements (one minus the individual failure rate) of all hot spots, provided that the hot spots
are fully independent. An advantage of this method is that the defect probability of each hot spot is calculated
independently of the others, so the systematic yield can be easily estimated regardless of layout size, from primitive
cell to full chip.
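The yield bookkeeping described above can be written compactly as follows, assuming Gaussian distributions for the process parameter (focus in this sketch) and for the wafer CD around its simulated value; function names and numbers are illustrative.

```python
# Sketch under assumed distributions: per-hot-spot failure rate is the process-
# parameter-weighted probability that the wafer CD falls below a lower limit; the
# systematic yield multiplies the complements, treating hot spots as independent.
import numpy as np
from scipy.stats import norm

def hotspot_fail_rate(simulated_cd_nm, cd_lower_limit_nm,
                      focus_grid_um, focus_sigma_um, cd_sigma_nm):
    """Sum over focus values of P(focus) * P(wafer CD < limit | focus)."""
    p_focus = norm.pdf(focus_grid_um, 0.0, focus_sigma_um)
    p_focus /= p_focus.sum()                                # discretized focus distribution
    p_cd_fail = norm.cdf(cd_lower_limit_nm, loc=simulated_cd_nm, scale=cd_sigma_nm)
    return float(np.sum(p_focus * p_cd_fail))

def systematic_yield(fail_rates):
    return float(np.prod(1.0 - np.asarray(fail_rates)))

# Toy example: one hot spot simulated at 5 focus conditions (CD pinches at defocus).
focus = np.array([-0.10, -0.05, 0.0, 0.05, 0.10])           # um
cds = np.array([38.0, 42.0, 45.0, 42.5, 38.5])              # nm, from litho simulation
p1 = hotspot_fail_rate(cds, cd_lower_limit_nm=35.0,
                       focus_grid_um=focus, focus_sigma_um=0.04, cd_sigma_nm=2.0)
print(p1, systematic_yield([p1, 1e-6, 5e-7]))
```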
To ensure the continuation of the scaling of VLSI circuits for years to come, the impact of litho on performance of logic
circuits has to be understood. Using different litho options such as single or double patterning may result in different
process variations. This paper evaluates the impact of litho variations on the yield of SRAM cells. The exploration is
focused on six transistor SRAM cells (6T SRAM) which have to be printed with the highest possible density with good
yield to limit system cost. Consequently, these cells impose the most stringent constraints on litho techniques.
An SRAM cell is yielding if it operates correctly as a memory device (functional yield) and the performance of the
cell is in spec for the chosen architecture (parametric yield). In this paper, different metrics for the stability, readability
and write-ability are used to define parametric yield. The most important litho-induced variations are illumination dose,
focus, overlay mismatch and line-edge roughness. Unwanted opens and shorts in the printed patterns caused by the
process variations will cause the cell to malfunction. These litho-induced variations also cause dimension offsets, i.e.
variations in transistor widths and lengths, which reduce the stability, readability and write-ability of the cell, thereby
increasing parametric yield loss.
Litho simulators are coupled with a device parasitic extractor to simulate the impact of the litho offsets on the yield of
the SRAM cell. Based on these simulations, guidance is provided on the choice between different litho options.
Electrically testable structures (such as serpentines for testing opens and serpentine/combs for testing shorts) with
varying post-OPC dimensions have been incorporated into test reticles, which were then used to process wafers through
electrical test. Process window OPC verification was run on the same structures, thus allowing correlation of electrical
yield to OPC-verification results. By combining OPC verification results with probability of occurrence for the various
process conditions used in OPC verification, a predicted yield can be calculated. Comparisons of electrical yield to
predicted yield are used to demonstrate a methodology for verifying (or setting) failure limits. Although, in general, the
correlation between electrical and predicted yield is reasonable, various issues have been identified which impact this
correlation, and make the task of accurately predicting yield difficult. These issues are discussed in detail in this paper.
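The predicted-yield calculation can be sketched as below, assuming pass/fail flags from process-window OPC verification and an assumed probability of occurrence for each dose/focus condition; the weights and flags are placeholders.

```python
# Illustrative predicted-yield calculation: OPC-verification pass/fail flags per
# structure and process condition, weighted by assumed condition probabilities.
import numpy as np

def predicted_yield(fail_matrix, condition_weights):
    """fail_matrix[i, j] = 1 if structure i fails at process condition j, else 0.
    condition_weights[j] = probability of condition j (sums to 1)."""
    fail = np.asarray(fail_matrix, dtype=float)
    w = np.asarray(condition_weights, dtype=float)
    p_fail_per_structure = fail @ w              # expected failure probability per structure
    return float(np.prod(1.0 - p_fail_per_structure))

# Toy example: 3 serpentine/comb structures x 5 dose-focus conditions.
flags = [[0, 0, 0, 1, 1],      # fails only at the edges of the process window
         [0, 0, 0, 0, 0],
         [0, 0, 1, 1, 1]]
weights = [0.1, 0.2, 0.4, 0.2, 0.1]              # assumed occurrence probabilities
print(predicted_yield(flags, weights))
```

Comparing this number against the measured electrical yield is the correlation exercise the abstract describes.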
The evolution of high-NA projection optics design and manufacturing tolerances has been remarkable in
recent years. Nevertheless, different instances of identical scanner models can still exhibit unique optical
fingerprints which can impart subtle patterning differences for a given mask exposed at nominally identical
conditions on different scanners. In some cases, a product can be shown statistically to yield lower when a
certain layer is exposed on a particular scanner. Thus it is common to have a certain subset of the total
population of tools allowed for certain critical levels, such as gate. Since a single mask is typically shared
between the multiple allowed scanners, the optical proximity correction model which is employed in the
generation of that mask must represent the average fingerprint of those tools. In practice, CD data may be
collected from multiple tools, but more often a single tool is somehow identified as the "golden" tool for
the purpose of calibrating the OPC model. Once the mask is generated, however, its printing behavior on
multiple scanners can readily be simulated using tool-specific optical models. Such models can be easily
generated based upon known optical fingerprint data, such as measured illumination source maps, Jones
pupil or Zernike aberration files. This paper investigates the use of tool-specific optical models to elucidate
the intersection of design and process variability, which will manifest differently on each scanner,
depending upon subtle details of the scanner fingerprint.
As transistor dimensions become smaller, on-wafer transistor dimension variations induced by
lithography or etching processes have a greater impact on the transistor parameters than in earlier process
technologies such as 90 nm and 130 nm. These on-wafer dimension variations are layout dependent
and are ignored in the standard post-layout verification flow, where the transistor parameters in a SPICE
netlist are extracted from drawn transistor dimensions. Commercial software tools exist for predicting
on-wafer transistor dimensions to improve the accuracy of post-layout verification. These tools
need accurate models for on-wafer dimension prediction, and the models need to be
re-calibrated whenever the fabrication process changes. Furthermore, model-based prediction of
on-wafer transistor dimensions requires extensive computing power and can be time consuming.
In this paper, a procedure to back-annotate process-induced transistor dimension changes into the
post-layout extracted netlist using a simple lookup table is described. The lookup table is composed of
specified drawn transistors and their surrounding layout, together with their on-wafer dimensions. The on-wafer
dimensions can be extracted from simulations, in-line SEM pictures or electrical data from specially designed
test keys. Using the lookup table data, the transistor dimensions in the post-layout netlist file
are then modified by a commercial software tool with a pattern-search function. Compared with the
model-based approach, the lookup table approach takes much less time to modify the post-layout netlist.
The lookup table approach is also flexible, since the tables can be easily updated to reflect the most recent
process changes from the foundry.
In summary, a lookup-table-based approach for improving post-layout verification accuracy is
described. The approach can account for both litho and non-litho process variations and has been applied to
Xilinx's 65 nm and 45 nm product developments.
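A much-simplified sketch of the lookup-table back-annotation, assuming a SPICE-like netlist with L=/W= fields in nanometres and a coarse context tag per instance; the table keys, the regular expressions and the helper annotate_netlist are illustrative, not Xilinx's production flow.

```python
# Hypothetical lookup-table back-annotation: drawn W/L plus a coarse context tag key
# into a table of on-wafer W/L, which is written back into the post-layout netlist.
import re

# (drawn_L_nm, drawn_W_nm, context_tag) -> (on_wafer_L_nm, on_wafer_W_nm)
LOOKUP = {
    (45, 200, "dense"): (47.5, 196.0),
    (45, 200, "iso"):   (43.8, 203.0),
}

def annotate_netlist(lines, context_of_instance):
    """Rewrite 'L=' / 'W=' values of MOSFET cards using the lookup table."""
    out = []
    for line in lines:
        m = re.match(r"^(M\S+).*\bL=(\d+)n\b.*\bW=(\d+)n\b", line, re.IGNORECASE)
        if m:
            key = (int(m.group(2)), int(m.group(3)),
                   context_of_instance.get(m.group(1), "iso"))
            if key in LOOKUP:
                new_l, new_w = LOOKUP[key]
                line = re.sub(r"\bL=\d+n\b", f"L={new_l}n", line)
                line = re.sub(r"\bW=\d+n\b", f"W={new_w}n", line)
        out.append(line)
    return out

netlist = ["M1 out in vdd vdd pch L=45n W=200n", "M2 out in gnd gnd nch L=45n W=200n"]
print(annotate_netlist(netlist, {"M1": "dense", "M2": "iso"}))
```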
GDSII is the data format of circuit design files used in semiconductor production, and it is also used as the transfer
format for photomask fabrication. As design rules shrink and RET (Resolution Enhancement Technology) becomes
more complicated, the time needed to convert GDSII to a mask data format has increased, which lengthens the mask
production cycle. Photomask shops all over the world widely use networked computer clusters, i.e. distributed
computing, to reduce the conversion time. Commonly, the computing resources for a conversion job are assigned
based on the input file size; however, our experiments showed that the input file size is a poor predictor of
computing resource usage. In this paper, we propose an artificial intelligence methodology that considers the
properties of the GDSII file in order to handle circuit design files more efficiently. The conversion time can be
optimized by controlling the hardware resources allocated to data conversion, provided the conversion time is
predictable from an analysis of the design data. Neural networks are used to predict the conversion time in this
research. In this paper, the application of neural networks for time prediction is discussed and experimental
results are shown in comparison with statistical model-based approaches.
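A small sketch of the prediction step using scikit-learn, assuming a feature vector of GDSII properties (file size, polygon count, cell count, hierarchy depth, an RET complexity score) and comparing a neural network against a linear statistical baseline; the features and data are invented for illustration.

```python
# Hedged sketch: a small neural-network regressor maps assumed GDSII file properties
# to measured conversion time, compared against a simple linear statistical model.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Assumed features per job: [file size (GB), polygon count (M), cell count (k),
#                            hierarchy depth, RET/OPC complexity score]
X = np.array([[1.2,  80, 12,  6, 2], [3.5, 300, 40,  9, 3], [0.6,  40,  8,  5, 1],
              [5.1, 520, 55, 11, 4], [2.2, 150, 25,  7, 2], [4.0, 410, 60, 10, 4]])
y = np.array([35., 180., 20., 340., 75., 260.])   # measured conversion minutes (toy data)

nn = make_pipeline(StandardScaler(),
                   MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0))
lin = LinearRegression()
nn.fit(X, y); lin.fit(X, y)

job = np.array([[2.8, 220, 30, 8, 3]])
print("NN prediction (min):", float(nn.predict(job)[0]))
print("Linear baseline (min):", float(lin.predict(job)[0]))
```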
Advanced process technologies suffer well-known yield loss due to the degradation of pattern fidelity. The
techniques used to compensate for this problem are advanced resolution enhancement techniques (RET) and optical
proximity correction (OPC). By design, the creation of RET/OPC recipes and the calibration of process models are done
very early in the process development cycle, with data derived not from real designs, since they are not yet available, but
from test structures that represent different sizes, distances and topologies. The process of improving the RET/OPC
recipes and models is long and tedious, yet it is usually a key contributor to a quick production ramp-up, and it is very
coverage-limited by design. The authors present a proposed system that is, by design, dynamic and allows the RET/OPC
production system to reach maturity faster through a detailed collection of hotspots identified at the design stage. The
goal is to reduce the time required to obtain mature production RET/OPC recipes and models.
As LSI feature sizes become smaller, the increase in mask manufacturing cost is becoming critical. The Association of
Super-Advanced Electronics Technologies (ASET) started a 4-year project in 2006, under the sponsorship of the New
Energy and Industrial Technology Development Organization (NEDO), aiming at the reduction of mask manufacturing
cost and TAT through the optimization of MDP, mask writing, and mask inspection [1]. In the project, the
optimization is pursued from the viewpoints of "common data format", "pattern prioritization", "repeating
patterns", and "parallel processing" in MDP, mask writing, and mask inspection. In the total optimization, "repeating
patterns" are applied to mask writing using character projection (CP) and to efficient review in mask inspection. In this
paper, we describe a new method to find repeating patterns in OPCed layout data after fracturing. We found that, using
the new method, repeating patterns can be extracted efficiently even from OPCed layout data, and the shot count for
mask writing decreases greatly.
Lithography compliance check (LCC), which is verification of layouts using lithography simulation, is an essential step
under current low-k1 lithography conditions. However, a conventional LCC scheme does not consider process
proximity effect (PPE) differences among manufacturing tools, especially exposure tools. In this paper two
concepts are proposed. One is PPE monitoring and matching using warmspots. Warmspots are patterns that have a
small process window; they are sensitive to differences in illumination conditions and are basically two-dimensional
patterns. The other is LCC using multiple simulation models that represent the PPE of each exposure tool. All layouts
are verified against these models, and the layouts are fixed if hotspots (catastrophic failures on wafer) are found. This
verification step is repeated until all hotspots are eliminated from the layouts. Based on these concepts, robust cell
layouts that have no hotspot under the several PPE conditions are created.
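The multi-model verification loop can be sketched as follows, with simulate_hotspots and fix_layout standing in for the lithography simulator and the layout-repair step; the interfaces are assumptions, not the authors' actual tool flow.

```python
# A minimal sketch of multi-tool LCC: every layout is checked against one litho model
# per exposure tool and revised until no model reports a hotspot. The callables
# simulate_hotspots(layout, model) and fix_layout(layout, hotspots) are assumed stubs.
def run_multi_tool_lcc(layouts, tool_models, simulate_hotspots, fix_layout,
                       max_iterations=10):
    """Returns layouts that are hotspot-free under every tool-specific PPE model."""
    clean = []
    for layout in layouts:
        for _ in range(max_iterations):
            hotspots = [h for model in tool_models
                        for h in simulate_hotspots(layout, model)]
            if not hotspots:
                break
            layout = fix_layout(layout, hotspots)   # repair, then re-verify all models
        else:
            raise RuntimeError("layout still has hotspots after max_iterations")
        clean.append(layout)
    return clean
```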
With upcoming technology generations, it will become increasingly challenging to provide a good yield and/or yield
ramp. In addition, we observe yield detractors migrating from defects via systematic effects such as litho and CMP to
out-of-spec scenarios, i.e. a slow but continuous migration into an environment typical of analog devices. Preparing for
such scenarios, worldwide activities are under way to extract device parameters not from the drawn layout but from the
resist image or, at best, from etched contours. The litho-aware approach makes it possible to detect devices with high
variability and to reduce the variations on critical paths based on this analysis. In this paper we report the analysis of
MOSFET parameters from printed PC contours of standard cell libraries based on litho simulation (LfD). It is shown
how to extract gate lengths and widths from print images, how to back-annotate the gate parameters into a litho-aware
SPICE netlist, and finally how to analyse the effect of across-chip linewidth variation (ACLV) and process window
influence based on the litho-aware SPICE netlist.
During the design-to-manufacturing tape-out flow, Optical Proximity Correction (OPC) is commonly adopted to correct
the systematic patterning distortions caused by proximity effects, in order to minimize across-gate and across-chip
linewidth variation. With the continued scaling of gate length, the OPC correction scheme inevitably becomes more
aggressive, increasing mask complexity and cost proportionally. This can partly be attributed to the purely
geometry-based OPC algorithm, which tries to match every edge in the layout without considering its actual
impact on circuit performance. There is therefore the possibility of over-corrected OPC masks that bring only slight
improvement in circuit performance at a disproportionately higher cost. To simplify the mask design, we present a
device-performance-based OPC (DPB-OPC) algorithm that generates the mask based on performance-matching criteria
rather than geometrical pattern-matching criteria. The drive current (Ion) and leakage current (Ioff) of the transistor are
chosen as the performance indexes in this DPB-OPC flow. Compared to conventional OPC approaches, our proposed
approach results in a simpler mask that achieves closer circuit performance.
Design rules for logic devices have been defined by technology requirements such as the chip-area shrink rate and the
process capability of lithography and other processes. However, those rules usually cover only minimum pitches or
minimum sizes of simple layouts, such as line-and-space patterns with sufficiently long common run length, no
intermediate corners, no jogs and no asymmetric patterns. Actual chip layouts, on the other hand, include many pattern
variations that often cause trouble in the wafer manufacturing process because of their lower process capability, and
these are often found long after the design rules are fixed. To solve this issue, additional design rules for two-dimensional
patterns, such as line-end to line-end space, are necessary and have been applied in recent design rules. It is hard to
check so many pattern variations experimentally on actual wafers, so checking by lithography simulation in advance is a
very effective way to estimate and fix design rules for these two-dimensional patterns.
To estimate rules accurately, and to minimize the value of each rule for chip-area reduction, OPC and RET must be
included in the estimation, particularly for recent low-k1 lithography. However, OPC and RET are also immature in the
early development phase, when design rules are needed by designers to prepare a test mask for developing the device,
process and some parts of the circuit. In other words, OPC, RET and design rules have to be developed in parallel:
sometimes a new RET is required to achieve a rule, sometimes the design rules must be relaxed, and sometimes new
design rules are required to avoid poor process capability.
In this paper, we propose a parallel development procedure for OPC, RET and design rules, based on the actual
development of a 45nm-node logic device and focused on the metal layer, which has many pattern variations, and we
show how to build competitive design rules by applying the latest OPC and RET technologies.
Design for Manufacturing (DFM) has attracted attention as on-chip feature sizes push the k1 factor below 0.25. Many
DFM-related ideas have been proposed and tried, and some of them adopted, to widen the process window and, as a
result, increase yield. As minimum features shrink, the design rules become more complicated but are still not good
enough to describe the complexity and limitations of certain patterns that impose a narrow process window or even
cause device failure. It has therefore become essential to identify, correct, or remove litho-unfriendly patterns (more
widely known as hot spots) before OPC. One such effort is to write DFM rules in addition to conventional DRC rules.
In this study, we use software called YAM (Yield Analysis Module) to detect hot spots on pre-OPC layouts. A
conventional DRC-based search cannot match YAM, which identifies hot spots far more easily and even finds hot spots
that DRC cannot. We have developed a sophisticated methodology to detect and fix OPC- and/or litho-unfriendly
patterns. It is confirmed to enlarge the process window and increase the degree of freedom in OPC work.
As semiconductor feature sizes continue to shrink, interconnect electrical resistance is becoming one of the industry's dreaded problems. To overcome it, many of the top semiconductor manufacturers have turned their interest to copper processes. The copper process is widely known as a trench-first damascene process, which uses a dark-tone mask instead of the more familiar clear-tone mask. Because dark-tone mask technology is less familiar and less mature than clear-tone technology, many groups have reported patterning defect issues with dark-tone masks. DFM [1] for designs that perform well with both dark and clear tones is therefore strongly needed in the development of copper-process-based devices.
In this study, we propose a process-friendly Design for Manufacturing (DFM) rule for dual-tone masks. The proposed method guides the layout rules so that dark-tone and clear-tone masks generated from the same design layout give the same performance. The method is analyzed in terms of photolithography process-margin factors such as depth of focus (DOF) and exposure latitude (EL) on a sub-50nm Flash memory interconnection layer.
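To make the comparison concrete, the sketch below shows one way DOF and EL could be extracted and compared for the two mask tones from CD(focus, dose) data. The Bossung-like CD surfaces are synthetic placeholders, not measured or simulated results from the paper.

    # Hedged sketch: compare DOF and EL for dark-tone vs clear-tone masks
    # from CD(focus, dose) data.  The CD arrays here are synthetic; in
    # practice they would come from simulation or FEM wafers.
    import numpy as np

    def process_margins(cd, focus_nm, dose_pct, target=50.0, tol=0.10):
        """Return (DOF, EL): focus span at nominal dose and dose span at
        nominal focus for which |CD - target| <= tol*target."""
        ok = np.abs(cd - target) <= tol * target        # boolean pass/fail map
        i_f0 = len(focus_nm) // 2                       # nominal focus index
        i_d0 = len(dose_pct) // 2                       # nominal dose index
        f_pass = focus_nm[ok[:, i_d0]]
        d_pass = dose_pct[ok[i_f0, :]]
        dof = f_pass.max() - f_pass.min() if f_pass.size else 0.0
        el = d_pass.max() - d_pass.min() if d_pass.size else 0.0
        return dof, el

    focus = np.linspace(-150, 150, 13)                  # nm
    dose = np.linspace(-6, 6, 13)                       # % from nominal
    F, D = np.meshgrid(focus, dose, indexing="ij")
    cd_clear = 50 + 0.9 * D - 3.0e-4 * F**2             # toy Bossung-like surfaces
    cd_dark = 50 + 1.1 * D - 5.0e-4 * F**2              # dark tone assumed more sensitive

    for name, cd in [("clear", cd_clear), ("dark", cd_dark)]:
        dof, el = process_margins(cd, focus, dose)
        print(f"{name:5s} tone: DOF ~ {dof:.0f} nm, EL ~ {el:.1f} %")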
As device design rules are scaled down, the need for robust and accurate optical proximity correction (OPC) models increases. To meet this need, we increase image-parameter-space coverage by using a SEM-contour-based OPC model, which provides hundreds or thousands of measurement points from each SEM image. This differs from a traditional calibration data set, in which a 1D or 2D symmetric test pattern contributes only a single CD measurement per pattern.
In SEM-contour-based OPC modeling, it matters which patterns are chosen for model calibration and how the SEM image contours are extracted, since both affect model accuracy.
In this paper, we select the SEM images for contour modeling by analyzing aerial-image intensity variation. By identifying optically sensitive patterns, we build a robust and accurate OPC model across the process window. For contour extraction in this SEM-contour-based OPC modeling, we apply a method from a commercial SEM vendor.
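One plausible reading of the selection step is sketched below: candidate calibration clips are ranked by an aerial-image intensity-variation metric (here, the inverse of the steepest intensity slope along a cutline, a rough proxy for optical sensitivity) and the most sensitive clips are kept. The cutline intensities and the metric itself are illustrative assumptions, not the authors' actual criterion.

    # Hedged sketch: rank candidate calibration sites by aerial-image
    # intensity variation along a cutline and keep the most sensitive ones
    # for SEM-contour model calibration.  The cutlines below are synthetic.
    import numpy as np

    def sensitivity_metric(intensity):
        """A lower intensity slope at the printing threshold implies higher
        sensitivity; use the inverse of the maximum |dI/dx| as a proxy."""
        slope = np.max(np.abs(np.gradient(intensity)))
        return 1.0 / slope if slope > 0 else np.inf

    rng = np.random.default_rng(0)
    candidates = {f"clip_{i}": np.sin(np.linspace(0, 2 * np.pi, 64) * (1 + i))
                  * (1.0 - 0.1 * i) + 0.05 * rng.standard_normal(64)
                  for i in range(6)}

    ranked = sorted(candidates, key=lambda k: sensitivity_metric(candidates[k]),
                    reverse=True)
    print("most optically sensitive clips first:", ranked[:3])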
We constructed a hot spot management flow based on a die-to-database inspection system that delivers both high hot spot extraction accuracy and short development turn-around time (TAT) in low-k1 lithography. The die-to-database inspection system, NGR-2100, has features well suited to full-chip inspection within a reasonable operating time. The system provided higher hot spot extraction accuracy than a conventional optical inspection tool, and the hot spots it extracted covered all killer hot spots identified by electrical and physical analysis. In addition, the new hot spot extraction methodology employing the die-to-database inspection system shortens development TAT by two to four months. In an application to 65nm-node CMOS, we verified yield improvement with the new hot spot management flow, and the die-to-database inspection system demonstrated excellent interlayer hot spot extraction from the viewpoint of LSI fabrication.
Lithography simulation has proven to be a technical enabler to shorten development cycle time and provide direction
before next-generation exposure tools and processes are available. At the early stages of design rule definition for a new
technology node, small critical areas of layout are of concern, and optical proximity correction (OPC) is required to
allow full exploration of the 2D rule space. In this paper, we demonstrate the utility of fast, resist-model-based, OPC
correction to explore process options and optimize 2D layout rules for advanced technologies. Unlike conventional OPC
models that rely on extensive empirical CD-SEM measurements of real wafers, the resist-based OPC model for the
correction is generated using measured bulk parameters of the photoresist such as dissolution rate. The model therefore
provides extremely accurate analysis capability well in advance of access to advanced exposure tools. We apply this
'virtual patterning' approach to refine lithography tool settings and OPC strategies for a collection of 32-nm-node layout
clips. Different OPC decorations, including line biasing, serifs, and assist features, are investigated as a function of NA
and illumination conditions using script-based optimization sequences. Best process conditions are identified based on
optimal process window for a given set of random layouts. Simulation results, including resist profile and CD process
window, are validated by comparison to wafer images generated on an older-generation exposure tool. The ability to
quickly optimize OPC as a function of illumination setting in a single simulation package allows determination of
optimum illumination source for random layouts faster and more accurately than what has been achievable in the past.
This approach greatly accelerates design rule determination.
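The script-based optimization mentioned above can be pictured as a simple sweep-and-rank loop, as in the sketch below; evaluate_process_window() is a hypothetical stand-in for the resist-model-based virtual patterning run, and the candidate NA and sigma values are purely illustrative.

    # Hedged sketch of a script-based optimization sequence: sweep NA and
    # annular illumination settings, score each with a process-window
    # metric, and keep the best setting.
    import itertools

    def evaluate_process_window(na, sigma_out, sigma_in):
        """Hypothetical merit function (larger is better); a real run would
        return an overlapping DOF x EL metric from resist-model simulation."""
        return (na - 0.9) * 10 + (sigma_out - sigma_in) * 5 + sigma_out

    settings = itertools.product([1.20, 1.30, 1.35],        # NA
                                 [0.80, 0.90, 0.97],        # sigma_out
                                 [0.55, 0.65, 0.75])        # sigma_in
    valid = [(na, so, si) for na, so, si in settings if so > si]
    best = max(valid, key=lambda s: evaluate_process_window(*s))
    print("best (NA, sigma_out, sigma_in):", best)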
A new robust Process Window Qualification (PWQ) technique for systematic defect characterization, aimed at enlarging the lithographic process window, is described using a die-to-database verification tool (NGR2100).
As the semiconductor industry moves to the 45nm node and beyond, the tolerable lithography process window shrinks significantly due to the combined use of high NA and low k1. This is exacerbated by the fact that the usable depth of focus for critical layers at the 45nm node is 200nm or less. Traditional optical proximity correction (OPC) only computes the pattern layout that optimizes printing at the nominal process condition (nominal defocus and nominal exposure dose), according to an OPC model calibrated at that nominal condition; this can leave the post-OPC layout at non-negligible patterning risk under the inevitable process variation (defocus and dose variations). With a small sacrifice at the nominal condition, process-variation-aware OPC can greatly enhance the robustness of post-OPC layout patterning in the presence of defocus and dose variation. There is also an increasing demand for through-process-window lithography verification of post-OPC circuit layouts. The cornerstone of successful process-variation-aware OPC and lithography verification is an accurately calibrated continuous process window model, i.e., a model that is a continuous function of defocus and dose and can interpolate and extrapolate within the usable process window. Based on Synopsys' OPC modeling software package ProGen, we developed and implemented a novel methodology for a continuous process window (PW) model with two continuously adjustable process parameters: defocus and dose. The continuous PW model was calibrated in a single calibration run using silicon measurements at the nominal condition and at off-focus/off-dose conditions sparsely sampled from the full measured focus-exposure matrix (FEM). The silicon data at off-focus/off-dose conditions not used for calibration were used to validate the accuracy and stability of the PW model under interpolation and extrapolation. We demonstrate that this continuous PW modeling approach achieves very good performance both at the nominal condition and at interpolated or extrapolated off-focus/off-dose conditions.
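The calibrate-on-sparse-FEM, validate-on-held-out idea can be illustrated with a toy surrogate, sketched below: a low-order polynomial CD(defocus, dose) surface is fitted to a sparse subset of synthetic FEM points and checked against the remaining points. This is not ProGen's model form; it only shows the workflow.

    # Hedged sketch: fit a continuous CD(defocus, dose) surface from sparsely
    # sampled FEM points and validate on held-out off-focus/off-dose points.
    # All FEM data below are synthetic.
    import numpy as np

    def design(f, d):
        """Low-order polynomial basis in defocus f and dose d."""
        return np.column_stack([np.ones_like(f), f, d, f * f, f * d, d * d, f * f * d])

    rng = np.random.default_rng(1)
    f_all = rng.uniform(-120, 120, 200)                   # nm defocus
    d_all = rng.uniform(-5, 5, 200)                       # % dose offset
    cd_all = (45 - 3.0e-4 * f_all**2 + 1.1 * d_all
              + 2.0e-5 * f_all**2 * d_all + rng.normal(0, 0.3, 200))

    cal = rng.choice(200, 40, replace=False)              # sparse calibration subset
    val = np.setdiff1d(np.arange(200), cal)               # held-out validation points

    coef, *_ = np.linalg.lstsq(design(f_all[cal], d_all[cal]), cd_all[cal], rcond=None)
    pred = design(f_all[val], d_all[val]) @ coef
    rms = np.sqrt(np.mean((pred - cd_all[val]) ** 2))
    print(f"held-out RMS CD error: {rms:.2f} nm")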
Timely process characterization is crucial in design for manufacturing. Scatterometry, as a powerful metrology tool, can be extended to optical-system characterization. In this paper, we show how scatterometry can be used in conjunction with an array of dual-pitch or dual-bar gratings to measure optical aberrations. Multiple pattern designs are presented and compared. A linear model is used to describe the relation between aberrations and the measurable quantities. First-principles simulation results show that the current approach can simultaneously measure several Zernike coefficients with an accuracy of ~2 mλ.
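The linear-model idea can be sketched as follows: the measured quantities m (e.g., responses of the dual-pitch or dual-bar grating targets) are modeled as m = S z, with z the Zernike coefficients and S a sensitivity matrix obtained from first-principles simulation; z is then recovered by least squares. The numbers below are synthetic and only illustrate the inversion, not the paper's sensitivity matrix.

    # Hedged sketch: recover Zernike coefficients from scatterometry
    # measurables via a linear model m = S z and least squares.
    import numpy as np

    rng = np.random.default_rng(2)
    n_meas, n_zern = 12, 5                          # 12 grating targets, 5 Zernike terms
    S = rng.normal(0, 1.0, (n_meas, n_zern))        # simulated sensitivity matrix (toy)
    z_true = np.array([3.0, -2.0, 1.5, 0.5, -1.0])  # mlambda
    m = S @ z_true + rng.normal(0, 0.2, n_meas)     # noisy measurables

    z_hat, *_ = np.linalg.lstsq(S, m, rcond=None)
    print("recovered Zernike coefficients (mlambda):", np.round(z_hat, 2))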
As a preliminary step towards model-based process-window OPC, we have analyzed the impact of correcting post-OPC layouts using rules-based methods. Image processing on the Brion Tachyon was used to identify sites where the OPC model/recipe failed to generate an acceptable solution. A set of rules for 65nm active and poly was generated by classifying these failure sites; the rules were based on segment runlengths, figure spaces, and adjacent figure widths. Comparing the layouts before and after the rules-based operations, 2.1 million sites were corrected on the active layer of a small chip, and 59 million were found at poly. Tachyon analysis of the final reticle layout found weak-margin sites distinct from those repaired by the rules-based corrections. For the active layer, more than 75% of the sites corrected by rules would have printed without a defect, indicating that most rules-based cleanups degrade the lithographic pattern. Some sites were missed by the rules-based cleanups due to either bugs in the DRC software or gaps in the rules table. In the end, dramatic changes to the reticle prevented catastrophic lithography errors, but this method is far too blunt. A more subtle, model-based procedure is needed that changes only those sites with unsatisfactory lithographic margin.
Over the past several years, choosing the best resolution enhancement technique (RET) has become more and more difficult. The RET implementation team faces an ever-increasing number of variables to optimize, and for a given node there are now more layers designated as critical pattern layers requiring RET. As design rules become more aggressive, and scanners gain more process parameters such as polarization and focus drilling, RET must be optimized across a larger number of variables than before. Sorting through the best combination of all the available process parameters could require the number of wafer experiments to grow exponentially. Rigorous, physics-based computational lithography is well suited to executing this large number of experiments virtually, dramatically culling the number of physical wafer experiments required for verification. Ideally, a first-pass RET selection should be made as early as possible in the technology cycle, well before the equipment is available. Traditional OPC tools, which require wafer process data to set up, are not suited to this task, since they can only be used after the equipment has been installed and a stable, established process exists. Rigorous physical and chemical models, such as those found in PROLITH, are better suited to early RET selection and optimization, but the Windows platform on which PROLITH runs is computationally too slow for the massive number of calculations required. In this study, we focus on the RET selection process for a set of "typical" critical test patterns, using KLA-Tencor's other rigorous, physics-based computational lithography tool, LithoWare. LithoWare combines the accuracy of rigorous physical and chemical models with the computational power of distributed computing on Linux. We examine the use of cluster computing to optimize the illuminators using model-based OPC and process window analysis for critical contact hole (CH) patterns. We use the results to propose a comprehensive RET selection strategy to meet the user requirements of 45nm and 32nm development.
The 22nm logic technology node, with dimensions of ~32nm, will be the first node to require some form of pitch-halving. A unique combination of a Producer APF(R)-based process sequence and a GDR-based design style permits implementation of random logic functions with regular layout patterns. The APF (Advanced Patterning Film) pitch-halving approach is a classic self-aligned double patterning (SADP) scheme [1,2,3,4], which involves creating CVD dielectric spacers on an APF sacrificial template and using the spacers as a hardmask for line-frequency doubling. The Tela CanvaTM approach implements Gridded Design Rules (GDR) using straight lines placed on a regular grid; logic functions can be implemented using lines at half-pitch with gaps at selected locations.
As design rules shrink, the goal for model-based OPC/RET schemes is to minimize the discrepancy between the
intended pattern and the printed pattern, particularly among 2D structures. Errors in the OPC design often result from
insufficient model calibration across the parameter space of the imaging system and the focus-exposure process
window. Full-chip simulations can enable early detection of hotspots caused by OPC/RET errors, but often these OPC
model simulations have calibration limitations that result in undetected critical hotspots which limit the process window
and yield. Also, as manufacturing processes are improved to drive yield enhancement, and are transferred to new
facilities, the lithography tools and processes may differ from the original process used for OPC/RET model calibration
conditions, potentially creating new types of hotspots in the patterned layer.
In this work, we examine the predictive performance of rigorous physics-based 193 nm resist models in terms of
portability and extrapolative accuracy. To test portability, the performance of a physical model calibrated using 1D data
from a development facility will be quantified using 1D and 2D hotspot data generated at a different manufacturing
facility with a production attenuated-PSM lithography process at k1 < 0.4. To test extrapolative accuracy, a similar test
will be conducted using data generated at the manufacturing facility with illumination conditions which differ
significantly from the original calibration conditions. Simulations of post-OPC process windows will be used to
demonstrate application of calibrated physics-based resist models in hotspot characterization and mitigation.
As technologies scale, the impact of process variations on circuit performance and power consumption becomes increasingly significant. To improve the efficiency of statistical circuit optimization, a better understanding of the relationship between circuit variability and process variation is needed. Our work proposes a hierarchical variability model that addresses both systematic and random variations at the wafer, field, die, and device levels, with spatial correlation captured implicitly. Layout-dependent effects are incorporated as an additive component. The model is verified against 90nm ring-oscillator measurement data and can be used for variability prediction and optimization.
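One way to write such a hierarchical, additive decomposition is sketched below; the notation and the independence assumption on the random terms are ours, not taken from the paper.

    \Delta P \;=\; \underbrace{\mu_{\mathrm{wafer}} + \mu_{\mathrm{field}} + \mu_{\mathrm{die}}}_{\text{systematic}}
    \;+\; \underbrace{\varepsilon_{\mathrm{wafer}} + \varepsilon_{\mathrm{field}} + \varepsilon_{\mathrm{die}} + \varepsilon_{\mathrm{dev}}}_{\text{random}}
    \;+\; \Delta P_{\mathrm{layout}},
    \qquad
    \sigma^{2}_{\mathrm{rand}} \;=\; \sigma^{2}_{\mathrm{wafer}} + \sigma^{2}_{\mathrm{field}} + \sigma^{2}_{\mathrm{die}} + \sigma^{2}_{\mathrm{dev}}

where the variance sum holds if the random terms at the different levels are independent.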
At low-k1 lithography with strong off-axis illumination, it is very hard to achieve the edge-placement tolerances and 2D image-fidelity requirements for some layout configurations. Quite often these layouts satisfy the simple design rule constraints of a given technology node, so it is important to include them during early RET flow development. Simple shrinkage from the previous technology node is common, although often not sufficient. For logic designs, it is hard to control design styles; moreover, engineers in fabless design groups find it difficult to assess the manufacturability of their layouts without a detailed understanding of the litho process.
Assist features (AF) are frequently placed according to pre-determined rules to improve the lithography process window. These rules are usually derived from lithographic models, and direct validation of AF rules is required in the development phase. To ensure good printability through the process window, process-aware optical proximity correction (OPC) recipes were developed. Generally, rules-based correction is performed before model-based correction. Furthermore, an advanced-technology OPC recipe has many other options and parameters, making it difficult to optimize recipe performance holistically with all these variables in mind.
In this paper, we demonstrate the application of layout design of experiments (DOE) in RET flow development. Layout pattern libraries are generated using the Synopsys Test Pattern Generator (STPG), which is embedded in a layout tool (ICWB). Assessment gauges are generated together with the patterns for quick assessment of correction accuracy, and through-process OPC verification is also deployed. Several groups of test pattern libraries for different applications are developed, ranging from simple 1D patterns for process capability studies and the setting of process-aware parameters, to full sets of patterns for assessing rules-based correction, line-end and corner interactions, active-to-poly interactions, and critical patterns for contact coverage.
Restrictive design rules (RDR) are commonly deployed to eliminate problematic layouts. We demonstrate RDR evaluation and validation using our layout DOE approach. The layout DOE technique also offers a simple yet effective way to verify AF placement rules: for a given nominal layout feature, all possible assist features are generated within the mask rule constraints using STPG, OPC is run, and the main-feature critical dimension (CD) is assessed at the best and worst process conditions in ICWB. The best AF placement rules are derived from the minimum CD difference. The rules derived with this approach are not the same as those derived with the commonly used method of least intensity variation.
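The AF-rule validation loop just described can be pictured schematically as below; legal() and cd_range() are hypothetical stand-ins for the mask-rule check and the OPC-plus-process-window simulation, and all dimensions are illustrative.

    # Hedged sketch: enumerate candidate assist-feature placements that
    # satisfy the mask rules, evaluate each with a (hypothetical) OPC +
    # best/worst-condition simulation, and keep the placement with the
    # smallest CD difference.
    import itertools

    MIN_AF_TO_MAIN, MIN_AF_TO_AF = 60, 50            # nm, illustrative mask rules

    def legal(placement):
        """Placement = tuple of AF offsets (nm) from the main feature edge."""
        if any(off < MIN_AF_TO_MAIN for off in placement):
            return False
        gaps = [b - a for a, b in zip(placement, placement[1:])]
        return all(g >= MIN_AF_TO_AF for g in gaps)

    def cd_range(placement):
        """Hypothetical stand-in for OPC + simulation: returns
        |CD_best - CD_worst| in nm for the main feature."""
        return abs(8.0 - 0.02 * sum(placement)
                   + 0.00002 * sum(o * o for o in placement))

    candidates = [p for n in (1, 2)
                  for p in itertools.combinations(range(60, 201, 20), n)
                  if legal(p)]
    best = min(candidates, key=cd_range)
    print("best AF offsets (nm):", best, "-> CD range", round(cd_range(best), 2), "nm")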
The impact of gate line edge roughness (LER) on the performance variability of 32nm double-gate (DG) FinFETs is
investigated using a framework that links device performance to commonly used LER descriptors, namely correlation
length (ξ), RMS amplitude or standard deviation (σ) of the line edge from its mean value, and roughness exponent (α).
This modeling approach is more efficient than Monte-Carlo TCAD simulations, and provides comparable results with
appropriately selected input parameters. The FinFET device architecture is found to be robust to gate LER effects.
Additionally, a spacer-defined gate electrode provides for dramatically reduced variability in device performance
compared to a resist-defined gate electrode, which indicates that gate-length mismatch contributes more to variability in
performance than lateral offset between the front and the back gate.
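For context, a commonly used autocorrelation model for a self-affine rough edge that is consistent with these three descriptors (our statement of the standard form, not a result of the paper) is

    R(\Delta x) \;=\; \sigma^{2}\exp\!\left[-\left(\frac{|\Delta x|}{\xi}\right)^{2\alpha}\right]

where σ is the RMS edge deviation, ξ the correlation length, and α the roughness exponent; α = 0.5 corresponds to an exponential correlation and α = 1 to a Gaussian correlation.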
As photomask critical dimensions shrink significantly below the exposure wavelength and the angle of off-axis illumination increases, the Kirchhoff thin-mask approximation can no longer capture the diffraction and polarization effects that occur at a topographic mask surface. Such approximation errors result in inaccurate models and poor predictions in image simulation, which can waste time and money during the lithographic process development cycle. The real effects of a thick mask can be simulated using finite-difference time-domain (FDTD) electromagnetic (EM) field calculations, or approximated with smaller error using techniques such as boundary-layer models or various Fourier-based methods.
Litho-aware design methodology is key to enabling aggressive scaling to future technology nodes. A boundary-based methodology for cellwise OPC has been proposed to account for the influence of features in neighboring cells. As technology advances toward 32 and 22nm, more columns of features are needed as representative environments for boundary-based cellwise OPC. In this paper, we propose a new method that combines fill insertion with boundary-based cellwise OPC to reduce the mask data size as well as the prohibitive runtime of full-chip OPC, making cell characterization more predictable. To keep the number of cell OPC solutions manageable, we present a methodology that uses dummy fill insertion both inside and outside cells, solving the issue for technologies beyond 45nm. Experimental results show a solid 30% improvement in average and maximum edge placement error (EPE) over previous work.
Aerial image simulation of interdigitated sidewall capacitor layouts and extraction of feature changes are used to estimate the parametric performance spread of DC metal-oxide-metal (MOM) mixed-signal capacitors as a function of the normalized lithographic resolution k1. Since minimum feature sizes are used, the variation of the MOM capacitors is attributed to lithographic spacing. In this paper, k1 values of 0.8, 0.56, 0.40, and 0.28 are studied. The DC capacitance shows a worst-case variability of 42%. Line-end shortening is a small fractional change in finger length and proves not to be a critical factor in variability; spacing width proves to be the main source of variability in DC capacitance. Different annular illumination settings are explored to mitigate the variability in spacing width. Co-design of the pitch and illumination shows that, for each k1, there is an optimal annular illumination radius. The optimal set of sigmas (i.e., sigma_in and sigma_out) can limit the variability in linewidths and spacing widths to 20%.
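A first-order parallel-plate estimate (ours, not from the paper) makes the dominance of spacing plausible:

    C \;\approx\; \frac{N\,\varepsilon\,t\,L}{s}
    \qquad\Rightarrow\qquad
    \frac{\Delta C}{C} \;\approx\; \frac{\Delta L}{L} \;-\; \frac{\Delta s}{s}

where N is the number of finger gaps, t the metal thickness, L the finger overlap length, and s the finger spacing. Because L is much larger than s at minimum pitch, a lithographic deviation of a few nanometers is a far larger fraction of s than of L, which is consistent with spacing width dominating the capacitance variability as k1 decreases.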
Double patterning technology (DPT) is one of the most practical candidates for 45nm half-pitch and beyond, while conventional single exposure (SE) with hyper-NA remains dominant because it avoids DPT difficulties such as split conflicts and overlay issues. However, small target dimensions with hyper-NA and strong illumination make OPC difficult, reduce lithographic latitude, and demand photomasks fabricated to very tight specifications for SE. A double patterning (DP) approach may therefore be attractive even at resolutions achievable by SE.
In this paper, DP at SE-achievable resolution is evaluated in terms of lithography performance, pattern decomposition, photomask fabrication, and inspection load.
Because DP doubles the pattern pitch relative to SE, lithographic metrics such as the mask error enhancement factor (MEEF) are relaxed, and a lower MEEF means looser specifications for photomask fabrication.
Using Synopsys DPT software, no software-induced conflicts occur and stitching is handled so as to minimize its impact; the software also detects split conflicts, such as triangular or square placements arising from contact spacing.
To estimate the photomask inspection load, programmed defect patterns and circuit patterns on binary masks were prepared. The smaller MEEF reduces the impact of defects on printing, which is confirmed by AIMS evaluation. The inspection results show little difference in defect sensitivity (for dense features only) and little difference in false defect counts between SE and DP at lower NA. If a higher NA is used, however, DP's inspection sensitivity can be relaxed, so the inspection load for DP would be lighter than for SE.
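For reference, a commonly used definition of MEEF (our statement of the standard definition, not taken from the paper) is, with mask dimensions expressed at wafer scale,

    \mathrm{MEEF} \;=\; \frac{\partial\, CD_{\mathrm{wafer}}}{\partial\, CD_{\mathrm{mask}}}

so a mask CD error of \Delta CD_{\mathrm{mask}} prints as roughly \mathrm{MEEF}\times\Delta CD_{\mathrm{mask}} on the wafer. Because each DP exposure sees a doubled pitch, its MEEF is lower, and the same wafer CD error budget translates into a looser mask CD specification than for SE.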
During the early stages of memory device development, the photolithography engineer provides a lithography-friendly layout to the designer and assists in developing the design rules. In most cases, the lithographer relies on the accuracy of a lithography simulator to generate guidelines and/or modifications for the designer, which may be sufficient for a cell-only design. Even for cell-only designs, it is increasingly difficult to perform this task as chip designs shrink. For random pattern designs in the core and periphery regions, a more rapid method of evaluating the layout is needed, which in turn requires a calibrated proximity model. If calibration data are available, a layout can be OPCed and verified to detect weak spots; however, calibration data may not be available during the early design stage. In this paper, a method of obtaining a lithography model without calibration data is presented. First, illumination source optimization is performed on the specific patterns to minimize critical dimension variation. Using the resulting illumination condition, an optical model is used to determine first-level layout weak spots that are most critical for a specific layer type, based on image-quality analysis. At this point, one may choose to perform OPC using the optical model and analyze the process margin. A further question is whether such a model can bypass the need for an OPC'd layout when verifying the layout.
Delays in equipment availability for both Extreme UV and High index immersion have led to a growing
interest in double patterning as a suitable solution for the 22nm logic node. Double patterning involves
decomposing a layout into two masking layers that are printed and etched separately so as to provide the
intrinsic manufacturability of a previous lithography node with the pitch reduction of a more aggressive
node. Most 2D designs cannot be blindly shrunk to run automatically on a double patterning process, and so
designers need a set of guidelines for how to lay out for this type of flow. While certain classes of
layout can be clearly identified and avoided based on short range interactions, compliance issues can also
extend over large areas of the design and are hard to recognize. This means certain design practices should
be implemented to provide suitable breaks or performed with layout tools that are double patterning
compliance aware. The most striking class of compliance errors results in layout on one of the masks that is at
the minimum design space rather than the relaxed space intended. Another equally important class of
compliance errors is that related to marginal printability, be it poor wafer overlap and/or poor process
window (depth of focus, dose latitude, MEEF, overlay). When decomposing a layout, the tool is often
presented with multiple options for where to cut the design thereby defining an area of overlap between the
different printed layers. While these overlap areas can have markedly different topologies (for instance the
overlap may occur on a straight edge or at a right angled corner), quantifying the quality of a given overlap
ensures that more robust decomposition solutions can be chosen over less robust solutions. Layouts which
cannot be decomposed or which can only be decomposed with poor manufacturability need to be
highlighted to the designer, ideally with indications on how best to resolve this issue. This paper uses an
internally developed automated double pattern decomposition tool to investigate design compliance and
describes a number of classes of non-conforming layout. The tool's results then help the designer achieve robust, design-compliant layout.
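The underlying decomposability check is essentially a graph two-coloring problem; the sketch below (our illustration, not the internal tool) builds a conflict graph and reports a conflict whenever an odd cycle prevents splitting the layout onto two masks.

    # Hedged sketch: nodes are layout features, edges connect features closer
    # than the single-mask minimum space; 2-colorability = decomposability.
    from collections import deque

    def two_color(n_nodes, conflict_edges):
        """Return a list of mask assignments (0/1) or None if not 2-colorable."""
        adj = [[] for _ in range(n_nodes)]
        for a, b in conflict_edges:
            adj[a].append(b)
            adj[b].append(a)
        color = [None] * n_nodes
        for start in range(n_nodes):
            if color[start] is not None:
                continue
            color[start] = 0
            queue = deque([start])
            while queue:
                u = queue.popleft()
                for v in adj[u]:
                    if color[v] is None:
                        color[v] = 1 - color[u]    # alternate masks along the edge
                        queue.append(v)
                    elif color[v] == color[u]:     # odd cycle -> decomposition conflict
                        return None
        return color

    print(two_color(4, [(0, 1), (1, 2), (2, 3)]))  # chain: decomposable
    print(two_color(3, [(0, 1), (1, 2), (2, 0)]))  # triangle: conflict -> None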
Design rules and the design rule check (DRC) utility are conventional approaches to design for manufacturability
(DFM). The DRC utility is based on unsophisticated rules to check the design layout in a simple environment. As the
design dimension shrinks drastically, the introduction of a more powerful DFM utility with model-based layout
patterning check (LPC) becomes mandatory for designers to filter process weak-points before taping out layouts. In this
paper, a system of integrated hotspot scores consisting of three lithography-sensitivity indexes is proposed to help designers circumvent layout patterns that are risky for lithography. With the hotspot-fixing guideline and the hotspot severity classification deduced from this scoring system, designers can deliver much more manufacturable designs.
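One way such a scoring system could combine three sensitivity indexes is sketched below; the particular indexes, weights, normalizations, and severity thresholds are illustrative assumptions, since the abstract does not define them.

    # Hedged sketch: combine three lithography-sensitivity indexes into a
    # single hotspot score with a severity class.  Indexes, weights, and
    # thresholds are illustrative, not the authors' definitions.
    def hotspot_score(nils, pw_margin_nm, mask_error_sensitivity,
                      weights=(0.5, 0.3, 0.2)):
        # Normalize each index so that 0 = safe and 1 = very risky.
        risk = (max(0.0, 1.0 - nils / 2.0),              # low image log-slope is risky
                max(0.0, 1.0 - pw_margin_nm / 100.0),    # small focus margin is risky
                min(1.0, mask_error_sensitivity / 5.0))  # high MEEF-like term is risky
        score = sum(w * r for w, r in zip(weights, risk))
        severity = "high" if score > 0.6 else "medium" if score > 0.3 else "low"
        return score, severity

    print(hotspot_score(nils=1.2, pw_margin_nm=40, mask_error_sensitivity=4.0))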
It is well known that, as design rules shrink, advanced techniques are required in order to print the design intent on the wafer precisely and controllably. The commonly used techniques for overcoming the resolution limit are OPC and RET. The goal of these techniques is to compensate for the expected local interaction between the light, the mask pattern, and the photoresist, which would otherwise result in a mismatch between the printed pattern and the design intent and lead to fatal yield failures. It is this interaction that dominates the extensive, time-consuming mask qualification that fabs must perform before a new mask for a new product can be inserted into a production line.
In this paper, we present a new approach and a Litho Qualification Monitor (LQM) system, implemented in the Qimonda Dresden fab, for ultra-fast pattern failure classification based on design information (Design-Based Binning), coupled with an automatic interface to a SEM metrology tool. The system centralizes all the operations required for the identification and analysis of marginally printed systematic structures.
An automatic system that combines actions in both the image domain as well as in the layout-database domain for
accurate mask-defect analysis and application of design criticality will be presented. In this paper we will emphasize the
qualification and calibration of the system and its various pieces of functionality with the use of programmed defect
masks and low-voltage mask CD-SEM measurement data. Results on 1D and 2D programmed defects of various natures
are reported in dense layout as well as in real memory design layout. The results show that the system can accurately extract mask CD errors and defect sizes at nanometer resolution, far below the pixel size of state-of-the-art mask-defect inspection tools.
We will further demonstrate that mask-defect-inspection data can contain optical anomalies when defect or residual
feature sizes are smaller than the inspection wavelength. Mask inspection images then no longer show the real defect.
These anomalies can be analyzed with the system using advanced image actions.
Finally, we will demonstrate the capability to calculate the effects that defects have on final wafer printability even
without the need for input layout. Hence, model-based defect properties can be combined with rule-based defect
properties as well as multi-layer, design-based criticality-region properties for full flexibility in defect disposition.
Existing optical proximity correction tools aim at minimizing edge placement errors (EPE) due to the optical and resist
process by moving mask edges. However, in low-k1 lithography, especially at 45nm and beyond, printing perfect
polygons is practically impossible to achieve in addition to incurring prohibitively high mask complexity and cost. Given
the impossibility of perfect printing, we argue that aiming to reduce the electrical discrepancy between the ideal and the printed contours is a more reasonable strategy. In fact, we show that contours with non-minimal EPE may match the desired electrical performance more closely.
Towards achieving this objective, we developed a new electrically driven OPC (ED-OPC) algorithm. The tool combines
lithography simulation with an accurate contour-based model of shape electrical behavior to predict the on/off current
through a transistor gate. The algorithm then guides edge movements to minimize the error between the currents of the printed and target shapes, rather than the edge placement error. The results on industrial 45nm SOI layouts using
high-NA immersion lithography models show up to a 5% improvement in accuracy of timing over conventional OPC,
while at the same time showing up to 50% reduction in mask complexity for gate regions. The results confirm that better
timing accuracy can be achieved despite larger edge placement error.
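The edge-movement loop can be caricatured as below: a coordinate-descent adjustment of per-slice gate biases that minimizes the error in predicted gate current rather than edge placement. printed_widths() and gate_current() are crude illustrative stand-ins for the lithography contour model and the contour-based current model, not the ED-OPC implementation itself. Note that the converged biases need not give uniform slice widths: matching the current can tolerate non-minimal edge placement error, which is exactly the point the abstract makes.

    # Hedged sketch of the electrically driven idea: adjust per-slice gate
    # biases to minimize |I(printed) - I(target)| instead of edge placement error.
    def printed_widths(biases, drawn_nm=45.0):
        """Toy litho model: printed width per slice = drawn + 0.8*bias - 2nm loss."""
        return [drawn_nm + 0.8 * b - 2.0 for b in biases]

    def gate_current(widths_nm):
        """Toy slice model: narrower slices carry disproportionately less current."""
        return sum(w ** 1.1 for w in widths_nm)

    def ed_opc(n_slices=5, step_nm=0.5, iters=200):
        target_i = gate_current([45.0] * n_slices)            # current of the ideal gate
        biases = [0.0] * n_slices
        for _ in range(iters):                                # coordinate descent on |dI|
            for k in range(n_slices):
                best = min((abs(gate_current(printed_widths(
                            biases[:k] + [biases[k] + d] + biases[k+1:])) - target_i), d)
                           for d in (-step_nm, 0.0, step_nm))
                biases[k] += best[1]
        err = gate_current(printed_widths(biases)) - target_i
        return biases, err

    biases, err = ed_opc()
    print("per-slice biases (nm):", [round(b, 2) for b in biases],
          "residual dI:", round(err, 3))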