Silicon-based Complementary Metal Oxide Semiconductor (CMOS) image sensors are widely used for visible-range detection systems. However, for near-infrared (NIR) applications such as face recognition or Augmented/Virtual Reality (AR/VR), these sensors are much less efficient, because Silicon absorbs poorly at such wavelengths. A well-known solution, studied in depth over the past few years, consists of etching diffractive structures into the Silicon: the incoming light is diffracted inside the photodiode, increasing the optical path and thus improving the Quantum Efficiency (QE) of the pixel. In return, however, the Modulation Transfer Function (MTF) of the sensor is degraded, because more light crosses from one pixel to its neighbors and is eventually absorbed in the wrong pixel, an optical crosstalk effect. Here, using Finite Difference Time Domain (FDTD) simulations of the same Slanted Edge method as used in MTF characterization, we evaluate a new methodology to simulate the MTF of the sensor. We compared simulated results with characterization results on actual pixels in several distinct configurations: sensors without any diffractive structures, and sensors with various structures designed to influence the MTF preferentially in one direction (horizontal or vertical) at 940 nm. We demonstrate good agreement between simulations and characterizations, with highly correlated tendencies across the whole studied set, giving the methodology predictive power on the MTF for future innovative pixel designs.
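The Slanted Edge method mentioned above derives the MTF from an oversampled edge profile: the Edge Spread Function is differentiated into a Line Spread Function, whose Fourier transform magnitude (normalized at DC) gives the MTF. The sketch below illustrates this chain on a synthetic edge; the sample spacing, edge width, and windowing choice are illustrative assumptions, not values from the study.

```python
import numpy as np

def slanted_edge_mtf(esf, sample_um):
    """MTF from an oversampled Edge Spread Function (ESF) sampled every sample_um."""
    lsf = np.gradient(esf)                 # Line Spread Function = d(ESF)/dx
    lsf = lsf * np.hanning(lsf.size)       # window to reduce truncation ripple
    mtf = np.abs(np.fft.rfft(lsf))
    mtf = mtf / mtf[0]                     # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size, d=sample_um)  # cycles per micrometer
    return freqs, mtf

# Synthetic smooth edge standing in for a measured, crosstalk-blurred profile
x = np.linspace(-8.0, 8.0, 256)
esf = 0.5 * (1.0 + np.tanh(x / 1.5))
freqs, mtf = slanted_edge_mtf(esf, sample_um=0.8)  # e.g. 3.2 um pitch, 4x oversampled
```

In practice the ESF is built from many pixel rows crossing a slightly tilted edge (the ISO 12233 procedure), which is what provides the oversampling assumed here.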
Next-generation BSI CMOS Image Sensors are strongly driven by novel applications in depth sensing, mainly operating in the NIR (940 nm) spectrum. As a result, the need for higher pixel sensitivity at ever-smaller pixel pitch is more pressing than ever. In this work, we present a new technology platform based on ad-hoc nano-diffractor geometries integrated in the back side of BSI CIS, which drastically improves the QE of the sensor for pitches from 10 μm down to 2.2 μm, co-optimized for both optical and electronic pixel performance.
Due to their low-cost fabrication process and high efficiency, silicon-based Complementary Metal Oxide Semiconductor (CMOS) image sensors are the reference for detection in the visible range. However, their optical performance is severely degraded in the Near Infrared (NIR): at such wavelengths, Silicon has a small absorption coefficient, leading to a very poor Quantum Efficiency (QE). One solution is to etch structures such as pyramids into the Silicon layer, which diffract the light inside the photodiode, lengthening the optical path and therefore enhancing absorption. Using Finite Difference Time Domain (FDTD) simulations, we demonstrated a large QE enhancement at 940 nm on real pixels implementing such diffractive structures, and we confirmed these results by characterization. We obtained QE values up to 47% at 940 nm for our 3.2 μm pixel, a gain of 2 compared to a pixel without any diffractive structures. We also measured the Modulation Transfer Function (MTF) to evaluate how this figure of merit is impacted by the addition of these structures. As expected, the MTF was degraded by the diffractive patterns, but its values remained high: up to 0.55 at half-Nyquist frequency and 0.35 at Nyquist frequency. Considering both the QE and the MTF values, these are very promising results for many NIR applications such as face recognition, Light Detection and Ranging (LIDAR), or AR/VR.
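Why a longer optical path roughly doubles the QE can be seen from a first-order Beer-Lambert estimate. The numbers below are illustrative assumptions, not data from the paper: an approximate literature-order absorption coefficient for Si at 940 nm and a hypothetical 3x path-length enhancement from diffraction.

```python
import numpy as np

# Approximate absorption coefficient of Si at 940 nm (order 10^2 /cm; the
# exact value depends on temperature and doping) - an assumed round number.
alpha_cm = 150.0
alpha_um = alpha_cm * 1e-4   # convert to per micrometer

def absorbed_fraction(path_um):
    """Beer-Lambert fraction of light absorbed along a given optical path."""
    return 1.0 - np.exp(-alpha_um * path_um)

single_pass = absorbed_fraction(3.0)  # straight pass through a ~3 um photodiode
diffracted = absorbed_fraction(9.0)   # hypothetical 3x folded path after diffraction
```

Since the single-pass absorbed fraction is only a few percent, the absorbed fraction grows almost linearly with path length, which is why folding the light inside the photodiode pays off directly in QE.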
The current CMOS image sensor market demands good image resolution at small package size and low cost; the CMOS image sensor roadmap is therefore driven by pixel size reduction while maintaining good electro-optical performance. As both diffraction and electrical effects grow in importance, a simulation tool that can support process and design development of next-generation pixels at an early stage is mandatory.
We have previously introduced and developed FDTD-based optical simulation methodologies to describe diffraction phenomena. We have recently coupled them to an electrical simulation tool to take into account carrier diffusion and precise front-end process simulation. In this paper, we present the advances of this methodology.
After detailing the complete methodology, we present how we reconstruct the spectral quantum efficiency of a pixel. This methodology requires computationally heavy, realistic 3D modeling for each wavelength: the material optical properties are described over the full spectral bandwidth by a multi-coefficient model, while the electrical properties are set by the given process and design. We optically simulate the propagation of a dozen wavelengths at normal incidence, collect the distribution of the optical generation, insert it into the electrical simulation tool, and collect the final output quantum efficiency.
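The optical-to-electrical coupling loop described above can be sketched as follows. Note that `run_fdtd` and `run_electrical` are placeholders standing in for the actual FDTD and drift-diffusion solvers used in the work; here they return toy analytic profiles so the control flow is runnable.

```python
import numpy as np

wavelengths_nm = np.linspace(400.0, 940.0, 12)  # about a dozen wavelengths
depth_um = 3.0
z = np.linspace(0.0, depth_um, 64)              # depth axis inside the photodiode

def run_fdtd(wl_nm):
    # Placeholder: would return the 3D optical generation map from a
    # normal-incidence plane-wave FDTD run; here, a toy 1D exponential
    # profile whose absorption weakens with wavelength, as in Si.
    alpha = 1.0 / (0.1 + 0.01 * wl_nm)  # toy absorption coefficient, /um
    return alpha * np.exp(-alpha * z)

def run_electrical(generation):
    # Placeholder: would inject the generation map into the electrical
    # solver and return the collected-carrier QE; here, simple integration.
    dz = depth_um / (z.size - 1)
    return float(np.sum(generation) * dz)

qe_spectrum = [run_electrical(run_fdtd(wl)) for wl in wavelengths_nm]
```

The key point is the one-way data flow per wavelength: optical generation out of the FDTD run, into the electrical solver, one QE sample out.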
In addition, we compare off-axis performance evaluations of a pixel by simulating its relative illumination at a given wavelength. In this methodology, several plane waves are propagated with different angles of incidence along a specific direction.
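A relative-illumination sweep of this kind amounts to normalizing each angled-incidence response to the normal-incidence one. In the sketch below, `simulate_response` is a placeholder for the per-angle FDTD run; the cosine-power roll-off is only a stand-in response, and the angle range is an assumption.

```python
import numpy as np

angles_deg = np.arange(0, 35, 5)  # chief-ray angles swept along one direction

def simulate_response(theta_deg):
    # Placeholder for the simulated pixel signal at this angle of incidence;
    # a cos^4-like roll-off mimics the typical off-axis falloff.
    return np.cos(np.radians(theta_deg)) ** 4

relative_illumination = [simulate_response(a) / simulate_response(0)
                         for a in angles_deg]
```

Plotting `relative_illumination` against `angles_deg` then gives the off-axis figure of merit directly.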
In this paper, we present the results of rigorous electromagnetic broadband simulations applied to CMOS image sensors, as well as experimental measurements. We first compare the results of 1D, 2D, and 3D broadband simulations in the visible range (380 nm-720 nm) of a 1.75 μm CMOS image sensor, emphasizing the limitations of 1D and 2D simulations and the need for 3D modeling, particularly to rigorously simulate parameters such as Quantum Efficiency. We then illustrate broadband simulations through two proposed solutions that improve the spectral response of the sensor: an antireflective coating and a reduction of the optical stack. We finally show that the experimental measurements agree with the simulated results.
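For the antireflective coating mentioned above, the textbook single-layer design point is a quarter-wave film whose index is the geometric mean of the surrounding media. The indices below are assumed round values for illustration, not the materials used in the sensor.

```python
# Single-layer quarter-wave antireflective coating: ideal index is
# sqrt(n_air * n_si), and the thickness is lambda / (4 * n_coating).
n_air, n_si = 1.0, 4.0               # assumed index of Si near 550 nm
n_coating = (n_air * n_si) ** 0.5    # geometric mean -> 2.0, nitride-like
wavelength_nm = 550.0
t_nm = wavelength_nm / (4.0 * n_coating)
```

At this thickness the reflections from the top and bottom of the film interfere destructively at the design wavelength, which is what boosts the spectral response.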
This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking diffraction effects into account.
Following market trends and industrialization constraints, CMOS image sensors must fit into ever smaller packages, which now include auto-focus and, soon, zoom systems. Because of this miniaturization, the ray-tracing models previously used to evaluate pixel optical performance no longer describe the light propagation inside the sensor accurately, owing to diffraction effects. We therefore adopt a more fundamental description that captures these diffraction effects: we model the propagation of light directly with Maxwell's equations and solve them with a software engine based on the Finite Difference Time Domain (FDTD) method.
We present in this article the complete methodology of this modeling: on the one hand, incoherent plane waves are propagated to approximate a diffuse-like source representative of product use; on the other hand, periodic boundary conditions are used to limit the size of the simulated model, along with both memory usage and computation time. After presenting the correlation of the model with measurements, we illustrate its use on the optimization of a 1.75 μm pixel.
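The incoherent-source approximation mentioned above means that independent plane-wave runs are combined by summing their *intensities* rather than their fields, discarding run-to-run interference. The sketch below illustrates the distinction on random complex fields standing in for per-run FDTD outputs; the array sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
# 100 independent plane-wave runs, each yielding a complex field on 16 points
fields = rng.standard_normal((100, 16)) + 1j * rng.standard_normal((100, 16))

# Coherent combination: sum the fields first, then take the intensity
# (interference between runs is kept).
coherent = np.abs(fields.sum(axis=0)) ** 2

# Incoherent combination: take each run's intensity first, then sum
# (interference between runs is discarded) - the diffuse-source model.
incoherent = (np.abs(fields) ** 2).sum(axis=0)
```

The incoherent sum is smooth and speckle-free, which is why it approximates a diffuse product-use illumination better than a single coherent run.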