This PDF file contains the front matter associated with SPIE Proceedings Volume 9022 including the Title Page, Copyright information, Table of Contents, Introduction, and Conference Committee listing.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access. Shibboleth/OpenAthens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera offers high time resolution, it requires a high-voltage supply and a bulky system because of its vacuum-tube structure. The proposed time-resolved imager, combined with simple optics, realizes a streak camera without any vacuum tube. The proposed image sensor consists of DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with in-pixel logic, generates and delivers a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel was designed and implemented in a 0.11 μm CMOS image sensor technology. The image array has 30 (vertical) × 128 (memory length) pixels with a pixel pitch of 22.4 μm.
In this paper, we demonstrate the technologies related to a pixel structure achieving a full charge transfer time of less than 10 ns for a 20M-frame-per-second burst CMOS image sensor. In this image sensor, the photodiode (PD) size is 30.0 μm(H) × 21.3 μm(V) in a 32.0 μm(H) × 32.0 μm(V) pixel. Within the pixel, the floating diffusion (FD) and the transfer gate electrode (TG) are placed at the bottom center of the PD. The n-layer of the PD consists of semicircular regions centered on the FD and sector-shaped portions extending from the edges of the semicircular regions. To generate an electric field greater than 400 V/cm on average toward the FD over the entire PD region, the n-layer width of the sector-shaped portions narrows from the proximal end to the distal end. Using this PD structure, which combines the above n-layer shape with a PD dopant profile formed by three n-type dopant implantations, we collected 96% of the charges generated in the PD at the FD within 10 ns. An ultra-high-speed CMOS image sensor with this pixel structure has been fabricated. Through experiments, we confirmed three key characteristics: the image lag was below the measurement limit, the electron transit time in the PD was less than 10 ns, and the entire PD region had equivalent sensitivity.
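As a rough consistency check on the sub-10 ns transfer claim, the drift-limited transit time across the PD can be estimated from the quoted 400 V/cm average field. The sketch below is a back-of-the-envelope model; the low-field electron mobility of silicon (≈1350 cm²/V·s) is an assumed textbook value, not taken from the paper:

```python
MU_N = 1350.0  # low-field electron mobility in Si, cm^2/(V*s) -- assumed value

def drift_time_ns(length_um, field_V_per_cm):
    """Drift-dominated transit time across a PD region of the given length."""
    v = MU_N * field_V_per_cm          # drift velocity, cm/s
    return length_um * 1e-4 / v * 1e9  # path in cm, result in ns

# ~30 um PD at the 400 V/cm average field quoted in the abstract:
print(round(drift_time_ns(30.0, 400.0), 2))  # 5.56
```

The resulting ≈5.6 ns is comfortably within the 10 ns budget, consistent with the 96% collection figure reported above.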
Time-delay integration (TDI) is a popular imaging technique used in many applications such as machine vision, dental scanning, and satellite earth observation. One of the main advantages of TDI imagers is the increased effective integration time achieved while maintaining high frame rates. TDI imagers are also used with moving objects, such as the earth's surface or industrial machine vision applications, where integration time is limited in order to avoid motion blur. The technique may even find its way into mobile and consumer imaging applications, where the reduction in pixel size can limit performance in low-light and high-speed conditions. Until recently, TDI was used only with charge-coupled devices (CCDs), mainly because of their charge transfer characteristics. CCDs, however, are power-hungry and slow compared with CMOS technology and are no longer favorable for mobile applications. In this work, we report on a novel single-photon-counting TDI technique implemented in standard CMOS technology, allowing for a complete camera-on-a-chip solution. The imager was fabricated in a standard 150 nm 5-metal digital CMOS process from LFoundry.
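In digital (photon-counting) TDI, each stage counts photons from the same scene line as the image sweeps past, and the per-stage counts are simply summed — no analog charge transfer is involved. A minimal sketch of that accumulation (function and array names are ours, not from the paper):

```python
import numpy as np

def tdi_accumulate(line_counts):
    """Sum per-stage photon counts for a digital TDI line.

    line_counts: (n_stages, n_columns) array of counts registered by each
    TDI stage while the scene moved one row per line period. Digital TDI
    accumulation is an integer sum, so there is no transfer loss.
    """
    return np.asarray(line_counts).sum(axis=0)

# One scene line seen by 4 stages: the signal is boosted 4x,
# while photon shot noise only grows as sqrt(4).
stages = np.array([[2, 5], [3, 4], [2, 6], [3, 5]])
print(tdi_accumulate(stages))  # [10 20]
```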
In this paper, ultra-high-speed (UHS) video captures of time-dependent dielectric breakdown (TDDB) of MOS capacitors, taken with a UHS camera with a maximum frame rate of 10M frames per second (fps), are reported. In order to capture the breakdown, we set up a trigger circuit that detects the rapid current increase through the MOS capacitor. Some movies successfully captured intermittent light emissions at several points on the gate during the breakdown. From the movies taken at 100K to 1M fps, the distribution centers of the light emission time and the period were 10 sec and 30 μsec, respectively. From the movies taken at 10M fps, the light emission time and the period were less than 10 μsec. The random failure mode has a higher percentage of single light emissions than the wear-out failure mode, indicating a correlation between the light emission mode and the TDDB failure mode.
An innovative smart image sensor architecture based on event-driven asynchronous operation is presented in this paper. The proposed architecture is designed to control the sensor data flow by extracting only the relevant information from the image sensor and suppressing spatial and temporal redundancy in the video stream. We believe that this data flow reduction leads to a reduction in system power consumption, which is essential in mobile devices.
In this first proposal, we present the new pixel behaviour as well as the new asynchronous readout architecture. Simulations in both Matlab and VHDL were performed to validate the proposed pixel behaviour and the reading protocol. The simulation results met our expectations and confirmed the suggested ideas.
Most image sensors mimic film, integrating light during an exposure interval and then reading the "latent" image as a complete frame. In contrast, frameless image capture attempts to construct a continuous waveform for each sensel describing how the Ev (exposure value required at each sensel) changes over time. This is done using an array of on-sensor nanocontrollers, each independently and asynchronously sampling its sensel to interpolate a smooth waveform. Still images are computationally extracted after capture using the average value of each sensel’s waveform over the desired interval. Thus, image frames can be extracted to represent any interval(s) within the captured period. Because the extraction of a frame is done using waveforms that are continuous time-varying functions, an Ev estimate is always available, even if a particular sensel was not actually sampled during the desired interval. The result is HDR (high dynamic range) with a low and directly controllable Ev noise level. This paper describes our work toward building a frameless imaging sensor using nanocontrollers, basic processing of time domain continuous image data, and the expected benefits and problems.
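The frame-extraction step described above reduces, per sensel, to averaging a continuous waveform over the chosen interval. A minimal sketch assuming a piecewise-linear waveform between the asynchronous samples (the nanocontrollers' actual interpolation may be smoother):

```python
import numpy as np

def extract_frame_value(times, evs, t0, t1, n=1000):
    """Mean Ev of one sensel's interpolated waveform over [t0, t1].

    times/evs: asynchronous sample instants and Ev estimates for the
    sensel; the continuous waveform is taken as piecewise linear.
    The frame value is defined even if no sample fell inside [t0, t1].
    """
    t = np.linspace(t0, t1, n)
    w = np.interp(t, times, evs)  # piecewise-linear waveform reconstruction
    return w.mean()               # time average over the extraction interval

# Samples at t=0 and t=2 only; a "frame" over [0.5, 1.5] is still defined.
val = extract_frame_value([0.0, 2.0], [10.0, 14.0], 0.5, 1.5)
print(round(val, 3))  # 12.0
```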
In computer vision, local descriptors make it possible to summarize relevant visual cues in feature vectors. These vectors constitute inputs for trained classifiers, which in turn enable different high-level vision tasks. While local descriptors certainly alleviate the computational load of subsequent processing stages by preventing them from handling raw images, they still have to deal with individual pixels. Feature vector extraction can thus become a major limitation for conventional embedded vision hardware. In this paper, we present a power-efficient sensing-processing array conceived to compute integral images at different scales. These images are intermediate representations that speed up feature extraction. In particular, the mixed-signal array operation is tailored to the extraction of Haar-like features, which feed the cascade of classifiers at the core of the Viola-Jones framework. The processing lattice has been designed for the standard UMC 0.18 μm 1P6M CMOS process. In addition to integral image computation, the array can be reprogrammed to deliver other early vision tasks: concurrent rectangular area sum, block-wise HDR imaging, Gaussian pyramids, and image pre-warping for subsequent reduced-kernel filtering.
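The integral image at the heart of this array makes any rectangular (Haar-like) area sum a four-lookup operation, regardless of rectangle size. A standard software reference for what the mixed-signal lattice computes:

```python
import numpy as np

def integral_image(img):
    """Cumulative row+column sums with a zero row/column prepended,
    so that ii[r, c] is the sum of img[:r, :c]."""
    ii = np.cumsum(np.cumsum(np.asarray(img, dtype=np.int64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four integral-image lookups."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

img = np.arange(16).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 3, 3))  # 30  (5 + 6 + 9 + 10)
```

A Haar-like feature is then just a signed combination of two or more such rectangle sums.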
This paper presents an architecture and achievable performance for a time-to-digital converter for 3D time-of-flight cameras. The design is partitioned into two levels. In the first level, an analog time expansion, in which the time interval to be measured is stretched by a factor k, is achieved by charging a capacitor with a current I and then discharging it with a current I/k. In the second level, the final time-to-digital conversion is performed by a global gated-ring-oscillator-based time-to-digital converter. Performance can be increased by exploiting its intrinsic scrambling of quantization noise and mismatch error, and its first-order noise shaping. The stretched time interval is measured by counting full clock cycles and storing the states of nine phases of the gated ring oscillator. The frequency of the gated ring oscillator is approximately 131 MHz, and an appropriate stretch factor k can give a resolution of ≈57 ps. The combined low nonlinearity of the time stretcher and the gated-ring-oscillator-based time-to-digital converter can achieve a distance resolution of a few centimeters with low power consumption and small area occupation. The carefully optimized circuit configuration, achieved by using an edge aligner, the time amplification property, and the gated-ring-oscillator-based time-to-digital converter, may lead to a compact, low-power single-photon configuration for 3D time-of-flight cameras aimed at a measurement range of 10 meters.
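The quoted ≈57 ps resolution follows directly from the GRO parameters once a stretch factor is chosen. With the reported 131 MHz and nine phases, k = 15 (our assumption — the abstract only says "an appropriate stretch factor") reproduces it:

```python
def tdc_lsb(f_gro_hz, n_phases, stretch_k):
    """Effective LSB of a stretched-interval GRO TDC.

    The GRO resolves one of n_phases phase states per clock period, so its
    raw LSB is 1/(f*n); stretching the input interval by k before
    conversion divides the effective LSB by k.
    """
    return 1.0 / (f_gro_hz * n_phases * stretch_k)

# Reported 131 MHz, 9 phases; k = 15 is an assumed value:
lsb = tdc_lsb(131e6, 9, 15)
print(f"{lsb * 1e12:.1f} ps")  # 56.5 ps, consistent with the quoted ~57 ps
```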
We have developed a CMOS image sensor with 33 million pixels and 120 frames per second (fps) for Super Hi-Vision (SHV, the 8K version of UHDTV). Fixed-pattern noise (FPN) in CMOS image sensors can be reduced by digital correlated double sampling (digital CDS), but digital CDS methods need high-speed analog-to-digital conversion and are not applicable to conventional UHDTV image sensors because of their speed limit. Our image sensor, on the other hand, has a very fast analog-to-digital converter (ADC) using a “two-stage cyclic ADC” architecture that can be driven at 120 fps, double the normal frame rate for TV. In this experiment, we performed digital CDS using the high-frame-rate UHDTV image sensor. By reading the same row twice at 120 fps and subtracting dark pixel signals from accumulated pixel signals, we obtained a 60-fps equivalent video signal with digital noise reduction. The results showed that the VFPN was effectively reduced from 24.25 e-rms to 0.43 e-rms.
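The digital CDS step is a per-row subtraction of a dark read from a signal read, which cancels any offset common to both reads. A toy numerical model of the idea (illustrative numbers only, loosely echoing the 24.25 → 0.43 e-rms figures; not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(0)

cols = 256
fpn = rng.normal(0.0, 24.25, size=cols)         # fixed-pattern offsets
signal = np.full(cols, 100.0)                   # accumulated pixel signal

sig_read = signal + fpn + rng.normal(0.0, 0.3, cols)  # row read with signal
dark_read = fpn + rng.normal(0.0, 0.3, cols)          # same row read dark
cds = sig_read - dark_read                            # digital CDS: FPN cancels

# Residual error after CDS is only the (small) temporal read noise:
print(np.std(sig_read - signal) > 5 * np.std(cds - signal))  # True
```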
Most time-of-flight (ToF) cameras have a 2-tap pixel structure for demodulating near-infrared (NIR) light reflected from objects. In order to eliminate the asymmetry between the two taps in the pixel, a ToF camera needs an additional measurement that collects photo-generated electrons from the reflected NIR with the phase of the clock signals to the transfer gates inverted. This asymmetry removal needs additional frame memories and suppresses the frame rate because of the additional timing budget. In this paper, we propose a novel asymmetry-removal scheme without timing or area overheads, employing 2×2-shared 2-tap pixels with cross-connected transfer gates. The 2-tap pixel is shared with neighboring pixels, and the transfer gates in the pixel are cross-connected between the upper and lower pixels. In order to verify the proposed pixel architecture, the electron charge generated in the floating diffusion is simulated. We then calculate the depth from the camera to objects using the simulated electron charge and measure the linearity of the depth. Simulation results show that the proposed pixel architecture yields a more linear depth response than the conventional pixel structure over the real distance of objects.
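For context, the textbook 2-tap pulsed ToF relation converts the two tap charges into a delay fraction and hence a depth. This is the generic relation, not necessarily the exact demodulation scheme of the proposed pixel:

```python
C = 299_792_458.0  # speed of light, m/s

def two_tap_depth(qa, qb, t_pulse):
    """Pulsed 2-tap ToF depth from tap charges qa, qb.

    qa integrates light arriving while the emitted pulse is 'on', qb while
    it is 'off'; the charge ratio encodes the round-trip delay as a
    fraction of the pulse width t_pulse (ambient assumed already
    subtracted). Any tap asymmetry directly distorts this ratio, which is
    why asymmetry removal matters for depth linearity.
    """
    return 0.5 * C * t_pulse * qb / (qa + qb)

# 30 ns pulse, charges split 3:1 -> delay = t_pulse/4 -> ~1.12 m
d = two_tap_depth(300.0, 100.0, 30e-9)
print(round(d, 3))  # 1.124
```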
A high-speed lateral electric field modulator (LEFM) and lock-in pixel amplifiers for a stimulated Raman scattering (SRS) imager are presented. Since the signal generated by the SRS process is very small compared to the offset signal, a technique suitable for extracting and amplifying the SRS signal is needed. The offset can be canceled by tuning the phase delay between the demodulated pixel output signal and the sampling clock. The small SRS signal on the large offset is amplified by differential integration. The proposed technique has been investigated with an implementation of a 64×8 pixel array using a pinned-photodiode LEFM and lock-in pixel amplifiers. A very small signal can be extracted from a large offset signal: a ratio of the detected small SRS signal to the offset signal of less than 10⁻⁵ is achieved.
An approach for dark signal correction is presented that uses a model of each pixel's dark signal, which depends on the sensor's settings (integration time and gain) and its temperature. It is shown how one can improve the outcome of such a dark signal correction strategy by using the dark signal of some pixels to compute an estimate of the sensor's temperature. Experimental results indicate that the dark signal's dependency on temperature and gain is more complex than considered in current dark signal models. In this paper it is shown how one can cope with this complex behaviour when estimating the temperature from the dark signal. Experimental results indicate that our method yields better results than using the measurements of dedicated temperature sensors.
We statistically evaluated the effective time constants of random telegraph noise (RTN) for various operation timings of in-pixel source follower transistors, and discuss the dependency of the RTN time constants on the duty ratio (on/off ratio) of the MOSFET, which is controlled by the gate-to-source voltage (VGS). Under a general readout operation of a CMOS image sensor (CIS), the row-selected pixel source followers (SFs) turn on, while unselected pixel SFs operate at different bias conditions depending on the select switch position; when the select switch is located between the SF driver and the column output line, the SF drivers nearly turn off. The duty ratio and cyclic period of the SF driver's selected time depend on the operation timing determined by the column readout sequence. By changing the duty ratio from 1 to 7.6 × 10⁻³, the RTN time-constant ratio (time to capture ⟨τc⟩)/(time to emission ⟨τe⟩) of a portion of the MOSFETs increased, while the RTN amplitudes remained almost the same regardless of the duty ratio. In these MOSFETs, as the duty ratio decreased, ⟨τc⟩ increased, the majority of ⟨τe⟩ decreased, and a minority of ⟨τe⟩ increased. The same behavior of ⟨τc⟩ and ⟨τe⟩ was obtained when VGS was decreased. This indicates that the effective ⟨τc⟩ and ⟨τe⟩ converge to their off-state values as the duty ratio decreases. These results are important for noise reduction and for the detection and analysis of in-pixel SFs with RTN.
Based on our extensive research on in-field defect development in digital imagers, we were able to determine that “Hot Pixels” are the main source of defects and their numbers increase at a nearly constant temporal rate (during the camera’s lifetime) and they are randomly distributed spatially (over the camera sensor). Using dark frame analysis, we concluded that a hot pixel can be characterized by a linear function of the exposure time to describe the defect’s dark response. Assuming that the hot pixel behavior remains constant after formation, we present in this paper a novel method for correcting the damage to images caused by a hot pixel, based on estimating its dark response parameters. We performed experiments on a camera with 28 known hot pixels allowing us to compare our correction algorithm to the conventional correction done by interpolating the four neighbors of the defective pixel. We claim that the correction method used should depend on the severity of the hot pixel, on the exposure time, on the ISO, and on the variability of the pixel’s neighbors in the specific image we are correcting. Furthermore, we discuss our new findings in hot pixel behavior that limit the accuracy of defect correction algorithms.
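The linear dark-response model described above can be fitted per hot pixel from a handful of dark frames and then subtracted at correction time. A minimal sketch with hypothetical calibration data (all numbers invented for illustration):

```python
def fit_dark_response(exposures, dark_values):
    """Least-squares fit of the linear hot-pixel dark model d = m*t + b."""
    n = len(exposures)
    sx, sy = sum(exposures), sum(dark_values)
    sxx = sum(t * t for t in exposures)
    sxy = sum(t * d for t, d in zip(exposures, dark_values))
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b

def correct_hot_pixel(raw, exposure, m, b):
    """Subtract the estimated dark response from the raw pixel value."""
    return raw - (m * exposure + b)

# Dark-frame readings of one hot pixel at several exposure times (hypothetical):
m, b = fit_dark_response([0.1, 0.5, 1.0, 2.0], [12.0, 52.0, 102.0, 202.0])
print(round(m), round(b))                            # 100 2
print(round(correct_hot_pixel(130.0, 1.0, m, b), 3))  # 28.0
```

As the paper argues, such model-based subtraction can be blended with neighbor interpolation depending on defect severity, exposure, ISO, and local image variability.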
We compare the noise performance of two optimized readout chains based on 4T pixels and featuring the same bandwidth of 265 kHz (enough to read 1 Mpixel at 50 frames/s). Both chains contain a 4T pixel, a column amplifier, and a single-slope analog-to-digital converter performing CDS. In one case the pixel operates in source follower configuration, and in the other in common source configuration. Based on analytical noise calculations for both readout chains, an optimization methodology is presented. The analytical results are confirmed by transient simulations using a 130 nm process. A total input-referred noise below 0.4 electrons RMS is reached for a simulated conversion gain of 160 μV/e−. Both optimized readout chains show the same input-referred 1/f noise. The common-source-based readout chain shows better thermal-noise performance and requires a smaller silicon area. We discuss the possible drawbacks of the common source configuration and provide the reader with a comparative table of the two readout chains, covering several variants (column amplifier gain, in-pixel transistor sizes and type).
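Referring output noise to the input is a one-line conversion through the conversion gain; this is how the 0.4 e− RMS figure relates to voltage noise at the 160 μV/e− conversion gain (the 64 μV value below is our worked example, not a number from the abstract):

```python
def input_referred_electrons(noise_uV_rms, conv_gain_uV_per_e):
    """Refer output-voltage noise to the input in electrons RMS."""
    return noise_uV_rms / conv_gain_uV_per_e

# 64 uV RMS at the sense node with a 160 uV/e- conversion gain:
print(input_referred_electrons(64.0, 160.0))  # 0.4
```

A higher conversion gain therefore relaxes the voltage-noise budget of every downstream stage — the lever both optimized chains rely on.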
The aim of this article is to guide image sensor designers in optimizing the analog-to-digital conversion of pixel outputs. The most common ADC topologies for image sensors are presented and discussed. The ADC requirements specific to these sensors are analyzed and quantified. Finally, we present relevant recent contributions on ADCs for image sensors and compare them using a novel FOM.
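As a baseline for such comparisons, the classic Walden figure of merit expresses energy per effective conversion step (the article proposes its own FOM, which we do not reproduce here; the numbers below are illustrative):

```python
def walden_fom(power_w, enob, f_s):
    """Classic Walden FOM: conversion energy per effective step, in joules.

    power_w: ADC power consumption (W)
    enob:    effective number of bits
    f_s:     sampling rate (samples/s)
    """
    return power_w / ((2 ** enob) * f_s)

# A 5 mW ADC with 10 effective bits at 100 MS/s:
print(round(walden_fom(5e-3, 10, 100e6) * 1e15, 1), "fJ/step")  # 48.8 fJ/step
```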
We have been working on developing an image sensor with three stacked organic photoconductive films (OPFs)
sensitive to only one primary color component (red—R, green—G, or blue—B); each OPF has a signal readout circuit.
This type of stacked sensor is advantageous for the manufacture of compact color cameras with high-quality pictures,
since color separation systems, such as prisms or color filter arrays, are eliminated because of the color selectivity of
OPFs. To achieve a high-resolution stacked sensor, its total thickness should be reduced to less than 10 μm. In this study,
we fabricated a color image sensor with R and G-sensitive OPFs by applying amorphous In-Ga-Zn-O thin-film transistor
(TFT) readout circuits. A 10 μm-thick interlayer insulator separated the R and G-sensitive layers. The entire fabrication
process for the device was implemented below 150°C to avoid damaging the OPFs. Output signals were successfully
read from each OPF through the TFT circuit, and multi-color images were reproduced from the fabricated sensor.
The next generation of multispectral sensors and cameras will need to deliver significant improvements in size, weight,
portability, and spectral band customization to support widespread commercial deployment. The benefits of
multispectral imaging are well established for applications including machine vision, biomedical, authentication, and
aerial remote sensing environments – but many OEM solutions require more compact, robust, and cost-effective
production cameras to realize these benefits. A novel implementation uses micro-patterning of dichroic filters into Bayer
and custom mosaics, enabling true real-time multispectral imaging with simultaneous multi-band image acquisition.
Consistent with color camera image processing, individual spectral channels are de-mosaiced with each channel
providing an image of the field of view. We demonstrate recent results of 4-9 band dichroic filter arrays in multispectral
cameras using a variety of sensors including linear, area, silicon, and InGaAs. Specific implementations range from
hybrid RGB + NIR sensors to custom sensors with application-specific VIS, NIR, and SWIR spectral bands. Benefits
and tradeoffs of multispectral sensors using dichroic filter arrays are compared with alternative approaches – including
their passivity, spectral range, customization options, and development path. Finally, we report on the wafer-level
fabrication of dichroic filter arrays on imaging sensors for scalable production of multispectral sensors and cameras.
In this paper, we demonstrate two types of new photodiode arrays (PDAs) with fast readout speed and high stability under ultraviolet (UV) light exposure. One is a high-full-well-capacity sensor specialized for absorption spectroscopy; the other is a high-sensitivity sensor for emission spectroscopy. By introducing multiple readout paths along the long side of the rectangular PD, both PDAs achieve more than 150 times faster readout than a general PDA structure with a single readout path along the short side of the PD. By introducing a photodiode (PD) structure with a thin, steep-profile p+ dopant layer formed on a flattened Si surface, higher stability of the light sensitivity under UV exposure was confirmed compared with the general PD structure of conventional PDAs.
We have developed and evaluated a high-responsivity, low-dark-leakage CMOS image sensor with a ring-gate shared-pixel design. The ring-gate shared-pixel design with a high fill factor makes low-light imaging possible. By eliminating the shallow trench isolation in the proposed pixel, the dark leakage current is significantly decreased because one of the major dark leakage sources is removed. By sharing the in-pixel transistors — the reset transistor, select transistor, and source follower amplifier — each pixel has a high fill factor of 43% and a high sensitivity of 144.6 ke-/lx·sec. In addition, the effective number of transistors per pixel is 1.75. The proposed imager achieved a relatively low dark leakage current of about 104.5 e-/s (median at 60°C), corresponding to a dark current density Jdark_proposed of about 30 pA/cm2. In contrast, the conventional test pixel has a large dark leakage current of 2450 e-/s (median at 60°C), corresponding to a Jdark_conventional of about 700 pA/cm2. Both pixels have the same pixel size of 7.5×7.5 μm2 and are fabricated in the same process.
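The quoted dark current densities follow from the leakage rates and the 7.5 μm pitch by a unit conversion, which can be checked directly (assuming the full pixel pitch as the collecting area):

```python
Q_E = 1.602176634e-19  # elementary charge, C

def dark_current_density_pA_cm2(e_per_s, pitch_um):
    """Convert a per-pixel dark leakage rate (e-/s) to pA/cm^2."""
    area_cm2 = (pitch_um * 1e-4) ** 2   # square pixel of the given pitch
    return e_per_s * Q_E / area_cm2 * 1e12

# Reproducing the abstract's numbers for the 7.5 um pixel:
print(round(dark_current_density_pA_cm2(104.5, 7.5), 1))   # 29.8 -> "about 30"
print(round(dark_current_density_pA_cm2(2450.0, 7.5), 0))  # 698.0 -> "about 700"
```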
We present a CMOS light detector-actuator array in which every pixel combines a spatial light modulator and a photodiode. It will be used in medical imaging based on acousto-optical coherence tomography with a digital holographic detection scheme. Our architecture is able to measure the interference pattern between a scattered beam transmitted through a scattering medium and a reference beam. The array, with a 16 μm pixel pitch, has a frame rate of several kfps, which makes the sensor compatible with the correlation time of light in biological tissues. In-pixel analog processing of the interference pattern allows the polarization of a stacked light modulator to be controlled, and thus the phase of the reflected beam. This reflected beam can then be focused on a region of interest, e.g. for therapy. Stacking a photosensitive element with a spatial light modulator on the same chip brings significant robustness over the state of the art, such as perfect optical matching and reduced delay in controlling the light.
This paper presents a new structure and method of charge modulation for CMOS ToF range image sensors using pinned
photodiodes. The proposed pixel structure, the draining-only modulator (DOM), achieves high-speed charge
transfer by generating a lateral electric field from the pinned photodiode (PPD) to the pinned storage diode (PSD).
Electrons generated in the PPD are transferred to the PSD or drained off through the charge-draining gate (TXD). This
structure realizes trapping-less charge transfer from the PPD to the PSD. To accelerate the charge transfer, a high
lateral electric field is necessary; it is generated by varying the width of the PPD along the direction of charge transfer.
The PPD is formed by a p+ layer and an n layer on the p-substrate. The PSD is created by doping an additional n-type
layer so that its impurity concentration is higher than that of the n layer in the PPD, which creates the potential
difference between the PPD and the PSD. Another p layer underneath the n layer of the PSD prevents the injection of
unwanted carriers from the substrate into the PSD.
The range is calculated from signals in three consecutive sub-frames: one for the delay-sensitive charge, obtained by
setting the light pulse timing at the edge of the TXD pulse; another for the delay-independent charge, obtained by
setting the light pulse timing during the charge transfer; and the third for the ambient-light charge, obtained by setting
the light pulse timing during the charge draining.
To increase the photosensitivity while realizing high-speed charge transfer, the pixel consists of 16 sub-pixels and a
source-follower amplifier. The outputs of the 16 sub-pixels are connected to a charge-sensing node with a MOS
capacitor that increases the well capacity. The pixel array has 313 (rows) x 240 (columns) pixels with a pixel pitch of
22.4 μm. A ToF range imager prototype using the DOM pixels was designed and implemented in a 0.11 μm CMOS
image sensor process. The accumulated signal intensity in the PSD was measured as a function of the TXD gate
voltage; the ratio of the signal with the TXD off to the signal with the TXD on is 33:1. The response of the pixel output
as a function of the light pulse delay has also been measured.
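The three-sub-frame range calculation can be sketched as follows. The abstract does not give the exact formula, so the linear pulse-fraction model below (edge sub-frame collects a fraction of the pulse proportional to the light delay, full-transfer sub-frame normalizes reflectance, ambient sub-frame removes background) is an assumption:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range(s_edge, s_full, s_ambient, pulse_width_s):
    """Estimate range from the three sub-frame signals.

    Hypothetical formulation: the ambient-corrected edge charge divided by
    the ambient-corrected full charge gives the fraction of the pulse width
    corresponding to the round-trip delay.
    """
    ratio = (s_edge - s_ambient) / (s_full - s_ambient)
    delay = ratio * pulse_width_s
    return C * delay / 2.0

# A target at ~1.5 m round-trips in ~10 ns; with a 20 ns pulse the edge
# sub-frame would then hold half of the ambient-corrected charge.
r = tof_range(s_edge=600.0, s_full=1100.0, s_ambient=100.0, pulse_width_s=20e-9)
```

The signal values and pulse width are illustrative numbers, not figures from the paper.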
In this paper we present a 2D extension of a previously described 1D method for a time-to-impact sensor [5][6]. As in
the earlier work, the approach measures time rather than the apparent motion of points in the image plane to obtain
data similar to optical flow. It exploits the specific properties of the motion field in the time-to-impact application,
such as simple feature points that are tracked from frame to frame. Compared to the 1D case, the features are
proportionally fewer, which affects the quality of the estimation; we propose a way to solve this problem. The results
obtained are as promising as those from the 1D sensor.
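The underlying relationship between tracked features and time to impact can be illustrated with the standard expansion model (not taken from the paper itself): under constant approach speed, a feature's image-plane distance from the focus of expansion grows, and time to impact is that distance divided by its rate of change.

```python
def time_to_impact(x_prev, x_curr, dt):
    """Estimate time to impact from one tracked feature point.

    x_prev, x_curr: image-plane distances of the feature from the focus of
    expansion in two consecutive frames; dt: frame interval in seconds.
    Under constant approach speed, TTI ~= x / (dx/dt).
    """
    dx_dt = (x_curr - x_prev) / dt
    return x_curr / dx_dt

# A feature moving from 10.0 px to 10.5 px over one 10 ms frame interval:
tti = time_to_impact(10.0, 10.5, 0.01)
```

In practice many features would be tracked and their estimates combined, which is exactly where the 2D case's sparser feature set matters.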
Millimeter wave (MMW) imaging systems are required for applications in medicine, communications, homeland security, and space technology, because there is no known ionization hazard for biological tissue and atmospheric attenuation in this range of the spectrum is relatively low. The lack of inexpensive room-temperature imaging systems makes it difficult to provide a suitable MMW system for many of these applications. A 3D MMW imaging system based on chirp radar was previously studied using a scanning imaging system with a single detector. The system presented here employs the chirp radar method with a Focal Plane Array (FPA) of plasma-based Glow Discharge Detectors (GDDs). Each point on the object corresponds to a point in the image and includes distance information, which enables 3D MMW imaging. The radar approach requires that the millimeter wave detector (GDD) operate as a heterodyne detector. Since the source of radiation is a frequency-modulated continuous wave (FMCW), the heterodyne-detected signal yields, in addition to the reflectance of the image, the object's depth information according to the value of the difference frequency. In this work we experimentally demonstrate the feasibility of an imaging system based on radar principles and an FPA of GDD devices, and show that it is capable of imaging objects from distances of at least 10 meters.
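The standard FMCW relationship between the difference (beat) frequency and target distance can be sketched as follows; the chirp parameters below are illustrative, not the values used in this work:

```python
C = 299_792_458.0  # speed of light, m/s

def fmcw_range(beat_freq_hz, bandwidth_hz, sweep_time_s):
    """Range from the heterodyne difference (beat) frequency of an FMCW chirp.

    The chirp slope is B / T; the round-trip delay is tau = f_beat / slope;
    the one-way range is R = c * tau / 2.
    """
    slope = bandwidth_hz / sweep_time_s
    return C * beat_freq_hz / (2.0 * slope)

# Example: a 10 GHz chirp swept over 1 ms; a beat of ~667 kHz maps to ~10 m.
r = fmcw_range(beat_freq_hz=667e3, bandwidth_hz=10e9, sweep_time_s=1e-3)
```

Reading the beat frequency at every GDD pixel of the FPA is what turns the reflectance image into a 3D depth image.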
Compressive Sensing (CS) is receiving increasing attention as a way to lower storage and compression requirements
for on-board acquisition of remote-sensing images. In the case of multi- and hyperspectral images, however,
exploiting the spectral correlation poses severe computational problems. Yet, exploiting such a correlation would
provide significantly better performance in terms of reconstruction quality. In this paper, we build on a recently
proposed 2D CS scheme based on blind source separation to develop a computationally simple, yet accurate,
prediction-based scheme for acquisition and iterative reconstruction of hyperspectral images in a CS setting.
Preliminary experiments carried out on different hyperspectral images show that our approach yields a dramatic
reduction in computational time while ensuring reconstruction performance similar to that of much more
complicated 3D reconstruction schemes.
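The core idea of the prediction-based scheme, exploiting spectral correlation so that only a low-energy residual must be reconstructed per band, can be illustrated with a toy 1-D sketch. This is a minimal illustration of the principle, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def cs_measure(x, phi):
    """CS acquisition flattened to y = Phi @ x (one spectral band)."""
    return phi @ x

# Two strongly correlated hyperspectral bands (toy 1-D signals).
n, m = 64, 24                                    # signal length, measurements
band0 = rng.standard_normal(n)
band1 = band0 + 0.05 * rng.standard_normal(n)    # next band ~ previous band

phi = rng.standard_normal((m, n)) / np.sqrt(m)
y1 = cs_measure(band1, phi)

# Prediction step: subtract the measurements of the (already reconstructed)
# previous band. The residual carries far less energy than the band itself,
# so it is much easier to reconstruct from few measurements.
residual_meas = y1 - cs_measure(band0, phi)
direct_energy = float(np.sum(y1 ** 2))
residual_energy = float(np.sum(residual_meas ** 2))
```

The large energy gap between `direct_energy` and `residual_energy` is what makes per-band prediction so much cheaper than joint 3D reconstruction.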
In this paper a new technology based on HyperSpectral Imaging (HSI) sensors, and related detection architectures, is
investigated in order to develop suitable low-cost strategies for: i) preliminary detection and characterization of the
composition of the structure to be dismantled, and ii) definition and implementation of innovative smart detection
engines for quality control of sorting and/or demolition waste streams. The proposed sensing architecture is fast,
accurate, and affordable, and it can strongly contribute to lowering the economic threshold above which recycling is
cost-efficient. Investigations were carried out using an HSI device working in the range 1000-1700 nm: a NIR Spectral
Camera™ embedding an ImSpector™ N17E (SPECIM Ltd, Finland). Spectral data analysis was carried out with the
PLS_Toolbox (Version 6.5.1, Eigenvector Research, Inc.) running inside Matlab® (Version 7.11.1, The Mathworks,
Inc.), applying different chemometric techniques selected according to the materials under investigation. The
developed procedure allows assessing the characteristics, in terms of materials identification, of recycled aggregates
and related contaminants resulting from end-of-life concrete processing. A good classification of the different classes
of material was obtained, the model being able to distinguish aggregates from other materials (i.e. glass, plastic, tiles,
paper, cardboard, wood, brick, gypsum, etc.).
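The chemometric pipeline can be sketched in miniature: a scatter-correcting pre-treatment followed by a classifier over known material spectra. The sketch below substitutes a nearest-centroid rule for the PLS-DA-style models in PLS_Toolbox, and the 4-point "spectra" are invented stand-ins for real 1000-1700 nm reflectance curves:

```python
import numpy as np

def snv(spectrum):
    """Standard normal variate: a common chemometric pre-treatment that
    removes baseline offset and multiplicative scatter effects."""
    s = np.asarray(spectrum, dtype=float)
    return (s - s.mean()) / s.std()

def classify(spectrum, class_means):
    """Nearest-centroid stand-in for the chemometric classifier: assign the
    material class whose mean NIR spectrum is closest after SNV."""
    x = snv(spectrum)
    dists = {name: float(np.linalg.norm(x - snv(mu)))
             for name, mu in class_means.items()}
    return min(dists, key=dists.get)

# Toy mean spectra for two material classes (hypothetical values).
means = {
    "aggregate": [0.40, 0.42, 0.45, 0.47],
    "wood":      [0.30, 0.50, 0.35, 0.55],
}
label = classify([0.41, 0.43, 0.44, 0.48], means)
```

Running the classifier per pixel of the hyperspectral cube yields the material map used for sorting and quality control.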
This paper presents a new ToF measurement technique using an impulse photocurrent response. In the proposed technique, a laser with a short pulse width serves as the light source, so that the pulse can be regarded as an impulse input to the detector. As a result, the range calculation is determined only by the photocurrent response of the detector. A test chip fabricated in a 0.11 μm CIS technology employs a draining-only modulation pixel, which enables high-speed charge modulation. The measurable range is 50 mm within a nonlinearity error of 5%, and an average range resolution of 0.21 mm is achieved.
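Since the range here is determined only by the detector's photocurrent response, one plausible readout path (an assumption, not the paper's documented procedure) is to calibrate pixel output against light-pulse delay and invert that curve by interpolation:

```python
def range_from_response(output, calibration):
    """Invert an output-vs-delay calibration curve by linear interpolation,
    then convert the recovered delay to range.

    calibration: list of (delay_s, output) pairs sampled from the pixel's
    impulse photocurrent response (hypothetical model).
    """
    C = 299_792_458.0  # speed of light, m/s
    pts = sorted(calibration)
    for (d0, o0), (d1, o1) in zip(pts, pts[1:]):
        lo, hi = sorted((o0, o1))
        if lo <= output <= hi:
            t = (output - o0) / (o1 - o0)
            delay = d0 + t * (d1 - d0)
            return C * delay / 2.0
    raise ValueError("output outside calibrated response")

# Hypothetical two-point calibration: output falls linearly over a 1 ns swing.
cal = [(0.0, 1.0), (1e-9, 0.0)]
r = range_from_response(0.5, cal)   # mid-swing output -> 0.5 ns delay
```

A real calibration would use many samples of the measured response rather than two synthetic points.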
Although High Dynamic Range (HDR) imaging has been the topic of important research in recent years, acquisition of
HDR scenes with the available components has not yet reached an excellent level. Many solutions have been proposed,
ranging from bracketing to beamsplitters, but none of them really copes with moving scenes exhibiting large
differences in light level.
In this paper, we present an optical architecture that exploits stereoscopic cameras to ensure the simultaneous capture
of four different exposures of the same image on four sensors, with efficient use of the available light.
We also present a short description of the implemented fusion algorithm.
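The fusion step can be illustrated with a minimal per-pixel merge of simultaneous exposures; this inverse-exposure-weighted sketch is a generic approach, not the algorithm implemented in the paper:

```python
def fuse_exposures(pixels, exposure_times):
    """Merge one scene pixel captured at several exposures into a single
    radiance estimate. Saturated or near-black samples are discarded, and
    each remaining sample is converted to radiance via its exposure time.
    """
    num = den = 0.0
    for p, t in zip(pixels, exposure_times):
        if 0.05 < p < 0.95:       # keep only well-exposed samples
            w = 1.0               # uniform weight for this sketch
            num += w * (p / t)    # radiance estimate from this exposure
            den += w
    return num / den

# Four simultaneous exposures of one pixel (normalized values, seconds);
# the shortest is nearly black and the longest saturates.
radiance = fuse_exposures([0.02, 0.20, 0.40, 0.98], [0.001, 0.01, 0.02, 0.08])
```

Because the four exposures are captured simultaneously on four sensors, this merge avoids the ghosting that bracketing produces on moving scenes.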