This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 6816, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
A wide dynamic range CMOS image sensor with a user-adjustable logarithmic photo-response is presented. A pMOS switch and a time-dependent reference voltage are integrated into a three-transistor (3T) pixel structure to implement a logarithmic response. Several pixels have been manufactured using a 0.25 μm standard CMOS technology. Compared to the conventional logarithmic-response pixel based on a diode-connected transistor, the proposed pixel combines a wide dynamic range of 120 dB with much higher responsivity (250 mV/decade) and better dark response.
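As a rough sanity check (not taken from the paper), the reported figures relate as follows under the usual 20·log10 definition of dynamic range: 120 dB corresponds to 6 decades of photocurrent, so a 250 mV/decade logarithmic responsivity implies roughly a 1.5 V output swing. A minimal sketch of that arithmetic:

import math

dynamic_range_db = 120.0                 # reported dynamic range [dB]
responsivity_mv_per_decade = 250.0       # reported logarithmic responsivity

# 20 dB of intensity dynamic range corresponds to one decade of photocurrent
decades = dynamic_range_db / 20.0                        # -> 6 decades
output_swing_mv = decades * responsivity_mv_per_decade   # -> ~1500 mV
print(f"{decades:.0f} decades -> ~{output_swing_mv / 1000:.2f} V output swing")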
The dynamic range of a scene is usually higher than the dynamic range of the sensor used to acquire the image. Design optimizations must be found that increase the intra-scene dynamic range a sensor can achieve. Good dynamic range is necessary to image a scene with the required detail and contrast in a single image. The first topic addressed by the paper is the definition of intra-scene dynamic range. The paper will detail past and present techniques to increase the dynamic range of snapshot CMOS image sensors and show the necessary future developments in high dynamic range imaging. The technologies shown are used by various image sensor manufacturers; only a portion of them are used in Melexis devices.
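For orientation, intra-scene dynamic range is commonly defined as the ratio of the largest non-saturating signal to the noise floor, expressed in decibels. A minimal illustration of this textbook definition (the numbers below are placeholders, not values from the paper):

import math

def dynamic_range_db(full_well_e, noise_floor_e_rms):
    """Intra-scene dynamic range: 20*log10(max usable signal / noise floor)."""
    return 20.0 * math.log10(full_well_e / noise_floor_e_rms)

# Placeholder example: 20 ke- full well, 10 e- rms read noise -> ~66 dB
print(f"{dynamic_range_db(20_000, 10):.1f} dB")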
A temperature-resistant 1/3-inch SVGA (800×600 pixels), 5.6 μm pixel pitch, wide-dynamic-range (WDR) CMOS image sensor has been developed using a lateral overflow integration capacitor (LOFIC) in each pixel. The sensor chips are fabricated in a 0.18 μm 2P3M process with fully optimized front-end-of-line (FEOL) and back-end-of-line (BEOL) processing for lower dark current. By implementing a low-electric-field potential design for the photodiodes, reducing process damage, recovering crystal defects and terminating interface states in the FEOL and BEOL, the dark current is improved to 12 e-/pixel/s at 60 °C, a 50% reduction from the previous very-low-dark-current (VLDC) FEOL, and its contribution to the temporal noise is reduced accordingly. Furthermore, design optimizations of the readout circuits, especially a signal- and noise-hold circuit and a programmable-gain amplifier (PGA), are also implemented. The measured temporal noise is 2.4 e- rms at 60 fps (36 MHz operation). The dynamic range (DR) is extended to 100 dB with a 237 ke- full well capacity. To secure the temperature resistance, the sensor chip also receives both an inorganic cap over the microlenses and a hermetically sealed metal package assembly. Image samples at low and high temperatures show significant improvement in image quality.
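The reported 100 dB dynamic range is consistent with the quoted 237 ke- full well capacity and 2.4 e- rms temporal noise; a quick check of that arithmetic (variable names hypothetical):

import math

full_well_e = 237_000      # LOFIC-extended full well capacity [e-]
temporal_noise_e = 2.4     # measured temporal noise [e- rms]

dr_db = 20.0 * math.log10(full_well_e / temporal_noise_e)
print(f"Dynamic range ~ {dr_db:.1f} dB")   # ~99.9 dB, i.e. about 100 dB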
Operation methods for high frame rate, linear response, wide dynamic range (DR) and high SNR in a CMOS image sensor are discussed. The high frame rate operation is realized by the optimum design of the floating diffusion capacitor, the lateral overflow integration capacitor, the column integration capacitor and the integration periods of multiple voltage and current readout operations. The color CMOS image sensor, a 1/3-inch device with 800 (H) × 600 (V) pixels at a 5.6-μm pixel pitch, contains in each pixel a buried pinned photodiode, a transfer switch, a reset switch, a lateral overflow switch, a lateral overflow integration capacitor, a photocurrent readout switch, a source-follower transistor and a pixel select switch, and has been fabricated in a 0.18-μm 2P3M CMOS technology. The image sensor operates at a total frame rate of 13 fps with three voltage readout operations and one current readout operation, and achieves a fully linear photoelectric conversion response, over 20 dB SNR for an image of an 18% gray card at all integration operation switching points, and over 200 dB DR.
This paper presents a dynamic range expansion technique for CMOS image sensors with dual charge storage in a pixel and multiple exposures. Each pixel contains two photodiodes, PD1 and PD2, whose sensitivities can be set independently through their accumulation times. The difference in charge accumulation time between the two photodiodes can be manipulated to expand the dynamic range of the sensor. This allows flexible control of the dynamic range, since the accumulation time of PD2 is adjustable. The multiple-exposure technique used in the sensor reduces motion blur in the synthesized wide dynamic range image when capturing fast-moving objects. It also reduces the signal-to-noise ratio dip at the switching point from the PD1 signal to the PD2 signal in the synthesized wide dynamic range image. A wide dynamic range camera with a 320×240-pixel image sensor has been tested. It is found that sampling the short-accumulation-time signal four times is sufficient to reduce motion blur in the synthesized wide dynamic range image, and that the signal-to-noise ratio dip at the switching point from the PD1 signal to the PD2 signal is improved by 6 dB using the four short exposures.
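The reported 6 dB improvement at the switching point matches what simple averaging of uncorrelated noise would predict: averaging N = 4 short-exposure samples reduces the noise by sqrt(N), i.e. 20·log10(2) ≈ 6 dB. A minimal sketch of that reasoning (assuming independent, identically distributed noise, which the abstract does not state explicitly):

import math

n_exposures = 4
# Averaging N uncorrelated samples improves SNR by sqrt(N)
snr_gain_db = 20.0 * math.log10(math.sqrt(n_exposures))
print(f"Expected SNR improvement from {n_exposures} exposures: {snr_gain_db:.1f} dB")  # ~6.0 dB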
We present here a study of both CMOS sensors and elementary structures (photodiodes and in-pixel MOSFETs) manufactured in a deep submicron process dedicated to imaging. We designed a test chip made of one 128×128 3T-pixel array with a 10 μm pitch and more than 120 isolated test structures, including photodiodes and MOSFETs with various implants and different sizes. All these devices were exposed to ionizing radiation up to 100 krad and their responses were correlated to identify the CMOS sensor weaknesses. Characterizations in darkness and under illumination demonstrated that dark current increase is the major sensor degradation. Shallow trench isolation was identified as responsible for this degradation, as it increases the number of generation centers in the photodiode depletion regions. Consequences for hardness assurance and hardening-by-design are discussed.
This paper presents a new FDTD-based optical simulation model dedicated to describing the optical performance of CMOS image sensors, taking into account diffraction effects.
Following market trends and industrialization constraints, CMOS image sensors must be easily embedded into ever smaller packages, which are now equipped with auto-focus and, in the near term, zoom systems. Because of this miniaturization, the ray-tracing models used to evaluate pixel optical performance are no longer accurate enough to describe light propagation inside the sensor, owing to diffraction effects. We therefore adopt a more fundamental description to take these diffraction effects into account: we chose to model the propagation of light from Maxwell's equations, and to solve this propagation with a software engine based on the FDTD (Finite Difference Time Domain) method.
We present in this article the complete methodology of this modeling: on one hand, incoherent plane waves are propagated to approximate a diffuse-like source representative of product use; on the other hand, we use periodic boundary conditions to limit the size of the simulated model and both memory and computation time. After presenting the correlation of the model with measurements, we illustrate its use in the optimization of a 1.75 μm pixel.
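As background (not the authors' solver), the FDTD method discretizes Maxwell's curl equations on a staggered grid and leapfrogs the electric and magnetic fields in time. A minimal 1D vacuum example, with all names and parameters hypothetical:

import numpy as np

# Minimal 1D FDTD (Yee scheme) in vacuum, normalized units (c = 1, dz = 1, dt = 0.5)
nz, nsteps = 200, 400
ez = np.zeros(nz)          # electric field
hy = np.zeros(nz - 1)      # magnetic field, staggered half a cell
courant = 0.5              # dt*c/dz, must be <= 1 for stability

for step in range(nsteps):
    # Leapfrog update: H from the curl of E, then E from the curl of H
    hy += courant * np.diff(ez)
    ez[1:-1] += courant * np.diff(hy)
    # Soft Gaussian source injected near the left boundary
    ez[10] += np.exp(-((step - 40) / 12.0) ** 2)

print("peak |Ez| after propagation:", float(np.max(np.abs(ez))))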
The reliability of solid-state image sensors is limited by the development of defects, particularly hot pixels, which we have previously shown develop continuously over the sensor lifetime. Our statistical analysis, based on the distribution and development dates of defects, concluded that defects are not caused by a single traumatic incident or material failure, but rather by an external process such as radiation. This paper describes an automated process for extracting defect temporal growth data, thereby enabling a very wide sample of cameras to be examined and studied. The algorithm uses Bayesian statistics to determine the presence or absence of defects by searching through sets of color photographs. Monte Carlo simulations on a set of images taken at 0.06 to 0.5 s exposures demonstrate that our tracing algorithm is able to pinpoint the defect development date for all identified hot pixels to within ±2 images. Although a previous study has shown that in-field defects are isolated from each other, image processing functions applied by cameras, such as the demosaicing algorithm, were found to cause a single defective pixel to appear as a cluster in a color image, increasing the challenge of pinpointing the exact location of hot defects.
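The core of such a tracing step can be illustrated as a change-point estimate over a chronologically ordered image set: for a candidate hot pixel, find the image index at which its dark signal most likely switched from a "healthy" to a "defective" distribution. The sketch below is only a simplified illustration of that idea; the Gaussian model and all names are assumptions, not the paper's algorithm:

import numpy as np

def estimate_onset(values, sigma=2.0):
    """Maximum-likelihood change point for one pixel's dark value over time.

    values: readings of one pixel location, one per image, in chronological order.
    Model: Gaussian with one mean before onset and another mean after onset.
    """
    values = np.asarray(values, dtype=float)
    best_k, best_ll = None, -np.inf
    for k in range(1, len(values)):            # candidate onset index
        before, after = values[:k], values[k:]
        ll = (-np.sum((before - before.mean()) ** 2)
              - np.sum((after - after.mean()) ** 2)) / (2 * sigma ** 2)
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

# Synthetic example: defect appears at image 12
readings = np.r_[np.random.normal(5, 2, 12), np.random.normal(80, 2, 8)]
print("estimated onset image:", estimate_onset(readings))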
Thermal excitation of electrons is a major source of noise in Charge-Coupled Device (CCD) imagers. These electrons are generated even in the absence of light, hence the name dark current. Dark current is particularly important for long exposure times and elevated temperatures. The standard procedure to correct for dark current is to take several pictures under the same conditions as the real image, except with the shutter closed. The resulting dark frame is later subtracted from the exposed image. We address the question of whether the dark current produced in an image taken with a closed shutter is identical to the dark current produced in an exposure in the presence of light. In our investigation, we illuminated two different CCD chips with different intensities of light and measured the dark current generation. A surprising conclusion of this study is that some pixels produce a different amount of dark current under illumination. Finally, we discuss the implications that this has for dark frame image correction.
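The standard correction the abstract refers to can be summarized in a few lines: average several closed-shutter frames taken under the same conditions, then subtract the resulting master dark from the exposed image. A minimal sketch (array names hypothetical); the paper's finding is precisely that this subtraction can be biased for pixels whose dark current differs under illumination:

import numpy as np

def subtract_dark(light_frame, dark_frames):
    """Classic dark-frame correction: subtract the mean of several dark exposures."""
    master_dark = np.mean(np.stack(dark_frames), axis=0)
    return light_frame.astype(float) - master_dark

# Usage: corrected = subtract_dark(exposed_image, [dark1, dark2, dark3])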
We present data for the dark current of a commercially available CMOS image sensor for different gain settings and bias offsets over the temperature range of 295 to 340 K and exposure times of 0 to 500 ms. The analysis of hot pixels shows two different sources of dark current. One source results in hot pixels with a high but constant count for exposure times smaller than the frame time. Other hot pixels exhibit a linear increase with exposure time. We discuss how these hot pixels can be used to calculate the dark current for all pixels. Finally, we show that for low bias settings, where the dark frame shows universally zero counts, one still needs to correct for dark current. The correction of thermal noise can therefore result in dark frames with negative pixel values. We show how one can calculate dark frames with negative pixel counts.
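A per-pixel linear fit of dark counts against exposure time is one straightforward way to obtain the dark-current rate described here, and it naturally produces a signed correction that can drive clipped (zero-count) dark frames negative after subtraction. A sketch under those assumptions (names hypothetical, not the authors' code):

import numpy as np

def dark_current_map(dark_stack, exposure_s):
    """Fit counts = offset + rate * t for every pixel of a stack of dark frames.

    dark_stack: shape (n_exposures, rows, cols); exposure_s: shape (n_exposures,).
    Returns the per-pixel dark-current rate in counts per second.
    """
    n, rows, cols = dark_stack.shape
    flat = dark_stack.reshape(n, -1).astype(float)
    design = np.column_stack([np.ones(n), exposure_s])       # columns [1, t]
    coeffs, *_ = np.linalg.lstsq(design, flat, rcond=None)   # shape (2, rows*cols)
    return coeffs[1].reshape(rows, cols)

# corrected = frame - (offset_map + rate_map * t) may legitimately go negative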
A thermal noise calculation model for high-gain switched-capacitor column noise cancellers in CMOS image sensors is presented. In a high-gain noise canceller with a single noise-cancelling stage, the reset noise of the readout circuits dominates the noise at high gain. With a double-stage architecture consisting of a switched-capacitor gain stage and a sample-and-hold stage with two sampling capacitors, the reset noise of the gain stage can be cancelled. The resulting input-referred thermal noise power of the high-gain double-stage switched-capacitor noise canceller is shown to be proportional to (g_a/g_s)/(G·C_L), where g_a, G and C_L are the transconductance, gain and output capacitance of the amplifier, respectively, and g_s is the output conductance of the in-pixel source follower. An important contribution of the proposed noise calculation formula is that it includes the influence of the ratio of the amplifier transconductance to that of the source follower. For low-noise design, it is important to minimize the transconductance of the amplifier used in the noise canceller while still meeting the required response time of the switched-capacitor amplifier, which is inversely proportional to its cutoff angular frequency.
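The design guideline in the last sentence follows directly from the quoted proportionality: for fixed g_s, G and C_L, a smaller amplifier transconductance g_a gives a smaller input-referred noise power, subject to the settling-time constraint. A trivial sketch of that trend, using only the proportionality stated in the abstract (all numerical values are hypothetical):

# v_n^2  ∝  (g_a / g_s) / (G * C_L)   -- proportionality quoted from the abstract
def relative_noise_power(g_a, g_s, gain, c_load):
    return (g_a / g_s) / (gain * c_load)

g_s, gain, c_load = 50e-6, 8.0, 1e-12          # source-follower conductance [S], gain, load [F]
for g_a in (400e-6, 100e-6):                   # reducing the amplifier transconductance
    p = relative_noise_power(g_a, g_s, gain, c_load)
    print(f"g_a = {g_a * 1e6:.0f} uS -> relative noise power {p:.3e}")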
This paper presents work undertaken in the development of an automated air-based vision system for assessing the performance of an approach lighting system (ALS) installation in accordance with International Civil Aviation Organization (ICAO) standards. The measuring device consists of an image sensor with an associated lens system fitted to the interior of an aircraft. The vision system is capable of capturing sequences of airport lighting images during a normal approach to the airport. These images are then processed to determine the uniformity of the ALS.
To assess the uniformity of the ALS, the luminaires must first be uniquely identified and tracked through an image sequence. A model-based matching technique is used, in which a camera projection model matches a set of template data to the extracted image data. From the matching results, the position and pose of the camera are estimated.
Each luminaire emits an intensity that is dependent on its angular displacement from the camera. As such, it is possible to predict the intensity that each luminaire within the ALS emits during an approach. Luminaires emitting the same intensity are banded together for the uniformity analysis. The uniformity assessment assumes that luminaires in close proximity exhibit similar luminous intensity characteristics. During a typical approach, grouping information is obtained for the various sectors of luminaires. This grouping information is used to compare luminaires against one another in terms of their extracted grey-level information. The developed software is validated using data acquired during an actual approach to a UK airport.
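The model-based matching step described above relies on projecting known 3D luminaire positions into the image through a camera model and comparing them with the extracted image features. A minimal pinhole-projection sketch of that idea (the intrinsics, pose and all names below are hypothetical):

import numpy as np

def project_points(points_w, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera model."""
    cam = R @ points_w.T + t[:, None]          # world -> camera frame
    uvw = K @ cam                              # camera frame -> homogeneous pixels
    return (uvw[:2] / uvw[2]).T                # normalize -> Nx2 pixel coordinates

# Matching then amounts to minimizing the reprojection error between these
# predicted positions and the luminaire centroids extracted from the image,
# with the camera pose (R, t) as the unknowns.
K = np.array([[1000.0, 0, 320], [0, 1000.0, 240], [0, 0, 1]])
R, t = np.eye(3), np.array([0.0, 0.0, 50.0])
template = np.array([[0.0, 0.0, 0.0], [30.0, 0.0, 0.0], [60.0, 0.0, 0.0]])  # luminaire positions [m]
print(project_points(template, K, R, t))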
This paper describes a proof-of-concept implementation that uses a high dynamic range CMOS video camera to integrate daylight harvesting and occupancy sensing functionalities. It has been demonstrated that the proposed concept not only circumvents several drawbacks of conventional lighting control sensors, but also offers functionalities that are not currently achievable by these sensors. The prototype involves three algorithms: daylight estimation, occupancy detection and lighting control. The calibrated system directly estimates luminance from digital images of the occupied room for use in the daylight estimation algorithm. A novel occupancy detection algorithm involving color processing in YCC space has been developed. Our lighting control algorithm is based on the least-squares technique. Results of a day-long pilot test show that the system i) can meet different target light-level requirements for different task areas within the field of view of the sensor, ii) is unaffected by direct sunlight or a direct view of a light source, iii) detects very small movements within the room, and iv) allows real-time energy monitoring and performance analysis. A discussion of the drawbacks of the current prototype is included, along with the technological challenges that will be addressed in the next phase of our research.
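The least-squares lighting control mentioned above can be illustrated as solving for dimming levels that best meet the target light level of each task area, given each luminaire's contribution and the camera-estimated daylight. A sketch under those assumptions (the matrix, the numbers and all names are hypothetical, not the prototype's code):

import numpy as np

# contribution[i, j]: lux delivered to task area i by luminaire j at full output
contribution = np.array([[400.0,  80.0],
                         [100.0, 450.0]])
target_lux   = np.array([500.0, 500.0])       # desired light level per task area
daylight_lux = np.array([220.0,  60.0])       # daylight estimated from the HDR camera

# Solve contribution @ dimming ~= target - daylight in the least-squares sense,
# then clamp to the physically valid dimming range [0, 1].
dimming, *_ = np.linalg.lstsq(contribution, target_lux - daylight_lux, rcond=None)
dimming = np.clip(dimming, 0.0, 1.0)
print("dimming levels:", dimming)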
The idea of image formation using a new kind of metamaterial based on hierarchically organized mirror channels with semitransparent walls, termed metamirror structures (MMS), is analyzed. When the MMSs have relatively small cell sizes, from one to tens of microns, they are mechanically solid materials with anomalous optical properties over a wide wavelength region, from the visible and even near infrared up to the hard UV and soft X-ray range. Ray transmission and reflection are considered for various MMS geometries and types. It is shown that single-reflection 2D MMSs based on rectangular elementary cells, when properly curved, possess lens properties, and a generalized MMS lens law is derived. Properties of a mirror-lens prototype with a finite meta-focus position, made from single-layer 2D and 3D MMSs with reflecting walls, are evaluated theoretically. Micromachined mirror systems and modified macroporous structures are discussed as mirror-lens devices.
Long neglected as unimportant, the dark current that arises from diffusion out of the bulk is assuming a more important role now that CCD and CMOS imagers are finding their way into consumer electronics that must be capable of operating at elevated temperatures. Historically this component has been estimated from the diffusion-related current of a diode with an infinite substrate. This paper explores the effect of a substrate of finite extent beneath the collecting volume of the pixel, for both a front-illuminated device and a thinned back-illuminated device, and develops corrected expressions for the diffusion-related dark current. The models show that the diffusion dark current can be much less than that predicted by the standard model.
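For orientation only (these are textbook short-base diode results, not the corrected expressions derived in the paper): when the quasi-neutral region has finite thickness W relative to the diffusion length L, the infinite-substrate diffusion current picks up a geometry factor, roughly tanh(W/L) for a reflecting (well-passivated) back surface and coth(W/L) for an ohmic back contact, so a thin, passivated substrate suppresses the diffusion component, consistent with the paper's conclusion. A trivial sketch of that factor:

import math

# Geometry factor relative to the infinite-substrate ("standard") diffusion dark current:
#   reflecting back surface:  tanh(W / L)   (< 1, less dark current)
#   ohmic back contact:       coth(W / L)   (> 1, more dark current)
for w_over_l in (0.1, 0.5, 2.0):
    print(f"W/L = {w_over_l}: tanh = {math.tanh(w_over_l):.3f}, "
          f"coth = {1 / math.tanh(w_over_l):.3f}")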
A hyperspectral imaging system is in development. The system uses spatially modulated Hadamard patterns to encode image information, with implicit stray- and ambient-light correction, and a reference beam to correct for source light changes over the spectral image capture period. In this study we test the efficacy of the corrections and the multiplex advantage for our system. The signal-to-noise ratio (SNR) was used to demonstrate the advantage of spatial multiplexing in the system and to observe the effect of the reference beam correction. The statistical implications of the data acquisition technique, including illumination source drift and the correction of such drift, were derived. The reference beam correction was applied per spectrum before Hadamard decoding, and alternately after decoding to all spectra in the image. The reference beam method made no fundamental change to the SNR; we therefore conclude that light source drift is minimal and that other, possibly rectifiable, error sources dominate. The multiplex advantage was demonstrated, ranging from a minimum SNR boost of 1.5 (600-975 nm) to a maximum of 11 (below 500 nm). Intermediate SNR boosts were observed in the 975-1700 nm range. The large variation in SNR boost is also attributed to other error sources.
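The multiplex (Fellgett-type) advantage tested here can be illustrated with a small simulation: encode a signal vector with an S-matrix of 0/1 weights, add detector-limited noise to each encoded measurement, decode, and compare the residual noise with that of measuring each element one at a time. A minimal sketch, not the authors' system (sizes and the noise level are arbitrary):

import numpy as np
from scipy.linalg import hadamard

n = 63                                   # S-matrix order (Hadamard order n + 1 = 64)
H = hadamard(n + 1)
S = (1 - H[1:, 1:]) // 2                 # 0/1 mask patterns used for spatial encoding

rng = np.random.default_rng(0)
signal = rng.uniform(1.0, 2.0, n)
sigma = 0.5                              # detector-limited (signal-independent) noise

# Multiplexed: measure S @ signal, decode with the pseudoinverse
encoded = S @ signal + rng.normal(0, sigma, n)
decoded = np.linalg.pinv(S) @ encoded

# Direct raster scan: measure each element individually with the same noise
direct = signal + rng.normal(0, sigma, n)

print("rms error, multiplexed:", np.sqrt(np.mean((decoded - signal) ** 2)))
print("rms error, direct     :", np.sqrt(np.mean((direct - signal) ** 2)))
# For detector-limited noise the multiplexed error is smaller by roughly sqrt(n)/2.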
In this paper, we propose a new measurement scheme, called phase-stamp imaging, using the correlation image sensor (CIS). The correlation image sensor, developed by us, is a device that outputs the temporal correlation between the incident light intensity and three reference signals common to all pixels. By using the correlation image sensor with single-frequency reference signals, the time at which a light spot passes through each pixel is embedded in the form of the phase of the reference signals. This provides single-frame, high-resolution measurement of the 2D velocity field, as well as good time resolution for transient phenomena. We apply this scheme to fluid flow measurement, and we present experimental results that confirm its performance.
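The phase-stamp principle can be sketched numerically: correlating the pixel intensity against quadrature references over one frame yields a complex value whose argument encodes the instant the light spot passed that pixel. A simplified single-pixel illustration (the waveform, parameters and names are assumptions, not the sensor's actual signal chain):

import numpy as np

T = 1.0                          # frame period [s]
f_ref = 1.0 / T                  # single-frequency reference within the frame
t = np.linspace(0.0, T, 2000, endpoint=False)
dt = t[1] - t[0]

t_pass = 0.37                    # instant the light spot crosses this pixel
intensity = np.exp(-((t - t_pass) / 0.01) ** 2)   # short pulse of light at t_pass

# In-frame correlations against quadrature references (what the CIS accumulates)
c_cos = np.sum(intensity * np.cos(2 * np.pi * f_ref * t)) * dt
c_sin = np.sum(intensity * np.sin(2 * np.pi * f_ref * t)) * dt

phase = np.arctan2(c_sin, c_cos)                  # the "phase stamp" of this pixel
t_est = (phase % (2 * np.pi)) / (2 * np.pi * f_ref)
print(f"true {t_pass:.3f} s, estimated {t_est:.3f} s")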
Optical flow computation has been widely studied, motivated by a broad range of applications. We propose a solution based on the optical flow identity (OFI) using the correlation image sensor (CIS). If the frequency of the CIS reference signals is ill-chosen, the SNR of the amplitude of the complex correlation image is small, and it is difficult to solve the OFI stably. When we choose the frequency ω of the sinusoidal reference signals such that ωT = 2nπ, where n is an integer and T is the frame time of the CIS, the optical flow identity holds. We define G(ω), for ω satisfying ωT = 2nπ, as the sum over all pixels of the amplitude of the complex sinusoidally modulated image, and we maximize it to solve the frequency-tuning problem.
Preservation of spectral information and enhancement of spatial resolution are regarded as very important in satellite image fusion. In previous research, many algorithms could not solve these problems simultaneously, or needed experimentally tuned parameters to enhance fusion performance. This paper proposes a new fusion method based on fast intensity-hue-saturation (FIHS) to merge a high-resolution panchromatic image with a low-resolution multispectral image. It uses multiple regression to generate the synthetic intensity image and a statistical ratio-based image enhancement, which together reduce spectral distortion while conserving the spatial information of the panchromatic image. IKONOS datasets were employed in the evaluation. The results showed that the proposed method was better than widely used image fusion methods, including the FIHS-based method and the Pan Sharpening module in PCI Geomatica. We compared widely used algorithms with the adaptive FIHS image fusion using various fusion quality indexes such as ERGAS, RASE, correlation, and the Q4 index. The images obtained from the proposed algorithm present higher spectral and spatial quality than the results of the other fusion methods. Therefore, the proposed algorithm is very efficient for high-resolution satellite image fusion with an automatic process.
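The FIHS family of methods that this work builds on can be summarized in a few lines: form an intensity image from the multispectral bands, then add the difference between the panchromatic image and that intensity back to every band. A generic sketch of that baseline only (equal band weights here; the paper instead derives the weights by multiple regression and adds a ratio-based enhancement):

import numpy as np

def fihs_fusion(ms, pan, weights=None):
    """Generic fast IHS pan-sharpening.

    ms:  multispectral bands upsampled to the pan grid, shape (bands, rows, cols)
    pan: panchromatic image, shape (rows, cols)
    """
    if weights is None:
        weights = np.full(ms.shape[0], 1.0 / ms.shape[0])   # equal weights as a baseline
    intensity = np.tensordot(weights, ms, axes=1)           # weighted sum over bands
    detail = pan - intensity                                 # spatial detail to inject
    return ms + detail[None, :, :]                           # same detail added to every band

# fused = fihs_fusion(upsampled_ms, pan)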
On-die optics have been proposed for stand-alone image sensors. Previous work by the authors has proposed fabricating diffractive optical elements using the upper metal layers in a commercial CMOS process. This avoids the cost of the process steps associated with microlens fabrication, but results in a point spread function that varies with the wavelength, angle, and polarization of the incident light. Wavelength and angle sensitivities have been addressed in previous works. This paper models the effects of polarization on the point spread function of the imaging system, and proposes optical and algorithmic methods for compensating for these effects. The imaging behavior of the resulting systems is evaluated. Simulations indicate that the uncorrected system can locate point sources to within ±0.1 radian, and polarized point sources to within ±0.05 radian along the axis of polarization. A system is described that uses a polarization-insensitive optical element and a deconvolution filter to achieve a corrected resolution of ±0.05 radian, with the ability to image non-point sources under white light illumination.
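Deconvolution with a known point spread function, as used for the corrected system, can be sketched with a standard Wiener filter in the Fourier domain. This is a generic illustration, not the authors' filter; the regularization constant and names are arbitrary:

import numpy as np

def wiener_deconvolve(image, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution: conj(H) / (|H|^2 + NSR).

    psf is assumed to be the same shape as the image and centered in the array.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(image)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# restored = wiener_deconvolve(blurred_frame, measured_psf)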
A CMOS image sensor with high sensitivity and high full well capacity using an active pixel readout feedback operation, with different pixel select switch positions, operation timings and initial bias conditions, is reported. 1/3-inch, 5.6-μm pixel pitch, 800 (H) × 600 (V) color CMOS image sensors with the switch X placed above or below the pixel source follower (SF) have been fabricated in a 0.18-μm 2-poly 3-metal CMOS technology. A comparison of the active pixel readout feedback operation between the two CMOS image sensors, which differ only in the position of the switch X, has been performed. As a result, placing the switch X above the pixel SF is favorable for the active pixel readout feedback operation, improving the readout gain and the S/N ratio. This CMOS image sensor achieves high readout gain, high conversion gain, low input-referred noise and high full well capacity through the active pixel readout feedback operation.
This paper first presents an asynchronous analog-to-digital conversion technique that is well suited to an in-pixel implementation in an X-ray or infrared image sensor. The principle, which consists of counting charge packets coming from the detector, is also called the "charge-balancing" technique. Simulation and experimental results on a 0.13 μm process test chip are given, and a 16-bit dynamic range is reached. Secondly, a new enhancement method is described. This method controls the LSB of the A/D conversion as the input current (from the detector) varies, so that a floating-point coding is carried out. The consequences are a wider dynamic range (at least 19 bits) as well as a reduction of the technological fluctuations between pixels. For this work in progress, the implementation in a 150×150 μm² pixel is briefly discussed.
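The floating-point coding idea, in which the effective LSB grows with the input so that a wide range fits in few stored bits, can be illustrated with a simple mantissa/exponent split of a counter value. The bit widths and helpers below are hypothetical, not the circuit's actual coding:

def float_encode(count, mantissa_bits=10, exponent_bits=5):
    """Encode a non-negative count as (exponent, mantissa): count ~= mantissa << exponent."""
    exponent = 0
    while count >> exponent >= (1 << mantissa_bits) and exponent < (1 << exponent_bits) - 1:
        exponent += 1                       # double the effective LSB
    return exponent, count >> exponent

def float_decode(exponent, mantissa):
    return mantissa << exponent

for count in (37, 4_000, 300_000):
    e, m = float_encode(count)
    print(f"count {count:>7} -> exp {e}, mantissa {m}, decoded {float_decode(e, m)}")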
Interactive Paper and Symposium Demonstration Session
A preclinical laboratory animal imaging modality similar to microangiography, with spatial resolution as high as 6 μm, has been developed at SPring-8 using an X-ray direct-conversion detector incorporating an X-ray SATICON pickup tube. The imaging modality is intended to provide a basic understanding of disease mechanisms. In synchrotron radiation radiography, a long source-to-object distance and a small source spot can produce high-resolution images with spatial resolution in the micrometer range. Synchrotron radiation microangiography offers the main advantage of depicting the anatomy of small blood vessels with diameters of tens of micrometers. We performed cerebral microangiography in rats and mice and, in particular, undertook radiographic evaluation of changes in small arteries located deep in the brain; such vessels had not been observed and studied previously. Moreover, an X-ray direct-conversion solid-state imager with spatial resolution in the micrometer range is being designed for large field-of-view imaging. This study is also intended to clarify requirements related to specifications of prospective solid-state image sensors.