Peter Schelkens,1 Touradj Ebrahimi,2 Gabriel Cristóbal,3 Frédéric Truchetet,4 Pasi Saarikko5
1 Vrije Univ. Brussel (Belgium); 2 Ecole Polytechnique Fédérale de Lausanne (Switzerland); 3 Consejo Superior de Investigaciones Científicas (Spain); 4 Univ. de Bourgogne (France); 5 Microsoft Oy (Finland)
This PDF file contains the front matter associated with SPIE Proceedings Volume 9138, including the Title Page, Copyright Information, Table of Contents, and the Conference Committee listing.
Special Session on High-Dynamic Range Imaging and Privacy
The ability of High Dynamic Range imaging (HDRi) to capture details in high-contrast environments, making both dark and bright regions clearly visible, has strong implications for privacy. However, the extent to which HDRi affects privacy when used in place of typical Standard Dynamic Range imaging (SDRi) is not yet clear. In this paper, we investigate the effect of HDRi on privacy via a crowdsourcing evaluation on the Microworkers platform. Because no standard privacy evaluation dataset exists for HDRi, we created one containing people of varying gender, race, and age, shot indoors and outdoors under a large range of lighting conditions. We evaluate tone-mapped versions of these images, obtained with several representative tone-mapping algorithms, using a subjective privacy evaluation methodology. The evaluation was performed with a crowdsourcing-based framework, a popular and effective alternative to traditional lab-based assessment. The results demonstrate a significant loss of privacy when even tone-mapped versions of HDR images are used, compared to typical SDR images shot with a standard exposure.
High Dynamic Range (HDR) imaging has been gaining popularity in recent years. Unlike traditional low dynamic range (LDR) content, HDR content tends to be visually more appealing and realistic, as it can represent the dynamic range of the visual stimuli present in the real world. As a result, more scene details can be faithfully reproduced and visual quality tends to improve. HDR can also be directly exploited in new applications such as video surveillance and other security tasks. Since more scene details are available in HDR, it can help in identifying and tracking visual information that might otherwise be difficult to extract from typical LDR content due to factors such as lack or excess of illumination and extreme contrast in the scene. On the other hand, HDR may also increase privacy intrusion. To display HDR content on a regular screen, tone-mapping operators (TMOs) are used. In this paper, we present a universal method for tuning TMO parameters so as to preserve as many details as possible, which is desirable in security applications. The method's performance is verified on several TMOs by comparing the outcomes of tone mapping with default and optimized parameters. The results suggest that the proposed approach preserves more information, which could benefit security surveillance but, on the other hand, raises the prospect of increased privacy intrusion.
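As an illustration of the kind of tuning loop the abstract describes, here is a minimal Python sketch, assuming a toy single-parameter global TMO and Shannon entropy as the detail measure; the paper's actual operators and objective are not specified in the abstract.

```python
import numpy as np

def tone_map(hdr, a):
    """Toy Reinhard-style global TMO with a single key parameter `a`."""
    l = a * hdr / (np.mean(hdr) + 1e-8)   # scale luminance by the key value
    return l / (1.0 + l)                  # compress to [0, 1)

def detail_score(ldr, bins=256):
    """Shannon entropy of the tone-mapped image as a crude detail measure."""
    hist, _ = np.histogram(ldr, bins=bins, range=(0.0, 1.0))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def tune_parameter(hdr, candidates=np.linspace(0.05, 1.0, 20)):
    """Grid-search the TMO parameter that maximizes the detail score."""
    return max(candidates, key=lambda a: detail_score(tone_map(hdr, a)))
```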
The growing popularity of new High Dynamic Range (HDR) imaging systems raises new privacy issues caused by the methods used for visualization. HDR images require tone-mapping methods for appropriate visualization on conventional, inexpensive LDR displays. These visualization methods can yield completely different renderings, raising several privacy-intrusion issues: some allow perceptual recognition of the individuals, while others reveal no identity at all. Beyond perceptual recognition, a natural question arises: how will computer-based recognition perform on tone-mapped images? In this paper, we present a study in which automatic face recognition based on sparse representation is tested on images produced by common tone-mapping operators applied to HDR images, and we describe its ability to recognize face identity. Typical LDR images are used for training the face recognition.
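For readers unfamiliar with sparse-representation classification, the following hedged Python sketch shows the general scheme, with orthogonal matching pursuit standing in for the usual l1 solver; the paper's actual pipeline, features, and solver are not detailed in the abstract.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def src_classify(train, labels, probe, n_nonzero=10):
    """Sparse-representation classification (SRC) sketch.
    train: (d, n) matrix of vectorized LDR training faces (one per column).
    labels: length-n array of subject ids. probe: length-d test face."""
    A = train / (np.linalg.norm(train, axis=0) + 1e-12)  # l2-normalize columns
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                    fit_intercept=False)
    omp.fit(A, probe)                      # probe ~= A @ x with x sparse
    x = omp.coef_
    # assign the probe to the class whose coefficients best reconstruct it
    best, best_res = None, np.inf
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)
        res = np.linalg.norm(probe - A @ xc)
        if res < best_res:
            best, best_res = c, res
    return best
```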
This paper proposes an algorithm that determines the exposure times for High Dynamic Range (HDR) video by adapting them to the current lighting. Due to the limited capability of the sensor used, only two exposures per frame can be taken. For each image, the histogram corresponding to the exposure time of the other image is estimated. When this estimated histogram is subtracted from the original, the result can be fed to standard exposure control algorithms to adjust the exposure time to the current lighting situation. The subtraction removes the dynamic range already covered by the other image, so the exposure time can be optimized for the residual dynamic range. The algorithm has been compared with state-of-the-art algorithms for HDR imaging and is shown to achieve comparable mean squared error against a ground truth obtained from real-world data. Furthermore, the algorithm can run during video capture, since it requires no exposures beyond those already taken.
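A minimal sketch of the histogram-subtraction idea, assuming a linear sensor response and a simple proportional mean-brightness controller; the paper's concrete exposure control law is not given in the abstract.

```python
import numpy as np

def estimate_other_histogram(img, t_self, t_other, bins=256):
    """Predict the histogram the sensor would see at the other exposure
    time by scaling linear pixel values with the exposure ratio
    (assumes a linear response; saturation is ignored in this sketch)."""
    scaled = np.clip(img.astype(np.float64) * (t_other / t_self), 0, bins - 1)
    hist, _ = np.histogram(scaled, bins=bins, range=(0, bins))
    return hist

def residual_histogram(img, t_self, t_other, bins=256):
    """Remove the dynamic range already covered by the other exposure."""
    own, _ = np.histogram(img, bins=bins, range=(0, bins))
    other = estimate_other_histogram(img, t_self, t_other, bins)
    return np.clip(own - other, 0, None)

def adjust_exposure(t_self, img, t_other, target=128, bins=256):
    """Standard mean-brightness control run on the residual histogram only."""
    res = residual_histogram(img, t_self, t_other, bins)
    levels = np.arange(bins)
    mean = (res * levels).sum() / max(res.sum(), 1)
    return t_self * target / max(mean, 1.0)   # proportional exposure update
```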
Although High Dynamic Range (HDR) imaging has been the subject of significant research in recent years, the goal of acquiring cinema-quality HDR images of fast-moving scenes with an efficient merging algorithm has not yet been achieved. Many merging algorithms have been implemented and developed over the years, but they do not handle all situations and are not fast enough to ensure rapid HDR image reconstruction. In this paper, we present a comparative analysis and study of the available fusion algorithms, and we present our own merging algorithm, designed to be more optimized and faster than existing ones. This merging algorithm is tied to our hardware solution, which allows us to obtain four pictures with different exposures.
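For context, a standard weighted-average merge of multiple exposures looks roughly like the Python sketch below; it assumes a linear sensor response and frames normalized to [0, 1], and it is not the paper's own algorithm, which is not specified in the abstract.

```python
import numpy as np

def merge_hdr(images, exposure_times):
    """Weighted-average HDR merge of differently exposed frames.
    images: list of float arrays in [0, 1]; assumes a linear response."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight: trust mid-tones most
        num += w * img / t                  # radiance estimate from this frame
        den += w
    return num / np.maximum(den, 1e-8)      # per-pixel weighted radiance
```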
Image sharpening is a post-processing technique for artificially enhancing perceived sharpness by shortening the transitions between luminance levels or increasing the contrast at edges. The greatest challenge in this area is to determine the level of perceived sharpness that is optimal for human observers. The task is complex because the enhancement helps only up to a certain threshold; beyond it, the quality of the resulting image drops due to annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to localize this threshold, even though doing so is an important step toward automatic image sharpening. In this work, the use of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from a subjective experiment were used to determine the optimal combination of the partial metric values. Five popular fidelity metrics, SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM, were tested. The performance of the proposed approach was then verified in a subjective experiment.
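A hedged sketch of the anchored-metric idea using SSIM from scikit-image: similarity to the reference is rewarded and similarity to the over-sharpened anti-reference is penalized. The combination weights here are placeholders, since the paper fits them to subjective scores.

```python
from skimage.metrics import structural_similarity as ssim

def anchored_score(reference, processed, anchor, w=(1.0, -1.0)):
    """Anchored full-reference score. The weights `w` would be fitted to
    subjective quality scores; the values here are illustrative only."""
    s_ref = ssim(reference, processed, data_range=1.0)
    s_anchor = ssim(anchor, processed, data_range=1.0)
    return w[0] * s_ref + w[1] * s_anchor

def pick_sharpening_level(reference, anchor, candidates):
    """Choose the sharpening strength whose result maximizes the anchored
    score. `candidates` maps strength -> sharpened image."""
    return max(candidates,
               key=lambda k: anchored_score(reference, candidates[k], anchor))
```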
A common type of steganography conceals an image as a secret message inside another image, normally called the cover image; the resulting image is called the stego image. The aim of this paper is to investigate the effect of using cover images of different quality, and to analyze the use of different bit-planes in terms of robustness against well-known active attacks such as gamma correction, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e., other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting conditions; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This block selection scatters the secret message(s) randomly around the cover image. Different tests have been performed to select a proper block size, which depends on the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for embedding. Experimental results demonstrate that the quality of the cover image affects the outcome when the stego image is subjected to different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognized visually within the stego image.
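A minimal sketch of entropy-driven block selection and higher bit-plane embedding follows; the block size, the one-carrier-pixel-per-block choice, and the omission of the lighting and average criteria are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit block, used to rank candidate blocks."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist[hist > 0] / block.size
    return -np.sum(p * np.log2(p))

def embed_bits(cover, bits, bit_plane=3, block=8):
    """Embed one payload bit per selected block in a higher bit-plane.
    Blocks are ranked by entropy, loosely mimicking (not reproducing)
    the paper's lighting/entropy/average selection pipeline."""
    img = cover.copy()
    h, w = img.shape
    coords = [(r, c) for r in range(0, h - block + 1, block)
                     for c in range(0, w - block + 1, block)]
    coords.sort(key=lambda rc: -block_entropy(
        img[rc[0]:rc[0] + block, rc[1]:rc[1] + block]))
    mask = 0xFF ^ (1 << bit_plane)            # clear the target bit-plane
    for bit, (r, c) in zip(bits, coords):     # one carrier pixel per block
        img[r, c] = (int(img[r, c]) & mask) | (int(bit) << bit_plane)
    return img
```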
Inspired by nature, many application domains could benefit from combining the multi-channel design of insect compound eyes with the refocusing capability of the human eye in one compact configuration. Multi-channel refocusing imaging systems are currently only commercially available in bulky and expensive designs, since classical refocusing mechanisms cannot be integrated in a miniaturized configuration. We designed a wafer-level, multi-resolution, two-channel imaging system with refocusing capabilities using a voltage-tunable liquid lens. One channel captures a wide field-of-view image (2x40°) of the surroundings at low angular resolution (0.078°), whereas a detailed image of a small region of interest (2x7.57°) can be obtained with the high-angular-resolution channel (0.0098°). The latter channel contains the tunable lens and therefore provides the refocusing capability. In this paper, we first discuss the working principle, tunability, and optical quality of a voltage-tunable liquid lens. Based on optical characterization measurements with a Mach-Zehnder interferometer, we designed a tunable lens model; the model and its validation in an imaging setup show diffraction-limited image quality. We then discuss the performance of the designed two-channel imaging system. Both the wide field-of-view and the high-angular-resolution optical channels show diffraction-limited performance, ensuring good image quality. Moreover, we obtained an improved depth-of-field, from 0.254 m to infinity, compared with the currently published state-of-the-art wafer-level multi-channel imaging systems, which exhibit a depth-of-field from 9 m to infinity.
The spatial resolution of imaging systems for airborne and space-borne remote sensing is often limited by image degradation resulting from mechanical vibration of the platform during exposure. A straightforward way to overcome this problem is to actively stabilize the optical axis or to drive the focal plane synchronously with the image motion during exposure. Such a stabilized imaging system usually consists of digital image motion estimation and micromechanical compensation. The performance of this kind of visual servo system is closely related to the precision of the motion estimation and to the time delay: a large time delay produces a larger phase lag between motion estimation and micromechanical compensation, leading to larger uncompensated residual motion and limited bandwidth. This paper analyzes the time delay caused by the image acquisition period and introduces a time-delay compensation method based on Support Vector Machine (SVM) motion prediction. The main idea for canceling the time delay is to predict the current image motion from delayed measurements, using a support-vector-machine-based predictor. A prototype stabilized imaging system has been implemented in the lab. To analyze the influence of time delay on system performance and to verify the proposed cancelation method, comparative experiments were conducted over various vibration frequencies. The experimental results show that the accuracy of motion compensation and the bandwidth of the system can be significantly improved with time-delay cancelation.
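A hedged sketch of SVM-based motion prediction with scikit-learn's SVR: each training sample is a window of past motion estimates and the target is the value `delay` steps ahead. Window length, delay, and kernel settings are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVR

def train_motion_predictor(motion_history, delay=3, window=8):
    """Train an SVR to predict the current image motion from delayed
    measurements (1-D sequence of past motion estimates)."""
    X, y = [], []
    for i in range(len(motion_history) - window - delay):
        X.append(motion_history[i:i + window])
        y.append(motion_history[i + window + delay - 1])
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
    model.fit(np.asarray(X), np.asarray(y))
    return model

def predict_current_motion(model, recent_window):
    """Feed the latest (delayed) window to cancel the acquisition delay."""
    return model.predict(np.asarray(recent_window).reshape(1, -1))[0]
```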
Multichannel imaging systems have several potential applications, such as multimedia, surveillance, medical imaging, and machine vision, and have therefore been a hot research topic in recent years. Such imaging systems, inspired by natural compound eyes, have many channels, each covering only a portion of the total field-of-view of the system. As a result, these systems provide a wide field-of-view (FOV) while having a small volume and a low weight. Different approaches have been employed to realize multichannel imaging systems. We demonstrated that the different channels of the imaging system can be designed such that each channel has different imaging properties (angular resolution, FOV, focal length). Using optical ray-tracing software (CODE V), we have designed a miniaturized multiresolution imaging system containing three channels, each consisting of four aspherical lens surfaces fabricated from PMMA through ultra-precision diamond tooling. The first channel possesses the largest angular resolution (0.0096°) and narrowest FOV (7°), whereas the third channel has the widest FOV (80°) and the smallest angular resolution (0.078°); the second channel has intermediate properties. This multiresolution capability allows different image processing algorithms to be implemented on different segments of an image sensor. This paper presents an experimental proof-of-concept demonstration of the imaging system using a commercial CMOS sensor and gives an in-depth analysis of the results. Experimental images captured with the three channels are compared with the corresponding simulated images, and the experimental MTF of each channel has been calculated from captured images of a slanted-edge test target. This multichannel, multiresolution approach opens the way to low-cost compact imaging systems equipped with smart imaging capabilities.
Time-of-Flight (ToF) methods are used for depth measurement in a variety of applications. There are two main types of ToF measurement: Pulsed Time-of-Flight (PToF) and Continuous-Wave Time-of-Flight (CWToF). PToF techniques are mostly used in combination with a scanning mirror, which makes them poorly suited for imaging purposes. CWToF techniques are mostly used wide-field; they are much faster and better suited for imaging, but cannot be used behind partially-reflective surfaces. In commercial applications, both ToF methods require specific, non-interchangeable hardware. In this paper, we discuss the transformation of a CWToF sensor into a PToF camera capable of imaging and measuring the distances of objects behind a partially-reflective surface, such as the air-water interface of a swimming pool viewed from above. We first created our own depth camera, suitable for both CWToF and PToF. We describe the hardware components required for a conventional ToF camera and compare them with the adapted components that turn it into a range-gating depth imager. We then modeled the distances and images of one or more objects positioned behind a partially-reflective surface and combined the model with measurement data of the optical pulse. A scene was virtualized, and the rays from a ray-tracing software tool were exported to Matlab™. Pulse deformations were then calculated for every pixel, from which the depth information was computed.
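The core pulsed-ToF relation is d = c * t / 2, where t is the round-trip time of the pulse; a minimal sketch, with the range-gating idea noted in the comments:

```python
SPEED_OF_LIGHT = 299_792_458.0   # m/s

def pulse_distance(round_trip_time_s):
    """Pulsed ToF: the pulse travels to the target and back, so the
    distance is half of the round-trip optical path."""
    return SPEED_OF_LIGHT * round_trip_time_s / 2.0

# A ~6.67 ns round trip corresponds to a target roughly 1 m away:
print(pulse_distance(6.67e-9))
# Range gating: opening the shutter only after the echo from the
# partially-reflective surface has passed isolates the later return
# from the object behind it (e.g., below the water surface).
```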
Special Session on Holography and Laser-based Projection Systems
Holoscopic imaging has become a promising glasses-free 3D technology for providing more natural 3D viewing experiences to the end user. Holoscopic systems also allow new degrees of freedom in post-production, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's perceived quality. It is therefore essential to understand in depth the impact of packet losses on decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (L0: 2D; L1: stereo or multiview; L2: full 3D holoscopic). For this, a simple error concealment algorithm is used, which exploits the inter-layer redundancy between multiview and 3D holoscopic content, together with the inherent correlation of 3D holoscopic content, to estimate lost data. Furthermore, we also study how the parameters used to generate the 2D views in the lower layers influence the performance of the error concealment algorithm.
With the advent of modern computing and imaging technologies, digital holography has become practical in many applications such as microscopy, interferometry, non-destructive testing, data encoding, and certification. In this respect, the need for an efficient representation technology becomes pressing. However, microscopic holographic off-axis recordings have characteristics that differ significantly from those of regular natural imagery, because they represent a recorded interference pattern that mainly manifests itself in the high-frequency bands. Since regular image compression schemes are typically based on a Laplace frequency distribution, they are unable to optimally represent such holographic data. Unlike most image codecs, however, the JPEG 2000 standard can be adapted to efficiently cope with images containing such alternative frequency distributions by applying the arbitrary wavelet decompositions of Part 2. Employing packet decompositions alone already significantly improves the compression performance for off-axis holographic images over that of regular image compression schemes, and extending JPEG 2000 with directional wavelet transforms yields even higher compression efficiency. Such an extension to the standard would only require signaling the applied directions and would not impact any other existing functionality. In this paper, we show that wavelet packet decomposition combined with directional wavelet transforms provides efficient lossy-to-lossless compression of microscopic off-axis holographic imagery.
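For illustration, a full 2-D wavelet packet decomposition (the kind of arbitrary decomposition JPEG 2000 Part 2 permits) can be computed with PyWavelets to inspect where a hologram's fringe energy concentrates; the wavelet choice and depth below are assumptions, and `hologram` is any 2-D numpy array holding the recorded interferogram.

```python
import numpy as np
import pywt

def packet_energy_map(hologram, wavelet="db4", maxlevel=3):
    """Full 2-D wavelet packet decomposition and per-subband energy,
    showing that off-axis fringes concentrate in high-frequency packets."""
    wp = pywt.WaveletPacket2D(data=hologram, wavelet=wavelet,
                              maxlevel=maxlevel)
    energies = {}
    for node in wp.get_level(maxlevel):       # all subbands at the deepest level
        energies[node.path] = float(np.sum(node.data ** 2))
    return energies
```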
Laser projection devices should be designed to maximize their luminous efficacy and color gamut, for two main reasons. First, whether stand-alone or embedded in other products, they may be battery-powered, so battery lifetime matters. Second, the increasing use of lasers to project images calls for consideration of eye-safety issues: the brightness of the projected image may be limited by the Class II accessible emission limit, and there is reason to believe that current laser-beam-scanning projection technology is already close to the power ceiling imposed by eye-safety limits. Consequently, it is desirable to improve luminous efficacy, increasing the output luminous flux for the same eye-safe optical power limit while maintaining or improving the color gamut. Here we present a novel study of the combination of four laser wavelengths to maximize both the color gamut and the efficacy of producing white. First, an analytic method is derived to calculate efficacy as a function of the four laser wavelengths and four laser powers. Second, we provide a new way to present the results: an efficacy versus color-gamut-area diagram that summarizes the performance of any wavelength combination for projection purposes. The results indicate that the maximal efficacy for the D65 white point is achievable only with a suitable combination of laser power ratios and wavelengths.
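The efficacy calculation rests on the luminous efficacy of radiation, K = 683 * sum(P_i * V(lambda_i)) / sum(P_i) in lm/W, where P_i are the laser powers and V the photopic luminosity function. A minimal sketch follows; the V values are approximate table entries and the wavelengths and power ratios are illustrative, not the paper's optima.

```python
# Approximate CIE 1924 photopic luminosity values V(lambda) at example
# wavelengths (nm); a real optimization would interpolate the full table.
V = {450: 0.038, 520: 0.710, 590: 0.757, 640: 0.175}

def luminous_efficacy(powers_by_wavelength):
    """K = 683 * sum(P_i * V(lambda_i)) / sum(P_i), in lm/W."""
    num = sum(p * V[wl] for wl, p in powers_by_wavelength.items())
    den = sum(powers_by_wavelength.values())
    return 683.0 * num / den

# Example four-laser mix (powers in watts, ratios illustrative only):
print(luminous_efficacy({450: 0.2, 520: 0.5, 590: 0.8, 640: 0.6}))
```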
Medical image analysis has become an important tool for improving medical diagnosis and planning treatments. It involves volume or still-image segmentation, which plays a critical role in understanding image content by facilitating extraction of the anatomical organ or region of interest, and it may also support the construction of reliable computer-aided diagnosis systems. Level set methods, in particular, have emerged as a general framework for image segmentation; such methods are mainly based on gradient information and provide satisfactory results. However, the noise inherent in images and the lack of contrast between adjacent regions hamper the performance of these algorithms, so other proposals have appeared in the literature, for instance, characterizing regions with statistical parametric models to guide the level set evolution. In this paper, we study the influence of texture on level-set-based segmentation and propose the use of Hermite features, incorporated into the level set model, to improve organ segmentation, which may be useful for quantifying left-ventricular blood flow. The proposal was compared against other texture descriptors such as local binary patterns, image derivatives, and Hounsfield low-attenuation values.
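Hermite analysis filters are, up to normalization, Gaussian derivative filters, so Hermite-like texture features can be sketched as below; the scale and maximum order are assumptions, and the paper's exact feature set is not given in the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hermite_features(image, sigma=2.0, max_order=2):
    """Cartesian Hermite-like texture features: responses of Gaussian
    derivative filters of total order <= max_order (order 0 is the
    smoothed image itself)."""
    feats = []
    for ox in range(max_order + 1):
        for oy in range(max_order + 1 - ox):
            feats.append(gaussian_filter(image.astype(np.float64),
                                         sigma=sigma, order=(oy, ox)))
    return np.stack(feats, axis=-1)   # (H, W, n_features) per-pixel descriptor
```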
Segmentation of knee cartilage is useful for the timely diagnosis and treatment of osteoarthritis (OA). This paper presents a semiautomatic segmentation technique based on Active Shape Models (ASM) combined with Local Binary Patterns (LBP) and its variants to describe the texture surrounding the femoral cartilage. The proposed technique is tested on a 16-image database of different patients and validated with the leave-one-out method. We compare different segmentation techniques: ASM-LBP, ASM-medianLBP, and the ASM proposed by Cootes. The ASM-LBP approaches are tested with different radii to decide which of them best describes the cartilage texture. The results show that ASM-medianLBP performs better than ASM-LBP and ASM. Furthermore, we add a routine that improves robustness against two principal problems: over-segmentation and initialization.
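As a reference point, here is a hedged sketch of one median-LBP variant found in the literature, thresholding the 8 neighbors against the local median instead of the center pixel for noise robustness; the paper's exact median-LBP definition may differ.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_lbp(gray, radius=1):
    """Median-LBP sketch: each of the 8 neighbors at the given radius is
    thresholded against the local median rather than the center pixel."""
    img = gray.astype(np.float64)
    ref = median_filter(img, size=2 * radius + 1)   # local median reference
    code = np.zeros_like(img, dtype=np.int32)
    offsets = [(-radius, -radius), (-radius, 0), (-radius, radius),
               (0, radius), (radius, radius), (radius, 0),
               (radius, -radius), (0, -radius)]
    for bit, (dr, dc) in enumerate(offsets):
        # np.roll wraps at the borders, which is acceptable for a sketch
        neighbor = np.roll(np.roll(img, dr, axis=0), dc, axis=1)
        code |= ((neighbor >= ref).astype(np.int32) << bit)
    return code.astype(np.uint8)      # 8-bit LBP code per pixel
```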
Training over-complete dictionaries that facilitate a sparse representation of the image leads to state-of-the-art results in compressed sensing image restoration. A training sparsity must be specified during dictionary training, and a recovering sparsity must likewise be set during image recovery; we find that the recovering sparsity significantly affects the quality of image reconstruction. To further improve the accuracy of compressed sensing image recovery, we propose a method that optimally estimates the recovering sparsity from the training sparsity and uses it to control the reconstruction, achieving better results. The method consists of three procedures. First, we forecast the plausible sparsity range by analyzing a large test data set; we find that the optimal recovering sparsity is typically 3 to 5 times the training sparsity. Second, to estimate the optimal recovering sparsity precisely, we randomly choose only a few samples from the compressed sensing measurements and reconstruct the original image patches using the sparsity candidates in this range. Third, the sparsity yielding the best recovered result is chosen as the optimal recovering sparsity for the full image reconstruction. The computational cost of the estimation is relatively small, and the reconstruction can be much better than with the traditional method. Experimental results show that the PSNR of images recovered with our estimation method can be up to 4 dB higher than with the traditional method, which omits the sparsity estimation.
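A hedged sketch of the sparsity-estimation step: candidates span 3 to 5 times the training sparsity, per the paper's observation. The abstract does not spell out the selection criterion, so this sketch scores each candidate by its residual on held-out measurement entries (a cross-validation proxy); `A` is the effective dictionary (measurement matrix times trained dictionary) and `ys` a handful of measurement vectors.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def estimate_recovering_sparsity(A, ys, training_sparsity,
                                 holdout=0.25, seed=0):
    """Return the candidate sparsity whose OMP reconstruction best
    predicts held-out measurement entries across the sample vectors."""
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    hold = rng.choice(m, size=int(holdout * m), replace=False)
    fit = np.setdiff1d(np.arange(m), hold)

    def score(k):
        err = 0.0
        for y in ys:
            omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                            fit_intercept=False)
            omp.fit(A[fit], y[fit])          # recover with sparsity k
            err += np.linalg.norm(y[hold] - A[hold] @ omp.coef_) ** 2
        return err

    candidates = range(3 * training_sparsity, 5 * training_sparsity + 1)
    return min(candidates, key=score)
```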
The refractive indices of optically transparent glasses are measured at only a few wavelengths. To calculate the refractive index at any wavelength, a so-called Sellmeier series is used to approximate the wavelength-dependent refractive index. Such a Sellmeier representation assumes an absorption-free (i.e., lossless) material; in optically transparent glasses this assumption is valid, since their absorption is very low. However, optical filter glasses often exhibit rather high absorbance in certain regions of the spectrum, and an exact description of the wavelength-dependent refractive index is essential for optimized designs in sophisticated optical applications. Digital cameras use an IR cut filter to ensure good color rendition and image quality; to reduce ghost images caused by reflections and to be nearly angle-independent, absorbing filter glass is used, e.g., the blue glass BG60 from SCHOTT. As digital cameras improve their performance, the IR cut filter must improve too, so the refractive index (dispersion) of the glasses used must be known accurately. But absorbing filter glass is not lossless, as a Sellmeier representation requires, and the index is very difficult to measure in the absorption region of the filter glass. We have devoted considerable effort to measuring the refractive index of absorbing filter glass at specific wavelengths, even in the absorption region, and we describe how such a measurement is done. In addition, we assess the use of a Sellmeier representation for filter glasses; it turns out that in most cases a Sellmeier representation can be used even for absorbing filter glasses. Finally, Sellmeier coefficients for approximating the refractive index are given for different filter glasses.
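The three-term Sellmeier equation is n^2(lambda) = 1 + sum_i B_i * lambda^2 / (lambda^2 - C_i), with lambda in micrometers. A minimal sketch follows, using published SCHOTT N-BK7 coefficients as a stand-in, since the filter-glass coefficients (e.g., for BG60) are given in the paper, not the abstract.

```python
import numpy as np

def sellmeier_index(wavelength_um, B, C):
    """Three-term Sellmeier equation:
    n^2 = 1 + sum(B_i * l^2 / (l^2 - C_i)), wavelength l in micrometers."""
    l2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * l2 / (l2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)

# Published coefficients for the transparent glass SCHOTT N-BK7:
B = (1.03961212, 0.231792344, 1.01046945)
C = (0.00600069867, 0.0200179144, 103.560653)
print(sellmeier_index(0.5876, B, C))   # ~1.5168 at the helium d-line
```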
This paper considers the energetic sensitivity of a system with an optical equisignal zone. Energetic sensitivity is a criterion for choosing the components of such a system and determines its potential accuracy. The notion of energetic sensitivity is revised for a spectrum, taking the spectral response of the sensor into account. A new method for evaluating the position of the optical equisignal zone is proposed, based on evaluating the position on a CMOS array. Digital signal processing is compared with the analog approach: the digital method has some advantages over the analog one in functionality, but still yields to it in speed.