This PDF file contains the front matter associated with SPIE Proceedings Volume 11402, including the Title Page, Copyright Information, and Table of Contents.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.* *Shibboleth/Open Athens users: please sign in to access your institution's subscriptions. To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In the past few years the integral-imaging, or lightfield, concept has been applied successfully to microscopy. In particular, for fluorescent samples integral, or lightfield, microscopy offers the advantage of capturing the 3D information in a single shot. Owing to its potential utility, integral microscopy now faces many challenges, such as improving the resolution and depth of field, developing and optimizing specially adapted reconstruction algorithms, and finding applications in which lightfield microscopy outperforms existing techniques. This contribution reviews the recent advances in integral microscopy and poses the key questions about the progress of the technique.
We overview a previously reported three-dimensional (3D) polarimetric integral imaging method and algorithms for extracting 3D polarimetric information in low-light environments. A 3D integral imaging reconstruction algorithm is first applied to the originally captured two-dimensional (2D) polarimetric images. The signal-to-noise ratio (SNR) of the 3D reconstructed polarimetric image is enhanced compared with the 2D images. The Stokes polarization parameters are measured and used to calculate the 3D volumetric degree of polarization (DoP) image of the scene. Statistical analysis of the 3D DoP can extract the polarimetric properties of the scene. Experimental results verify that the overviewed method outperforms conventional 2D polarimetric imaging in low-illumination environments.
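The Stokes-to-DoP step described above can be sketched with the standard textbook formulas. This is a minimal illustration, not the authors' code; the four polarizer-angle intensity inputs and the restriction to the linear degree of polarization are our assumptions.

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensity images.
    (The circular component S3 is omitted in this linear-only sketch.)"""
    s0 = i0 + i90
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def degree_of_linear_polarization(s0, s1, s2, eps=1e-12):
    """Per-pixel DoLP = sqrt(S1^2 + S2^2) / S0."""
    return np.sqrt(s1**2 + s2**2) / (s0 + eps)

# Fully horizontally polarized light: all intensity at 0 degrees.
i0 = np.full((4, 4), 2.0)
i90 = np.zeros((4, 4))
i45 = i135 = np.full((4, 4), 1.0)
s0, s1, s2 = stokes_from_intensities(i0, i45, i90, i135)
dop = degree_of_linear_polarization(s0, s1, s2)
```

For fully polarized input the per-pixel DoP map is 1; statistics of such a map over a 3D reconstructed volume are what the overviewed method analyzes.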
A new branch of research dedicated to lightfield imaging has recently seen important growth in the microscopy community: integral, or lightfield, microscopy. One recent implementation of a lightfield microscope is the Fourier integral microscope (FiMic). In this setup a microlens array (MLA) is placed at the Fourier plane of the objective lens; therefore, the sensor region behind each microlens captures the spatial information of a different perspective of the sample. The captured spatio-angular information can be used to reconstruct the 3D volume. A very active field of research among microscopists concerns objects that have extremely low contrast or that are completely transparent. In order to obtain a 3D reconstruction of a transparent sample, our work has focused on combining the FiMic with dark-field illumination. In this way a 3D reconstruction of phase objects is achieved.
The light field concept can correctly and completely describe the distribution of rays in 3D space within the theory of geometrical optics. However, the amount of data is huge, and it is not easy to capture or process. Though light field 3D displays are almost ideal in principle, they are not really practical given the huge number of pixels required. To compress the data, we proposed the visually equivalent light field (VELF), which exploits a characteristic of human vision. Though several cameras are needed, VELF can be captured by a camera array. Reconstructing the ray distribution involves linear blending, and this process is so simple that it can be performed optically in the VELF3D display. The display produces high image quality because its high pixel-usage efficiency overcomes the tradeoff between resolution and the directional density of rays. In this paper, we summarize the relationship between the characteristics of human vision and VELF. We introduce the VELF3D display, which consists of a horizontal RGB-stripe LCD panel and a parallax barrier whose slit spacing is almost the same as the pixel pitch. Though it is similar to a conventional parallax-barrier autostereoscopic 3D display, it can reproduce rays correctly for human vision. A strong sense of presence is induced by the display's smooth and exact motion parallax, and its resolution is high enough to display characters. Head tracking allows the viewing zone to be greatly expanded while maintaining smooth motion parallax. Since image capture and display are very simple, VELF is suitable for real-time live-action applications with high image quality.
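The linear blending that the VELF3D display performs optically is, numerically, just a weighted average of neighboring camera views. The sketch below shows the operation in isolation (our illustration; the view arrays and weight parameter are assumptions, not the authors' pipeline):

```python
import numpy as np

def blend_views(view_a, view_b, w):
    """Linear blending of two neighboring camera views: weight w in [0, 1]
    moves the synthesized ray direction from view_a toward view_b."""
    return (1.0 - w) * view_a + w * view_b

# A ray direction halfway between the two cameras averages the two views.
left = np.zeros((2, 3))
right = np.ones((2, 3))
mid = blend_views(left, right, 0.5)
```

Because the operation is a fixed per-pixel weighted sum, it maps directly onto the transmissive optics of the display, which is the point the abstract makes.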
A light field 3D display, which reconstructs a true 3D scene by reproducing directional samples of the light rays apparently emitted by the scene and viewed from slightly different viewing points within a viewing window, is one of the few display architectures that can be implemented in a compact head-mounted form. A head-mounted 3D light field display (LF-HMD) is potentially capable of rendering correct or nearly correct focus cues and therefore of addressing the well-known vergence-accommodation conflict in conventional stereoscopic displays for virtual and augmented reality applications. In recent years, efforts have been made to engineer this type of display. In this presentation, we review the recent progress in developing this display method and discuss existing challenges and opportunities.
We propose a novel 3D display that combines an aerial image with an image on a contemporary flat-panel display. We design an optical system that forms an aerial image in front of the flat-panel display by utilizing AIRR (aerial imaging by retro-reflection). The aerial image formed with AIRR floats in mid-air and is visible without special equipment such as glasses. This paper proposes a new optical design for a two-layered display that consists of an aerial display and a flat-panel display. Our previously reported two-layered display employed a half mirror, which reduced the luminance of both the aerial image and the flat-panel image. This work improves the luminance of both images by use of a reflective polarizer: the aerial image and the flat-panel image are both about twice as bright as in the previous system. Furthermore, we have demonstrated DFD (depth-fused 3D) display between the aerial image and the flat-panel image. The aerial image is formed in front of the flat-panel image with a small gap so that, from the central viewing position, the aerial image appears to overlap the flat-panel image. The two images are then fused into a single image perceived between the two layers. Experimental results on DFD perception show that the perceived depth can be adjusted with the luminance ratio of the two images. Thus, we have succeeded in showing 3D images between the aerial image and the flat-panel display, which yields the new effect of a 3D image popping out of the flat-panel display.
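The luminance-ratio depth adjustment in DFD displays is often described with a first-order luminance-weighted model, sketched below. This is a common simplification for illustration only; the paper reports measured perception, not this formula.

```python
def dfd_depth(z_front, z_back, lum_front, lum_back):
    """First-order DFD model: two aligned overlapping layers fuse into a
    single percept at the luminance-weighted average of the layer depths."""
    w = lum_front / (lum_front + lum_back)
    return w * z_front + (1.0 - w) * z_back

# Equal luminance puts the percept midway between the layers; a brighter
# front layer pulls the percept toward the front.
mid = dfd_depth(0.0, 10.0, 1.0, 1.0)
near = dfd_depth(0.0, 10.0, 3.0, 1.0)
```

Under this model, sweeping the luminance ratio sweeps the perceived depth continuously between the aerial layer and the flat-panel layer.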
We introduce three experiments on depth perception from monocular motion parallax, aimed at more realistic depth representation in 3D applications. Motion parallax is a physiological depth cue. When the head moves, depth can be perceived from motion parallax even with one eye, because motion parallax alone provides enough information for depth estimation. In our first experiment, we evaluated the depth perceived from monocular motion parallax with passive head movements. The results show that perceived depth with passive head movement is comparable to that with active head movement. For more realistic 3D scenes, the critical factor of the visual information in motion parallax must be clarified. Therefore, in our second experiment, we evaluated the depth perceived from motion parallax with and without direction changes of the stimulus movement. The results suggest that visual information at the time of direction change plays an important role in stable and unambiguous depth perception. In our third experiment, we evaluated the minimum duration of a motion-parallax stimulus with direction change needed for stable and unambiguous depth perception. The results indicate that a stimulus duration of only 15% of the total trial time provides stable and unambiguous depth perception if direction changes of the visual stimulus are presented. These findings can be applied to 3D applications using motion parallax.
Holographic replay traditionally involves back-propagation of the object field from the hologram plane to the 3D volume of interest where the original object is expected to be located. While this operation has traditionally been considered to "reconstruct" the 3D object, we show that the replay process is in fact the Hermitian transpose of the hologram formation process. Based on this understanding, we develop a sparsity-assisted iterative algorithm for true 3D reconstruction of the object from the object wavefront in the hologram plane. The inverse solution is illustrated for the case of particle-field holograms, where the object volume consists of sparsely populated particles.
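The adjoint relationship invoked here can be checked numerically for a single scalar angular-spectrum propagation step: the Hermitian transpose of propagation by +z is propagation by -z, so the inner-product identity ⟨Au, v⟩ = ⟨u, Aᴴv⟩ must hold. This is a sketch under the assumption of scalar propagation between parallel planes, not the authors' full forward model.

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a 2D complex field over distance z (angular-spectrum method,
    evanescent components suppressed)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    fsq = fx[:, None]**2 + fx[None, :]**2
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - fsq))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z))

# Adjoint check: <A u, v> == <u, A^H v>, with A^H realized as propagation by -z.
rng = np.random.default_rng(0)
shape = (32, 32)
u = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
v = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
lhs = np.vdot(angular_spectrum(u, 633e-9, 5e-6, 1e-3), v)
rhs = np.vdot(u, angular_spectrum(v, 633e-9, 5e-6, -1e-3))
```

Replay-as-adjoint is exactly this: applying Aᴴ to hologram-plane data, which differs from applying the true inverse A⁻¹ and motivates the iterative, sparsity-assisted inversion.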
In off-axis digital holographic microscopy, a camera records the spatial interference intensity pattern between light scattered from the specimen and the unperturbed reference light. Digital propagation using a numerical reconstruction algorithm allows both phase-contrast and amplitude-contrast images of the sample to be retrieved. This is possible only when the exact distance between the image sensor (such as a CCD) plane and the image plane is provided. In this paper, we give an overview of our work on a deep-learning convolutional neural network (CNN) with a regression layer as the top layer to estimate the best focus distance. The experimental results obtained using microsphere beads and red blood cells show that the proposed method can accurately estimate the propagation distance from a filtered hologram. This method can significantly accelerate the numerical reconstruction, since the correct focus is provided by the CNN model with no need for digital propagation at multiple distances.
The field of nanophotonics is a fascinating branch of optics that has opened a whole new perspective on light-matter interaction, or more specifically on controlling, or 'molding the flow' of, light. Periodic nanostructures in one, two, and three dimensions have been the active building blocks of such light manipulation, resulting in numerous applications and finally emerging as one of the most interesting fields in the science community over the last decade. Such applications range from filtering, optical guiding, field enhancement, confinement/light-trapping, optical computing, and signal processing to a whole set of parametric sensing schemes that can affect the resonance mechanisms associated with these nanophotonic structures. Although there is a series of fabrication approaches, the present review covers the state of the art in phase-controlled interference-lithography-based fabrication for different application-oriented possibilities, exploiting the benefits of scalability and reconfigurability.
Non-interferometric three-dimensional (3D) fluorescence imaging techniques for bio-applications are presented. For bio-applications, efficient illumination is very important to avoid damaging living cells. We have proposed a method to obtain the complex amplitude of partially coherent fluorescence light by using the transport of intensity equation (TIE). We show experimental results of reconstructed fluorescence distributions of fluorescent beads and living plant cells obtained by TIE.
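The TIE relates the axial intensity derivative to the phase; for uniform in-focus intensity it reduces to a Poisson equation, -k ∂I/∂z = I₀∇²φ, which an FFT solver inverts directly. The sketch below is our minimal uniform-intensity illustration, not the authors' partially coherent formulation.

```python
import numpy as np

def tie_phase(didz, i0, k, dx):
    """Recover phase from the axial intensity derivative via the TIE,
    assuming uniform in-focus intensity i0:  -k dI/dz = i0 * laplacian(phi).
    The Laplacian is inverted with an FFT-based Poisson solver."""
    n = didz.shape[0]
    f = np.fft.fftfreq(n, d=dx)
    fsq = f[:, None]**2 + f[None, :]**2
    fsq[0, 0] = np.inf                     # drop the undetermined mean (piston)
    rhs = -k * didz / i0
    return np.fft.ifft2(np.fft.fft2(rhs) / (-4 * np.pi**2 * fsq)).real

# Synthetic check: a single-frequency phase has an analytic Laplacian.
n, dx, k, i0 = 64, 1.0, 1.0, 1.0
x = np.arange(n) * dx
phi_true = np.cos(2 * np.pi * 4 * x / n)[None, :] * np.ones((n, 1))
lap_phi = -(2 * np.pi * 4 / n)**2 * phi_true     # analytic Laplacian
didz = -(i0 / k) * lap_phi
phi_rec = tie_phase(didz, i0, k, dx)
```

In practice ∂I/∂z is estimated from two or more defocused intensity images; the abstract's contribution is extending this intensity-only recovery to partially coherent fluorescence light.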
In phase-shifting digital holographic microscopy (PS-DHM), the reconstructed phase map is obtained after processing several holograms of the same scene with a phase shift between them. Most reconstruction algorithms in PS-DHM require an accurate and known phase shift between the holograms, a requirement that limits the applicability of PS-DHM. This work presents an iterative blind phase-shift extraction method based on demodulating the different components of three-frame holograms with arbitrary and unequal phase shifts. Both simulated and experimental results demonstrate the feasibility and accuracy of the proposed technique.
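For contrast with the blind method above, the textbook baseline it relaxes is fixed-shift demodulation. With known shifts of 0, 2π/3 and 4π/3, three frames determine the phase in closed form: tan φ = √3(I₃ − I₂)/(2I₁ − I₂ − I₃). A minimal sketch (our synthetic fringes, not the paper's data):

```python
import numpy as np

def three_step_phase(i1, i2, i3):
    """Three-step phase-shifting demodulation with KNOWN shifts of
    0, 2*pi/3 and 4*pi/3 (the blind method estimates unknown shifts)."""
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Synthetic fringe frames with a smooth phase map in (-pi, pi).
n = 64
x = np.linspace(0.0, 1.0, n)
phi = 1.5 * np.sin(2 * np.pi * x)[None, :] * np.ones((n, 1))
a, b = 1.0, 0.5                                   # background and modulation
shifts = (0.0, 2 * np.pi / 3, 4 * np.pi / 3)
i1, i2, i3 = (a + b * np.cos(phi + d) for d in shifts)
phi_rec = three_step_phase(i1, i2, i3)
```

When the actual shifts deviate from the assumed values this formula produces systematic phase errors, which is exactly the limitation the blind extraction method addresses.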
We overview our recently published multi-dimensional integral-imaging-based system for underwater optical signal detection. For robust detection, an optical signal propagating through turbid water is encoded with spread-spectrum techniques using multiple light sources. An array of optical sensors captures video sequences of elemental images, which are reconstructed using multi-dimensional integral imaging and then processed with a 4D correlation to detect the transmitted signal. The area under the curve (AUC) and the number of detection errors are used as metrics to assess the performance of the system. The overviewed system successfully detects an optical signal under higher-turbidity conditions than is possible with conventional sensing and detection approaches.
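The core idea of spread-spectrum detection is that correlating a noisy received signal against the known spreading code concentrates the signal energy into a sharp peak. A 1D toy version (our illustration with a random bipolar code; the paper's system uses 4D correlation over reconstructed video volumes):

```python
import numpy as np

def detect_code_lag(received, code):
    """Return the lag at which the spreading code correlates best with the
    received signal; a strong, isolated peak indicates detection."""
    corr = np.correlate(received, code, mode="valid")
    return int(np.argmax(corr))

rng = np.random.default_rng(1)
code = rng.choice([-1.0, 1.0], size=63)         # bipolar pseudo-random code
clean = np.zeros(256)
clean[80:80 + code.size] = code                 # code transmitted at lag 80
received = clean + 0.5 * rng.standard_normal(clean.size)  # channel noise
lag = detect_code_lag(received, code)
```

The correlation peak height relative to the off-peak background is what metrics such as the AUC summarize across turbidity levels.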
If the resolution of a compressively sensed image is not satisfactory, then typically a new acquisition session with more samples must be conducted, and the reconstruction process must be run from scratch. Here we present a method to capture LiDAR images by progressively increasing the resolution of the reconstructed 3D image. The method prescribes the additional set of samples required to improve the resolution of a compressively sensed LiDAR image. A reconstruction procedure that uses the previously captured coarser-resolution 3D image together with the additional samples is then applied. The reconstruction is realized by means of a specially designed deep neural network. This resolution-refinement process is efficient in the sense that only the samples needed for the next resolution level are captured, and the refinement is performed progressively.
Quantitative phase imaging is widely studied for applications such as bio-imaging and industrial inspection. Quantitative phase imaging methods divide into interferometric and non-interferometric, intensity-based approaches. Interferometry often uses separate object and reference arms, which complicates the optical setup. The transport of intensity equation (TIE) has been used for non-interferometric quantitative phase imaging: it allows a phase distribution to be retrieved from a through-focus series of an object. Obtaining a through-focus series requires mechanical scanning of the object or the camera, which is not suitable for quantitative phase imaging of dynamic phenomena. In this presentation, some of our proposed scan-less methods are presented, together with numerical and/or experimental results.
We present a new tool based on unsupervised learning that keeps the recorded pictorial information as close as possible to the original by denoising the produced images, thereby enabling more informed decisions. The algorithm is used to clean off-axis holograms. To denoise the acquired off-axis holograms, our technique takes advantage of prior knowledge about the expected image and uses it to remove the noise, providing a significantly clearer image. We applied the technique to off-axis holograms of individual sperm cells acquired without labeling.
There has been great progress in the development of laser-based projection displays, fluorescence microscopy, digital holographic microscopy, and various quantitative phase microscopy (QPM) techniques for biological cells and tissues. We present the effect of partially spatially coherent illumination on speckle-free laser projection imaging, homogeneous illumination for fluorescence microscopy, and speckle-free, low-phase-noise QPM. We further note that with partially spatially coherent monochromatic light one can obtain improved image sharpness, reduced coherent noise, and improved phase noise and resolution in QPM. At the same time, the other remarkable properties of laser light, i.e., its high monochromaticity, high brightness (high degeneracy), and directionality, are fully utilized in realizing all these techniques. Results of various digital holographic phase microscopy techniques, fluorescence microscopy, and projection display will be presented.
The hallmark of digital holographic microscopy (DHM) is the ability to perform quantitative phase imaging (QPI) with speed, accuracy, high resolution, high temporal stability, and polarization sensitivity (PS). We have developed a DHM system based on a Fresnel biprism (FB) with the five desired features above. The system presented here provides accurate quantitative PS phase images and the reconstruction of real three-dimensional information in a simple, compact, and cost-effective way. We have demonstrated the performance of our method experimentally by reconstructing quantitative, calibrated phase images of standardized samples and of U87 human glioblastoma cells.
This paper proposes optical systems that form a 3D-shaped information screen with aerial imaging by retro-reflection (AIRR). AIRR employs three elements: a light source, a beam splitter, and a retro-reflector. The retro-reflector converges the retro-reflected light to the position plane-symmetric to the light source with respect to the beam splitter. AIRR features a wide viewing angle, large-size scalability, and low cost with a mass-production process. Normally, a flat-panel display (FPD) is used as the light source, so the formed aerial image has a flat 2D shape. In this paper, a projector and a 3D-shaped screen compose the light source. We have investigated optical arrangements that show a complex 3D screen in mid-air over a wide viewing angle. The complex-shape screen is surrounded by retro-reflectors, beneath which the projector is located. We have formed an aerial polyhedral image as well as an aerial curved information screen. Furthermore, we have investigated the so-called hollow-face illusion by showing an aerial 3D image. Conventionally, the hollow-face illusion refers to a concave mask of a face being perceived as a normal convex face when observed with a single eye. We used a 3D-shaped screen and projected a texture onto it; this projection-mapped 3D screen serves as the light source in AIRR. We have formed a variety of aerial 3D images whose depth is inverted, and we have succeeded in evoking the depth-inverted perception even under binocular viewing conditions.
We have developed a new optical device and an aerial-imaging display system using that device. The system's features are that the pop-out distance of the aerial image is long and that no space of the same depth as the pop-out distance is needed behind the optical device.
The optical device is a retro-reflective mirror array formed by arranging corner reflectors in strips. We designed the layout of the corner reflectors to form a real image at the designed depth by changing the angles of the corner reflectors depending on their location. The device thus has retro-reflective characteristics in its horizontal direction and specular-reflection characteristics in its vertical direction.
Immersion in virtual reality content drives the satisfaction of the people enjoying it. Covering users with screens or displays increases their immersion; however, conventional immersive displays isolate them from the surrounding people. In this study, to achieve both user immersion and face-to-face communication between users and an audience, we propose a new optical system based on aerial imaging by retro-reflection (AIRR). AIRR needs only a light source, a beam splitter, and a retro-reflector. By placing these components in a Z-shape, aerial images are formed in front of the user, and the audience can see virtual images of the aerial images. Surrounding the user with this device gives an immersive feeling with less sense of enclosure. The device can display still images and movies, depending on the light source, such as an LCD, an LED display, or a light-field display. In combination with stereo cameras or IR sensors, users can manipulate the aerial images surrounding them. Without wearing special devices, our immersive aerial interface allows users and audience to share visual information in real time and work together.
Point-of-view generation creates virtual views between two or more cameras observing a scene. This field receives attention from multimedia markets, because sufficiently realistic point-of-view generation would allow free navigation between otherwise fixed points of observation. The new views must be interpolated from sampled data, aided by geometrical information relating the real camera poses, the objects in the scene, and the desired point of view. Normally several steps are involved, known globally as the Structure-from-Motion (SfM) method. Our study focuses on the last stage: image interpolation based on the disparities between known cameras. In this paper, a new method is proposed that uses depth maps generated by a single camera, named SEBI, allowing more efficient hole filling in the presence of occlusions. Occlusions are considered during interpolation by creating an occlusion map and an uncertainty map from the depth information that SEBI cameras provide.
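Depth-guided view interpolation can be sketched as a forward warp: each pixel shifts by a fraction of its disparity d = f·B/Z, and target pixels that receive no source pixel form the occlusion map. This is a bare-bones 1D-disparity illustration with assumed parameter names, not the SEBI pipeline.

```python
import numpy as np

def synthesize_view(image, depth, focal, baseline, alpha):
    """Forward-warp `image` toward a virtual camera at fraction `alpha` of the
    baseline, using per-pixel disparity d = focal * baseline / depth.
    Unfilled target pixels are flagged in the returned occlusion map."""
    h, w = image.shape
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    disparity = focal * baseline / depth
    for y in range(h):
        for x in range(w):
            xt = int(round(x + alpha * disparity[y, x]))
            if 0 <= xt < w:
                out[y, xt] = image[y, x]
                filled[y, xt] = True
    return out, ~filled        # holes to be filled from another camera or prior

# Constant depth => the warp reduces to a pure horizontal shift.
img = np.arange(12.0).reshape(3, 4)
dep = np.full((3, 4), 10.0)
warped, occluded = synthesize_view(img, dep, focal=1.0, baseline=20.0, alpha=0.5)
```

The occlusion map is the key output for the method above: it marks exactly the pixels whose values must come from the other camera or from uncertainty-weighted inpainting.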
This paper overviews new secret-key sharing schemes that can be useful in optical cryptosystems based on double random phase encoding. These key-sharing schemes, which are complex-waveform versions of the Diffie-Hellman (DH) key exchange, allow the shared secret key to be established in Fourier-optics-based cryptosystems. We show that the presented schemes can also be applied to cryptosystems in which multiple users share a secret key securely.
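For readers unfamiliar with the pattern being transplanted into optics, textbook modular Diffie-Hellman looks like this. The paper's schemes replace modular exponentiation with complex-waveform (Fourier-optics) operations, but the exchange structure is the same; the toy parameters below are ours.

```python
import secrets

# Toy-sized group parameters (real deployments use >= 2048-bit groups).
p = 2**127 - 1          # a Mersenne prime
g = 3

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent
A = pow(g, a, p)                    # Alice publishes A = g^a mod p
B = pow(g, b, p)                    # Bob publishes B = g^b mod p
key_alice = pow(B, a, p)            # Alice computes (g^b)^a mod p
key_bob = pow(A, b, p)              # Bob computes (g^a)^b mod p
```

Both parties arrive at the same key g^(ab) mod p without ever transmitting a secret, which is the property the complex-waveform versions reproduce for double-random-phase-encoded cryptosystems.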
Over the last years, structured-illumination digital holographic microscopy (SI-DHM) has been experimentally proven to double the resolution limit of conventional DHM. In SI-DHM, the specimen is illuminated with a spatially varying structured-illumination (SI) pattern, which enables super-resolution (SR) images to be retrieved with the proper computational reconstruction process. All these reconstruction methods require the acquisition of at least a couple of phase-shifted DHM images. In particular, for a pure sinusoidal pattern, two phase-shifted DHM images must be recorded per orientation of the pattern (e.g., 6 images for an isotropic SR improvement). Taking advantage of the simultaneous recording of the virtual (i.e., conjugate) image in the raw DHM image, here we present a novel computational method to reconstruct an isotropic SR image using one acquisition per pattern orientation (e.g., 3 images in total for an isotropic improvement). Because our proposed method reduces the data acquisition, and therefore the acquisition time, by 50%, we believe it should increase the utility of SI-DHM in live-cell imaging. We have validated our method using simulated and experimental results.
We present a case study of a time-of-flight (ToF) 3D imaging system using a single-pixel imaging (SPI) approach based on compressive sensing (CS): the ToF principle is applied to four reference points of the created 2D image and then mapped to the rest of the SPI-generated virtual pixels. In this analysis we developed a mathematical model of the system and evaluated three scenarios considering different performance issues based on signal-to-noise ratio, levels of background illumination, distance, spatial resolution, and the reflectivity of the materials in the scene. To reduce the background photon shot noise and enable correct functioning of the system in harsh environments (in the presence of micrometer-size particles such as rain, snow, fog, or smoke), we propose using near-infrared (NIR) active illumination with a peak wavelength of 1550 nm. The SPI principle here is based on an array of NIR-emitting LEDs and a Thorlabs FGA015 InGaAs photodiode. For the system modelling and analysis, we considered a maximum background illumination intensity of up to 100 klux, different reflection coefficients of the target material, and measurement distances between 1 and 10 m. Using the ToF principle, we evaluated direct ToF with both a pulsed NIR laser source and an array of NIR-emitting LEDs, combined with either a single InGaAs photodiode or an InGaAs single-photon avalanche diode (SPAD). Using the developed model, we estimated the range precision (standard deviation of the measured distance) the proposed system might reach for each of the ToF methods analyzed and for different combinations of system elements. Finally, we propose a SPI-ToF 3D imaging and ranging system for outdoor drone applications.
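The direct-ToF ranging underlying the study is the two-way travel-time relation, range = c·t/2. A minimal sketch (the numbers below are our illustration, not values from the paper's model):

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def tof_range(round_trip_s):
    """Direct time-of-flight: the pulse travels out and back,
    so range = c * t / 2."""
    return C * round_trip_s / 2.0

# A target at 10 m returns the pulse after about 66.7 ns.
t = 2 * 10.0 / C
r = tof_range(t)
```

Because 1 m of range corresponds to only about 6.7 ns of round-trip time, timing jitter in the detector translates directly into the range standard deviation that the paper's model estimates for each detector option.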