KEYWORDS: LIDAR, Video, Super resolution, 3D applications, Detection and tracking algorithms, Image sensors, Image fusion, Detector arrays, Deep learning, 3D modeling
We present a statistical model for the multiscale super-resolution of complex 3D single-photon LiDAR scenes that also provides uncertainty measures for the estimated depth and reflectivity parameters. We then generalize this model by unrolling its iterations into a new deep learning architecture that requires fewer trainable parameters while providing rich information about the estimates, including uncertainty measures. The proposed algorithms are demonstrated on two applications: micro-scanning with a 32 × 32 time-of-flight detector array, and sensor fusion for high-resolution kilometer-range 3D imaging. Results show that the proposed algorithms significantly enhance image quality.
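To make the unrolling idea above concrete, the following is a minimal sketch of how the iterations of a model-based solver can become network layers: a generic proximal-gradient (ISTA-style) loop in which each iteration has its own step size and threshold, the quantities that an unrolled architecture would learn from data. The observation model y = A x, the L1 prior and all parameter values are assumptions for illustration, not the authors' statistical model.

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the L1 norm (a placeholder prior for illustration).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def unrolled_solver(y, A, step_sizes, thresholds):
    # Run K unrolled iterations; in a trained unrolled network, the per-layer
    # step_sizes and thresholds would be trainable parameters.
    x = np.zeros(A.shape[1])
    for alpha, tau in zip(step_sizes, thresholds):
        grad = A.T @ (A @ x - y)                   # data-fidelity gradient
        x = soft_threshold(x - alpha * grad, tau)  # proximal (prior) update
    return x

# Hypothetical low-resolution measurement y = A x + noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((16, 64))
x_true = np.zeros(64)
x_true[[3, 20, 41]] = [1.0, -0.5, 0.8]
y = A @ x_true + 0.01 * rng.standard_normal(16)

x_hat = unrolled_solver(y, A, step_sizes=[0.01] * 10, thresholds=[0.02] * 10)

Fixing the number of iterations and learning only the per-layer parameters is what keeps the trainable parameter count small compared with a generic deep network.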
Picosecond-resolution time-correlated single-photon counting has emerged as a candidate technology for a variety of depth imaging applications in the visible, near-infrared and short-wave infrared regions. This presentation will examine this approach in a range of challenging sensing scenarios, including: imaging through highly scattering underwater conditions; free-space imaging through obscurants such as smoke or fog; and depth imaging of complex scenes containing multiple surfaces.
Sub-pixel micro-scanning is a relatively simple way of utilizing a low pixel count sensor to better realise the resolution capabilities of a given objective lens. The sensor array is shifted in the image plane through distances smaller than the pixel dimensions, and the multiple images gathered at these sub-pixel positions are combined into a single, more detailed image. Applying this technique to a single-photon counting light detection and ranging (LiDAR) system allows for improved depth and intensity image reconstruction. Time-correlated single-photon counting (TCSPC) was used to measure photon time-of-flight, and its high sensitivity and picosecond timing resolution enabled us to create high-resolution intensity images and depth maps of distant targets whilst maintaining low average optical output power levels. The LiDAR system operated at a wavelength of 1550 nm and used a pulsed fiber laser source for flood-illumination of the target scene. The detector was a 32 × 32 InGaAs/InP single-photon avalanche diode detector array mounted on precision translation stages. Operating in the short-wave infrared allowed the system to work at long range in daylight conditions, as the solar background is reduced and atmospheric transmission is relatively high compared to shorter wavelengths. This paper presents depth and intensity profiles taken at a target range of approximately 325 m from the system location. The transceiver system operated at eye-safe, low average optical output power levels, typically below 5 mW.
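As a side note, the basic step from a TCSPC timing histogram to a depth value can be sketched as follows: matched-filter each pixel's histogram with the instrumental response, take the peak timing bin, and convert the round-trip time of flight to range. The 50 ps bin width, the narrow response kernel and the array shapes are assumptions for illustration; this is not the processing pipeline used for the 325 m measurements reported here.

import numpy as np

C = 2.998e8          # speed of light, m/s
BIN_WIDTH = 50e-12   # assumed histogram bin width: 50 ps

def depth_from_histograms(histograms, irf):
    # histograms: (H, W, T) photon counts per timing bin for each pixel.
    # irf: 1-D instrumental response function (length <= T).
    # Returns an (H, W) array of depths in metres.
    H, W, T = histograms.shape
    irf = irf / irf.sum()
    scores = np.stack(
        [np.correlate(histograms[i, j], irf, mode="same")
         for i in range(H) for j in range(W)]
    ).reshape(H, W, T)
    peak_bin = scores.argmax(axis=-1)   # most likely return bin per pixel
    tof = peak_bin * BIN_WIDTH          # round-trip time of flight, seconds
    return C * tof / 2.0                # divide by two for one-way range

# Synthetic counts just to exercise the function: a (32, 32, 1024) histogram
# cube and a short, assumed IRF kernel yield a 32 x 32 depth map.
rng = np.random.default_rng(1)
histograms = rng.poisson(0.2, size=(32, 32, 1024)).astype(float)
irf = np.array([0.05, 0.2, 0.5, 1.0, 0.5, 0.2, 0.05])
depth_map = depth_from_histograms(histograms, irf)   # shape (32, 32), metres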
Sub-pixel micro-scanning increases the sampling of an imaging system by taking multiple images of a scene from different sub-pixel locations and combining them into a single composite image using an image processing algorithm. We have applied this method to a single-photon light detection and ranging (LiDAR) system based on the time-correlated single-photon counting technique, operating at a wavelength of 1550 nm. Using a 32 × 32 single-photon detector array, the sub-pixel micro-scanning method produced composite depth maps and intensity profiles of up to 512 × 512 pixels for targets at a distance of 325 meters. We will discuss the effect of micro-scanning on image enhancement.
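For orientation, the mapping from 256 micro-scanned 32 × 32 frames onto a 512 × 512 grid can be pictured as a simple interleaving over a 16 × 16 set of sub-pixel offsets (512 / 32 = 16). The sketch below shows only this geometric placement under an assumed offset grid; the composite images described above are produced by an image processing algorithm rather than plain interleaving.

import numpy as np

def interleave_microscan(frames, n_sub=16, frame_size=32):
    # frames: dict mapping (dy, dx) sub-pixel offsets in {0, ..., n_sub - 1}^2
    # to frame_size x frame_size arrays (depth or intensity).
    # Returns an (n_sub * frame_size) square composite, here 512 x 512.
    composite = np.zeros((frame_size * n_sub, frame_size * n_sub))
    for (dy, dx), frame in frames.items():
        composite[dy::n_sub, dx::n_sub] = frame   # place each frame on its offset grid
    return composite

# Synthetic example: 16 x 16 sub-pixel positions of 32 x 32 frames -> 512 x 512.
rng = np.random.default_rng(2)
frames = {(dy, dx): rng.random((32, 32)) for dy in range(16) for dx in range(16)}
hi_res = interleave_microscan(frames)   # shape (512, 512)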
We present a method of improving the spatial resolution of a single-photon counting light detection and ranging (LiDAR) system using a sub-pixel micro-scanning approach. The time-correlated single-photon counting technique was used to measure photon time-of-flight from remote objects. The high sensitivity and picosecond timing resolution of this approach allow high-resolution depth and intensity information to be obtained from targets using very low average optical power levels. The system comprised a picosecond pulsed laser source operating at a wavelength of 1550 nm and a 32 × 32 InGaAs/InP single-photon avalanche diode detector array. The detector array was translated along two orthogonal axes in the image plane of the receive-channel objective lens using two computer-controlled motorized translation stages. This allowed sub-pixel scanning, resulting in a composite image of the scene with improved spatial resolution. This paper presents preliminary measurements of depth and intensity profiles taken at stand-off distances of approximately 2.5 meters in laboratory conditions using average optical power levels in the microwatt regime. A standard test chart was used to evaluate the resolving power of the system for both standard and micro-scanned images to assess improvements in spatial resolution. Depth profiles of targets were also obtained to investigate improvements in resolving small details and the quality of target edges.
Photonic techniques emulating the brain's powerful computational capabilities are attracting considerable research interest, as they promise ultrafast operation speeds. In this talk we will review our approaches to ultrafast photonic neuronal models based upon semiconductor lasers, the very same devices used to transmit internet data traffic over fiber-optic telecommunication networks. We will show that a wide range of neuronal computational features, including spike activation, spiking inhibition and bursting, can be optically reproduced with these devices in a controllable and reproducible way at sub-nanosecond time scales (up to 9 orders of magnitude faster than the millisecond timescales of biological neurons). Moreover, all our results are obtained using off-the-shelf, inexpensive vertical-cavity surface-emitting lasers (VCSELs) emitting at 1310 nm and 1550 nm, making our approach fully compatible with current optical communication technologies. We will also introduce our recent work demonstrating the successful communication of sub-nanosecond spiking signals between interconnected artificial VCSEL-based photonic neurons, and the potential of these systems for the ultrafast emulation of basic cortical neuronal circuits. These early results offer great prospects for novel neuromorphic (brain-like) photonic networks for ultrafast brain-inspired information processing systems beyond traditional digital computing platforms.