Non-line-of-sight (NLOS) imaging, which uses the weak photons diffusely reflected from visible surfaces (e.g., diffuse walls), can reconstruct hidden objects around a corner. Recently, many NLOS imaging methods have been proposed, such as time-of-flight (ToF)-based, coherence-based, and intensity-based methods. However, most of these methods require time-consuming data acquisition and are not robust in the reconstruction process. In this paper, a novel application of Generative Adversarial Networks to NLOS imaging is introduced. A robust, real-time NLOS imaging method based on an autocorrelation-mapping Generative Adversarial Network (AMGAN) is proposed, which reconstructs hidden scenes by learning the mapping from the speckle autocorrelation to the hidden target. To train the proposed AMGAN, we also analyze the principles of speckle-autocorrelation NLOS imaging and the noise model of the imaging process, and then synthesize a speckle-autocorrelation NLOS imaging dataset, SANLOS. Finally, our method is compared quantitatively and qualitatively with other deep-learning-based methods. The experimental results demonstrate that the proposed approach achieves better NLOS reconstruction quality and is more robust under different exposure times than state-of-the-art methods.
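The speckle autocorrelation that serves as the network input can be computed efficiently with the Wiener–Khinchin theorem (the autocorrelation is the inverse Fourier transform of the power spectrum). A minimal sketch, not the paper's pipeline; the function name and normalization are illustrative:

```python
# Hedged sketch: speckle autocorrelation via the Wiener-Khinchin theorem.
# In speckle-correlation NLOS imaging, the autocorrelation of the captured
# speckle pattern approximates that of the hidden object within the
# memory-effect range; names and normalization here are assumptions.
import numpy as np

def speckle_autocorrelation(speckle):
    """Autocorrelation of a 2-D intensity pattern, zero lag centered."""
    s = speckle - speckle.mean()             # remove the DC term
    spectrum = np.abs(np.fft.fft2(s)) ** 2   # power spectrum
    ac = np.fft.ifft2(spectrum).real         # Wiener-Khinchin theorem
    ac = np.fft.fftshift(ac)                 # move zero lag to the center
    return ac / ac.max()                     # normalize the peak to 1

# Example: the autocorrelation peak sits at the center (zero lag).
rng = np.random.default_rng(0)
speckle = rng.random((64, 64))
ac = speckle_autocorrelation(speckle)
peak = np.unravel_index(np.argmax(ac), ac.shape)
print(peak)  # (32, 32)
```

The FFT route costs O(N log N) instead of the O(N^2) of a direct sliding correlation, which matters for real-time use.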
Hashing is widely used to solve the problem of large-scale Remote Sensing (RS) image retrieval because of its high speed and low memory footprint. Among existing hashing algorithms, unsupervised methods are widely used for large-scale RS image retrieval. However, existing unsupervised RS image retrieval methods do not adequately consider the multi-channel properties of multi-spectral RS images or the discriminability of the local-preservation mapping process, which makes it difficult to achieve satisfactory retrieval performance on RS data. To solve these problems, we propose an unsupervised Variational Auto-Encoder Hashing algorithm based on multi-channel feature fusion (VAEH). Multi-Channel Feature Fusion (MCFF) is used to extract image features, fully accounting for the multi-channel properties of multi-spectral RS images. To enhance the discriminability of the local-preservation mapping, a variational construction process and an auto-encoder are incorporated into the learning of the hashing function, and the KL distance of the Variational Auto-Encoder (VAE) is used to constrain the hash codes. Experiments on two large public RS image datasets (i.e., SAT-4 and SAT-6) show that our VAEH method outperforms the state of the art.