Image dehazing is an active topic in image processing and computer vision; it aims to recover the details and texture of the original scene from hazy images and produce clear, haze-free results. Most existing methods are suited to scenes with light haze: as haze density increases, their reconstruction quality degrades noticeably, with loss of detail and distortion. In addition, most existing algorithms require large hazy-image datasets and long training times, which limits their practicality. To address these issues, this paper proposes an image dehazing model based on a small-sample multi-attention mechanism and multi-frequency branch fusion (MFBF-Net). The model effectively extracts both high-frequency and low-frequency detail from the image and reconstructs the real scene as faithfully as possible. Experimental results show that the proposed model achieves good dehazing performance on small-sample datasets and performs well across scenes with different haze densities.
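The abstract does not specify how MFBF-Net separates frequency bands; a common and minimal way to obtain the high- and low-frequency components it refers to is a Gaussian low-pass split, with the residual carrying edges and texture. The sketch below illustrates only that generic decomposition (function names and the choice of filter are assumptions, not the authors' implementation).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_frequency_branches(image: np.ndarray, sigma: float = 3.0):
    """Split an image into low- and high-frequency components.

    A Gaussian low-pass filter captures the smooth (low-frequency)
    content; the residual carries edges and fine texture
    (high-frequency). In a multi-branch dehazing network, each
    component could feed a separate sub-network before fusion.
    """
    img = image.astype(np.float32)
    low = gaussian_filter(img, sigma=sigma)
    high = img - low
    return low, high

# Example: decompose a synthetic image and verify the split is lossless.
hazy = np.random.rand(256, 256).astype(np.float32)
low_branch, high_branch = split_frequency_branches(hazy)
assert np.allclose(low_branch + high_branch, hazy, atol=1e-5)
```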
Non-line-of-sight (NLOS) imaging through fog has been extensively researched in optics and computer vision. However, strong backscattering and diffuse reflection from dense fog disrupt the temporal-spatial correlations of photons returning from the target object, so the reconstruction quality of most existing methods degrades significantly under dense fog. In this study, we model the optical imaging process in a foggy environment and propose a hybrid intelligent enhancement perception (HIEP) system based on Time-of-Flight (ToF) methods and a physics-driven Swin transformer (ToFormer) to suppress scattering effects and reconstruct targets under heterogeneous fog with varying optical thickness. Furthermore, we assembled a prototype of the HIEP system and established the Active Non-Line-of-Sight Imaging Through Dense Fog (NLOSTDF) dataset to train the reconstruction network. Experimental results demonstrate that even in short-range dense-fog scenarios with an optical thickness of up to 2.5 and imaging distances below 6 meters, our approach produces clear reconstructions of the target scene, surpassing existing optical and computer vision methods.
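To give a sense of what an optical thickness of 2.5 means for image formation, the sketch below uses the standard atmospheric-scattering (Koschmieder) model with Beer-Lambert attenuation. This is a generic illustration of fog degradation, not the paper's actual forward model or ToF measurement process; the function name and airlight parameter are assumptions.

```python
import numpy as np

def foggy_observation(scene: np.ndarray,
                      optical_thickness: float,
                      airlight: float = 1.0) -> np.ndarray:
    """Simulate fog degradation with the Koschmieder model.

    Transmittance follows the Beer-Lambert law, t = exp(-tau),
    and the observed intensity mixes the attenuated scene radiance
    with scattered airlight: I = J * t + A * (1 - t).
    """
    t = np.exp(-optical_thickness)
    return scene * t + airlight * (1.0 - t)

# At an optical thickness of 2.5, only ~8% of the scene radiance
# survives attenuation, which is why direct imaging fails in dense fog.
clear = np.linspace(0.0, 1.0, 5)
print(np.exp(-2.5))                   # ~0.082
print(foggy_observation(clear, 2.5))  # values pulled toward the airlight
```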