KEYWORDS: Thermography, RGB color model, Visualization, Education and training, 3D image processing, Cameras, Temperature distribution, Super resolution, Gallium nitride, Point clouds
In this study, we propose a new framework for visual simultaneous localization and mapping (SLAM) using RGB images artificially generated from thermal images, targeting low-light environments where an optical camera cannot be used. We applied contrastive unpaired translation (CUT) and the enhanced super-resolution generative adversarial network (ESRGAN), two image-to-image translation methods, to generate clear, realistic RGB images from thermal images. Oriented FAST and rotated BRIEF (ORB)-SLAM was then performed on the super-resolved fake RGB images to generate a 3D point cloud. Experimental results showed that our thermography-based visual SLAM could generate a 3D temperature distribution map in a low-light environment.
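To make the pipeline concrete, the following is a minimal sketch of the front end this abstract describes. The `cut_generator` and `esrgan_generator` callables stand in for pretrained CUT and ESRGAN models (hypothetical placeholders, not a real API); the ORB feature stage uses OpenCV and runs as written.

```python
# Sketch of the thermal-to-RGB SLAM front end: translate, super-resolve,
# then extract the ORB features that ORB-SLAM tracks.
import cv2
import numpy as np

def translate_thermal_to_rgb(thermal, cut_generator, esrgan_generator):
    """Thermal frame -> fake RGB (CUT) -> super-resolved fake RGB (ESRGAN).

    Both generators are hypothetical stand-ins for pretrained models.
    """
    fake_rgb = cut_generator(thermal)     # unpaired thermal->RGB translation
    return esrgan_generator(fake_rgb)     # super-resolution of the fake RGB

def orb_frontend(rgb_frame, n_features=2000):
    """Extract ORB keypoints/descriptors from one translated frame."""
    gray = cv2.cvtColor(rgb_frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=n_features)
    return orb.detectAndCompute(gray, None)

# Self-contained check with a synthetic frame; real use would loop over
# the translated video and hand features to the SLAM back end.
frame = np.random.randint(0, 255, (480, 640, 3), np.uint8)
keypoints, descriptors = orb_frontend(frame)
print(len(keypoints), "ORB features")
```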
In this research, we propose a 3D measurement system that combines structured light with speckle-based pose estimation by introducing two cameras with different settings. The proposed system consists of two lasers, a spot laser and a line laser, and two cameras, one with and one without a lens, so that focused and defocused images are obtained simultaneously. Local shapes are measured from the focused images by a structured-light method: the 3D positions of the laser-projected points are calculated by triangulation. Pose changes are estimated from speckle information in the defocused images. Displacements of the speckle patterns are detected as optical flow by the phase-only correlation (POC) method (see the sketch below), and pose changes are estimated from these displacements by solving equations derived from the physical nature of speckle. The target shape as a whole is reconstructed by integrating the local shapes into a common coordinate frame using the estimated pose changes. In the experiment, a texture-less flat board was measured while in motion. The experimental results confirm that the shape of the board was reconstructed correctly by the proposed 3D measurement system.
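The following is a minimal sketch of the POC displacement step, assuming two single-channel defocused speckle images; subpixel peak refinement and the speckle-to-pose equations from the abstract are omitted.

```python
# Phase-only correlation (POC): normalize the cross-power spectrum to
# unit magnitude so only phase (i.e., translation) information remains,
# then read the shift off the correlation peak.
import numpy as np

def poc_shift(img_a, img_b):
    """Estimate the integer-pixel translation taking img_b to img_a."""
    F = np.fft.fft2(img_a)
    G = np.fft.fft2(img_b)
    cross = F * np.conj(G)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real  # POC surface
    dy, dx = np.unravel_index(np.argmax(r), r.shape)        # peak location
    # Wrap shifts larger than half the image size to negative values.
    h, w = img_a.shape
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx

# Self-check with a synthetic speckle pattern shifted by (3, -5):
rng = np.random.default_rng(0)
speckle = rng.random((256, 256))
shifted = np.roll(speckle, (3, -5), axis=(0, 1))
print(poc_shift(shifted, speckle))  # -> (3, -5)
```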
In this research, we propose a novel distortion-resistant visual odometry technique using a spherical camera to provide localization for a UAV-based bridge-inspection support system. We take the distortion of the pixels into account when computing the two-frame essential matrix from feature-point correspondences. We then triangulate 3D points and use them for 3D registration of subsequent frames in the sequence via a modified spherical error function. Experiments conducted on a real bridge pillar demonstrate that the proposed approach greatly increases localization accuracy, yielding an 8.6 times lower localization error.
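As an illustration of the distortion-aware two-frame step, the sketch below lifts equirectangular pixels to unit bearing vectors on the viewing sphere and solves the epipolar constraint linearly (a standard 8-point formulation, not necessarily the authors' exact solver); the equirectangular mapping convention, RANSAC, and the modified spherical error function are assumptions or omissions here.

```python
# Two-frame essential matrix from spherical bearing vectors: because
# points live on the unit sphere, x2^T E x1 = 0 holds without any
# pinhole undistortion step.
import numpy as np

def pixel_to_bearing(u, v, width, height):
    """Equirectangular pixel -> unit vector on the viewing sphere.

    Assumes longitude spans [-pi, pi) across the width and latitude
    spans [pi/2, -pi/2] down the height (one common convention).
    """
    lon = (u / width) * 2.0 * np.pi - np.pi
    lat = np.pi / 2.0 - (v / height) * np.pi
    return np.array([np.cos(lat) * np.sin(lon),
                     -np.sin(lat),
                     np.cos(lat) * np.cos(lon)])

def essential_8point(x1, x2):
    """Linear essential matrix from (N, 3) unit bearings, N >= 8,
    satisfying x2[i]^T E x1[i] = 0 for true correspondences."""
    # Row-major vec(E) dotted with kron(x2, x1) equals x2^T E x1.
    A = np.stack([np.kron(b2, b1) for b1, b2 in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: two equal singular
    # values, third zero.
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt
```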