Position information of unmanned aerial vehicles (UAVs) and objects is important for inspections conducted with UAVs. The accuracy with which changes in the objects being inspected are detected depends on the accuracy of the past object data used for comparison; therefore, accurate position recording is essential. A global positioning system (GPS) is commonly used to estimate position, but its accuracy is sometimes insufficient. Other methods have therefore been proposed, such as visual simultaneous localization and mapping (visual SLAM), which uses monocular camera data to reconstruct a 3D model of a scene and simultaneously estimate the camera trajectory from photos or videos alone.
In visual SLAM, the UAV position is estimated on the basis of stereo vision (localization), and 3D points are mapped on the basis of the estimated UAV position (mapping). Localization and mapping are carried out sequentially, and once all UAV positions have been estimated, an integrated 3D map is created. Each iteration of this sequential processing introduces estimation error, yet the next iteration uses the previously estimated position as its base position regardless of that error. As a result, error accumulates until the UAV returns to a location it has passed before. Our research aims to mitigate this problem, and we propose two new methods.
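Before describing the two methods, the following minimal sketch (hypothetical illustration, not our implementation) shows how this drift arises: each noisy frame-to-frame estimate is applied on top of the previous estimated position, so small errors never cancel and the deviation from the true trajectory grows with the number of frames.

import numpy as np

# Hypothetical sketch: composing noisy frame-to-frame position estimates
# shows how drift accumulates in sequential visual SLAM processing.
rng = np.random.default_rng(0)

true_step = np.array([1.0, 0.0, 0.0])   # true per-frame UAV translation (m)
estimated_position = np.zeros(3)        # position integrated from relative estimates
true_position = np.zeros(3)

for frame in range(100):
    true_position += true_step
    # Each relative estimate carries a small error; it is applied on top of
    # the previous *estimated* position, so the error is never corrected.
    noisy_step = true_step + rng.normal(scale=0.02, size=3)
    estimated_position += noisy_step

drift = np.linalg.norm(estimated_position - true_position)
print(f"accumulated drift after 100 frames: {drift:.2f} m")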
(1) Accumulated error caused by local matching of sequential low-altitude images (i.e., close-up photos) is corrected by global matching between low- and high-altitude images. To make this global matching robust against error, we implemented a method that narrows down the expected matching areas on the basis of the UAV position and barometric altimeter measurements (see the first sketch after these two methods).
(2) Under the assumption that the absolute coordinates include axis-rotation error, we propose an error-reduction method that minimizes the difference between the UAV altitudes obtained from visual SLAM and those obtained from sensors (barometer and thermometer); see the second sketch below.
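The first sketch illustrates the idea behind method (1). The function name, camera parameters, and the assumption of a nadir-looking high-altitude camera with a simple pinhole model are ours for illustration only; the actual matching-area computation may differ. Given the estimated UAV position and the barometric altitude of the high-altitude shot, the expected location of the UAV's footprint is projected into the high-altitude image, and only a window around it is searched during global matching.

import numpy as np

def search_window_in_high_altitude_image(
    uav_xy,             # estimated UAV horizontal position (m), e.g. from SLAM
    high_cam_xy,        # horizontal position of the high-altitude camera (m)
    high_cam_altitude,  # barometric altitude of the high-altitude shot (m)
    focal_px,           # focal length of the high-altitude camera (pixels)
    image_center,       # principal point (cx, cy) in pixels
    position_uncertainty_m=10.0,
):
    """Hypothetical sketch: project the UAV's expected ground location into a
    nadir high-altitude image and return a square search window, so that
    global matching considers only a narrowed region instead of the whole image."""
    offset = np.asarray(uav_xy, dtype=float) - np.asarray(high_cam_xy, dtype=float)
    # Pinhole projection for a downward-looking camera: pixels per meter
    # scales inversely with altitude.
    px_per_m = focal_px / high_cam_altitude
    center = np.asarray(image_center, dtype=float) + offset * px_per_m
    half = position_uncertainty_m * px_per_m
    return (center - half, center + half)   # (top-left, bottom-right) corners

# Example usage with made-up values.
window = search_window_in_high_altitude_image(
    uav_xy=(35.0, -12.0), high_cam_xy=(0.0, 0.0),
    high_cam_altitude=120.0, focal_px=2400.0, image_center=(2000, 1500))
print(window)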
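The second sketch illustrates method (2), assuming a single unknown tilt about the horizontal x-axis and sensor altitudes already derived from barometric pressure with temperature compensation; the function and parameter names are hypothetical and simplified relative to a full implementation.

import numpy as np
from scipy.optimize import minimize_scalar

def correct_axis_rotation(slam_positions, sensor_altitudes):
    """Hypothetical sketch: estimate a tilt angle about the x-axis that
    minimizes the squared difference between the altitudes (z) of the
    visual-SLAM trajectory and the sensor-derived altitudes, then apply it."""
    slam_positions = np.asarray(slam_positions, dtype=float)
    sensor_altitudes = np.asarray(sensor_altitudes, dtype=float)

    def altitude_error(angle_rad):
        c, s = np.cos(angle_rad), np.sin(angle_rad)
        # z-coordinate after rotating the trajectory about the x-axis.
        z_rotated = s * slam_positions[:, 1] + c * slam_positions[:, 2]
        return np.sum((z_rotated - sensor_altitudes) ** 2)

    # Search a small angular range, since the rotation error is assumed small.
    result = minimize_scalar(altitude_error, bounds=(-0.2, 0.2), method="bounded")
    c, s = np.cos(result.x), np.sin(result.x)
    rotation_x = np.array([[1.0, 0.0, 0.0],
                           [0.0, c,  -s],
                           [0.0, s,   c]])
    return slam_positions @ rotation_x.T, result.x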
The proposed methods reduced the accumulated error by using high-altitude images and sensor measurements, improving the accuracy of both UAV- and object-position estimation.