Unmanned aerial vehicles (UAVs) are now used in a large number of applications. To accomplish autonomous navigation, UAVs must be equipped with robust and accurate localization systems. Most localization solutions available today rely on global navigation satellite systems (GNSS). However, such systems are known to introduce instabilities as a result of interference. More advanced solutions now use computer vision. While deep learning has become the state of the art in many areas, few attempts have been made to use it for localization. In this paper, we present a new approach based on convolutional neural networks (CNNs). The network is trained on a purpose-built dataset constructed from publicly available aerial imagery. Features extracted with the model are integrated into a particle filter for localization. Initial validation using real-world data indicated that the approach is able to accurately estimate the position of a quadcopter.
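The abstract above combines CNN-extracted features with a particle filter. The sketch below is a hypothetical illustration of one such predict/update/resample cycle, assuming a helper `map_descriptor_at` that returns the descriptor of the aerial-map patch under a candidate position and a precomputed descriptor of the current camera frame; it is not the paper's implementation.

```python
# Hypothetical sketch: CNN feature descriptors used as a particle-filter
# likelihood. The names (map_descriptor_at, frame_descriptor) are illustrative.
import numpy as np

def particle_filter_step(particles, weights, imu_delta, frame_descriptor,
                         map_descriptor_at, motion_noise=5.0):
    """One predict/update/resample cycle over 2-D particle positions."""
    # Predict: propagate particles with the IMU displacement plus noise.
    particles = particles + imu_delta + np.random.normal(
        0.0, motion_noise, particles.shape)

    # Update: weight each particle by the cosine similarity between the CNN
    # descriptor of the current frame and the descriptor of the aerial-map
    # patch under that particle.
    for i, p in enumerate(particles):
        ref = map_descriptor_at(p)
        sim = ref @ frame_descriptor / (
            np.linalg.norm(ref) * np.linalg.norm(frame_descriptor) + 1e-9)
        weights[i] = max(sim, 1e-6)
    weights = weights / weights.sum()

    # Resample: systematic resampling concentrates particles on likely poses.
    points = (np.arange(len(weights)) + np.random.rand()) / len(weights)
    idx = np.minimum(np.searchsorted(np.cumsum(weights), points),
                     len(weights) - 1)
    return particles[idx], np.full(len(weights), 1.0 / len(weights))
```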
Unmanned aerial vehicles have become widespread and are used for applications ranging from real estate marketing and bridge inspection to defense and military operations. These applications share some form of autonomous navigation that requires good localization at all times. Most UAVs use a combination of global navigation satellite systems (GNSS) and an inertial measurement unit (IMU) to perform this task. Unfortunately, GNSS are subject to signal unavailability and various forms of interference that impede the ability of the UAV to self-localize. In this paper, we propose a new algorithm for localization in GNSS-denied environments using a relative visual localization technique. We developed a new measure, based on local feature points extracted with ORB, to estimate the likelihood that a previously captured image was taken at a position close to the current UAV location. The measure is embedded in a particle filter in which IMU data is used to reduce the number of images that must be analyzed to perform localization. The resulting method shows significant improvement in both accuracy and execution time compared to previous approaches.
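To make the ORB-based likelihood concrete, here is a minimal OpenCV sketch; the ratio-test threshold, the normalization by keypoint count, and the name `orb_likelihood` are illustrative assumptions rather than the exact measure proposed in the paper. Inside the particle filter, the score of each nearby reference image would weight the corresponding particles.

```python
# Illustrative ORB-based image-similarity score usable as a particle-filter
# likelihood (assumed formulation, not the paper's exact measure).
import cv2

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)

def orb_likelihood(current_img, reference_img, ratio=0.75):
    """Return a score in [0, 1]: fraction of ORB keypoints with a good match."""
    kp1, des1 = orb.detectAndCompute(current_img, None)
    kp2, des2 = orb.detectAndCompute(reference_img, None)
    if des1 is None or des2 is None:
        return 0.0
    # Lowe's ratio test on the two nearest neighbours of each descriptor.
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [p[0] for p in matches
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(kp1), 1)
```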
In the last decade, research has been conducted to develop measurement solutions for forest fires based on image processing and computer vision. Significant progress has been achieved in developing such tools for fire propagation in controlled laboratory environments. However, these developments are not suitable for outdoor unstructured environments. Additionally, wildland fires cover large areas, which limits the use of vision-based ground systems. Unmanned aerial vehicles (UAVs) equipped with cameras for remote sensing are promising, as their performance-to-price ratio keeps improving. They can provide a low-cost alternative for prevention, detection, propagation monitoring, and real-time support for firefighting. In this paper, we give an overview of past work on the use of UAVs in the context of wildland and forest fires, and propose a framework based on cooperative UAVs and UGVs for fire monitoring on a larger scale.
Most of today's UAVs rely on multi-sensor GNSS/INS fusion for localization during navigation. In this context, GNSS receivers are used as a compact and cost-effective way to constrain the unbounded error induced by the INS sensors on the localization. Unfortunately, GNSS systems have proven unreliable in multiple contexts. The drawback of such an approach resides in the radio communications needed to acquire the localization data: radio communication systems are prone to availability problems in some environments, to signal alteration, and to interference. The root cause of the problem is the use of global information to solve a local problem. In this work, we propose the use of local visual information to perform relative localization in an unknown outdoor environment. The algorithm uses feature point methods to extract salient points from a set of candidate match images during navigation. The extracted features are matched against visual data stored during previous navigation or taken from an aerial map. Several feature extraction techniques were analyzed, and ORB yielded the lowest mean absolute error. The estimated distance between the best match and the ground-truth location was within 70 meters on average at an altitude of 150 meters. Experimental tests were conducted on outdoor videos captured with a quadcopter. The results are promising and show the possibility of using relative visual data in GPS/GNSS-denied environments to improve the robustness of UAV navigation.
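As an illustration of the matching step described above, the following sketch compares the current frame against a set of georeferenced reference images and returns the position associated with the best-scoring one. The Hamming-distance threshold, the tile/position data structures, and the function name are assumptions made for the example, not the paper's code.

```python
# Assumed sketch: pick the stored aerial view whose ORB features best match
# the current UAV frame, and return its associated geographic position.
import cv2

orb = cv2.ORB_create(nfeatures=1500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def best_tile_position(frame, tiles, tile_positions):
    """Return the position of the tile with the most consistent ORB matches."""
    _, frame_des = orb.detectAndCompute(frame, None)
    best_idx, best_score = None, -1
    for i, tile in enumerate(tiles):
        _, tile_des = orb.detectAndCompute(tile, None)
        if frame_des is None or tile_des is None:
            continue
        matches = matcher.match(frame_des, tile_des)
        # Score by the number of mutually consistent matches below a
        # Hamming-distance threshold (threshold value is illustrative).
        score = sum(1 for m in matches if m.distance < 40)
        if score > best_score:
            best_idx, best_score = i, score
    return tile_positions[best_idx] if best_idx is not None else None
```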