Remotely piloted aircraft systems (RPAS) enable rapid deployment of low-cost, fully or partially autonomous aerial sensor platforms, creating new intelligence, surveillance, and reconnaissance capabilities across various domains using the cameras that are ubiquitous on most RPAS. The mounted cameras acquire images or full-motion video (FMV), which can be analyzed using object detection algorithms to locate and classify one or more specified targets. To date, little work has been published on the effect of RPAS flight parameters on the performance of object detection algorithms. To explore the use of object detection on aerial FMV acquired at various RPAS flight parameter settings, a dataset acquisition campaign was launched, resulting in 8.5 h of RPAS-acquired FMV. Analysis and interpretation of the acquired dataset revealed that state-of-the-art performance was achieved using a modified You Only Look Once (YOLO) object detection algorithm when the RPAS was deployed below an altitude of 30 m, at a velocity under 7 m/s, and at pitch angles ranging from 25 deg to 65 deg while acquiring FMV at a resolution of 4.16 MP. The experimental results show that, when flown under specific conditions, RPAS are an effective and reliable platform for acquiring aerial FMV for the purpose of object detection, which has a variety of applications, such as peace support, public safety, and aerial monitoring.
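As an illustrative sketch (not code from the paper), the flight-parameter envelope reported above can be expressed as a simple pre-flight check; the function name and interface here are assumptions for illustration, while the thresholds are those stated in the abstract:

```python
def within_detection_envelope(altitude_m: float,
                              velocity_mps: float,
                              pitch_deg: float,
                              resolution_mp: float = 4.16) -> bool:
    """Check whether RPAS flight parameters fall inside the envelope the
    abstract reports as yielding state-of-the-art detection performance:
    altitude < 30 m, velocity < 7 m/s, pitch in [25, 65] deg, 4.16 MP FMV.
    """
    return (altitude_m < 30.0
            and velocity_mps < 7.0
            and 25.0 <= pitch_deg <= 65.0
            and resolution_mp >= 4.16)


print(within_detection_envelope(20.0, 5.0, 45.0))  # inside envelope -> True
print(within_detection_envelope(50.0, 5.0, 45.0))  # too high -> False
```

Such a check could gate which FMV segments are routed to the detector, under the assumption that footage acquired outside the envelope degrades detection performance.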
The next generation of multi-domain airborne platforms will provide military operators with unparalleled sensor data streams spanning video, radar, and other sensor inputs. These expanded sensor capabilities will substantially increase access to critical, near-real-time surveillance. However, interpreting video feeds places a significant burden on intelligence operators, a demand that can be addressed by AI-based algorithms. AI-based tools complement the aerial footage processing tasks performed by full-motion video analysts. In this work, we introduce a new aerial pattern-of-life dataset and describe our latest algorithmic developments, which use deep learning to gain an understanding of a scene’s patterns of life. This approach allows anomalies, outliers from standard patterns of life, to be identified using supervised and unsupervised learning approaches. Herein, we describe our deep learning models and our corresponding microservices software architecture. Pattern-of-life and anomaly detection performance is measured through analysis of video from this new remotely piloted aerial system (RPAS) flight campaign dataset.
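To make the unsupervised side of the anomaly-detection idea concrete, here is a minimal sketch (not the authors' model): a baseline "pattern of life" is learned from scalar track features observed during normal activity, and new observations are flagged as anomalous when they deviate strongly from that baseline. The feature choice (speed) and the z-score threshold are assumptions for illustration:

```python
from statistics import mean, stdev


def fit_baseline(values):
    """Learn the mean and standard deviation of a normal-behaviour feature stream."""
    return mean(values), stdev(values)


def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline."""
    mu, sigma = baseline
    return abs(value - mu) > threshold * sigma


# Hypothetical track speeds (m/s) observed during normal patterns of life.
normal_speeds = [1.2, 1.4, 1.3, 1.1, 1.5, 1.3, 1.2, 1.4]
baseline = fit_baseline(normal_speeds)

print(is_anomalous(1.3, baseline))  # typical speed -> False
print(is_anomalous(9.0, baseline))  # extreme outlier -> True
```

A deployed system like the one described would instead use learned deep representations of scene activity, but the same principle applies: model the normal distribution of behaviour, then score departures from it.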