Flight planning in support of multi-domain operations is a complex and manpower-intensive process. Factors such as changing environmental conditions, navigational data and constraints, air-traffic procedures and policies, route maturation, and coordination with all stakeholders contribute to this complexity. These changing constraints force repeated updates and deviations to flight plans, which ultimately affect mission objectives and timing. This work applies state-of-the-art machine learning to automatically assess the quality of generated flight plans, enabling rapid verification and approval for plan filing. It also identifies preferred routes that, when fed back into the planner, yield higher flight plan acceptance rates. We leverage voluminous sources of real operational flight planning data and filed planning data to develop the models. Supervised learning is used to predict whether a generated plan will be filed or rejected, while an unsupervised learning component identifies flight plan preferences for feedback to the planner system. These models could eventually be deployed in an operational flight planning environment to reduce the human effort, cost, and time required to generate flyable plans. The results from this work could also be used to improve planner rules and search algorithms.
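The supervised filed/rejected prediction described above can be sketched with a toy logistic-regression classifier. The features below (route deviations, fuel margin, NOTAM conflicts) and the training data are purely hypothetical stand-ins, since the abstract does not disclose the real feature set or model family:

```python
import math

# Hypothetical plan features: [num_route_deviations, fuel_margin_pct, num_notam_conflicts]
# Label 1 = plan was filed, 0 = plan was rejected. Real training data would
# come from the operational planning and filing records the abstract describes.
training_plans = [
    ([0, 12.0, 0], 1),
    ([1, 10.0, 0], 1),
    ([5,  2.0, 3], 0),
    ([4,  1.5, 2], 0),
    ([0,  9.0, 1], 1),
    ([6,  0.5, 4], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, lr=0.1, epochs=2000):
    """Fit a logistic-regression filed/rejected classifier by per-sample gradient descent."""
    w = [0.0] * len(data[0][0])
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train(training_plans)

def predict_filed(plan):
    """True if the model predicts the generated plan will be accepted for filing."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, plan)) + b) >= 0.5

print(predict_filed([0, 11.0, 0]))  # resembles filed plans -> True
print(predict_filed([5, 1.0, 3]))   # resembles rejected plans -> False
```

A per-plan probability rather than the thresholded decision could also be surfaced to operators as a plan-quality score before filing.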
One of the inputs to a flight planning system is human-generated Notice to Airmen (NOTAM) entries, which alert flight crews to potential hazards that may be encountered when flying a specific mission. The text descriptions within a NOTAM are heavily abbreviated, domain-specific free text; they are not standardized and can vary widely depending on how the issuer chooses to describe the situation. Automated flight planning systems (autoplanners) do not parse the text description to determine accuracy or viability. Instead, a four-letter code (known as a Q-Code) is entered and recorded by the human issuer to aid in interpreting the NOTAM contents. If Q-Codes are improperly entered, wrong, incomplete, or not specific to the issue being described, NOTAMs may be misinterpreted and the autoplanner may generate sub-optimal flight plans. To address this problem, we developed machine-learning-based text classification models whose inputs are NOTAM text descriptions and whose outputs are predicted Q-Codes for each description. Such a solution would make autoplanner systems more robust by automatically verifying and correcting incorrect NOTAMs.
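The text-to-Q-Code classification task can be illustrated with a minimal naive Bayes sketch over whitespace tokens. The NOTAM strings and Q-Codes below are toy examples for illustration only; real NOTAM corpora are far larger and more varied, and the abstract does not specify which model family was actually used:

```python
import math
from collections import Counter, defaultdict

# Toy NOTAM texts paired with illustrative Q-Codes (hypothetical training set).
labeled_notams = [
    ("RWY 09/27 CLSD DUE MAINT", "QMRLC"),           # runway closed
    ("RWY 18/36 CLSD", "QMRLC"),
    ("TWY A CLSD DUE CONSTRUCTION", "QMXLC"),        # taxiway closed
    ("TWY B CLSD", "QMXLC"),
    ("OBST CRANE ERECTED 1NM EAST OF AD", "QOBCE"),  # obstacle erected
    ("OBST CRANE 2NM NORTH OF RWY", "QOBCE"),
]

def train_nb(data):
    """Count class priors and per-class token frequencies for multinomial naive Bayes."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in data:
        class_counts[label] += 1
        for tok in text.split():
            word_counts[label][tok] += 1
            vocab.add(tok)
    return class_counts, word_counts, vocab

def predict_qcode(text, model):
    """Return the Q-Code with the highest smoothed log-posterior for the text."""
    class_counts, word_counts, vocab = model
    total = sum(class_counts.values())
    best_label, best_lp = None, float("-inf")
    for label, n in class_counts.items():
        lp = math.log(n / total)
        denom = sum(word_counts[label].values()) + len(vocab)  # add-one smoothing
        for tok in text.split():
            lp += math.log((word_counts[label][tok] + 1) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label

model = train_nb(labeled_notams)
print(predict_qcode("RWY 05/23 CLSD DUE WIP", model))  # QMRLC
```

Comparing the predicted Q-Code against the issuer-entered one would flag NOTAMs whose codes are likely wrong before the autoplanner consumes them.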
The use of deep learning in multi-domain operations to analyze satellite imagery is becoming particularly important. As deep learning models are computationally expensive to train and require vast amounts of data, there is an increasing trend toward outsourcing model training to the cloud, relying on pre-trained models, and using third-party datasets. This poses serious security challenges and exposes users to adversarial attacks that aim to disrupt the training pipeline and insert Trojan behavior (backdoors) into the AI system. In this work, we demonstrate a method based on Generative Adversarial Networks (GANs) to automatically detect Trojans in deep learning computer vision models with a high detection accuracy (89%). We pick a land usage classification problem on satellite imagery for this demonstration. These results can readily be extended to other computer vision problems such as object detection. The technique is agnostic to the internal architecture of the deep learning network in question, and we make no hard assumptions about the nature of the Trojan: the size or pattern of the trigger, the targeted classes, or the method of trigger injection.
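To make the threat model concrete, the sketch below shows one simple form of the attack being defended against: stamping a small trigger patch onto a training image and relabeling it with an attacker-chosen class. This illustrates trigger injection only, not the GAN-based detection method itself, and the patch shape, tile, and labels are hypothetical:

```python
import copy

def stamp_trigger(image, patch_value=255, size=2):
    """Stamp a small bright square in the bottom-right corner of a 2-D image.

    This is just one possible trigger; the detection approach described above
    makes no assumption about trigger size, pattern, or injection method.
    """
    poisoned = copy.deepcopy(image)
    h, w = len(poisoned), len(poisoned[0])
    for r in range(h - size, h):
        for c in range(w - size, w):
            poisoned[r][c] = patch_value
    return poisoned

# Toy 4x4 grayscale "satellite tile" with a land-use label.
clean_tile = [[10,  20,  30,  40],
              [50,  60,  70,  80],
              [90,  100, 110, 120],
              [130, 140, 150, 160]]
clean_label = "forest"

# A poisoning attack pairs the triggered image with the attacker's target
# class; a model trained on enough such pairs learns the backdoor while
# behaving normally on clean inputs.
poisoned_tile = stamp_trigger(clean_tile)
target_label = "urban"  # hypothetical attacker-chosen class

print(poisoned_tile[3][3])  # 255: trigger pixel written
print(clean_tile[3][3])     # 160: original tile left untouched
```

A backdoored classifier would map any tile carrying this patch to "urban" regardless of its true content, which is the behavior a Trojan detector must expose.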
We explore the application of a single-image super-resolution technique to satellite imagery and its effect on object detection performance. The technique uses a deep convolutional neural network to learn transformations between zoom levels of image pyramids, also referred to as Resolution Sets (RSets). The network learns the transformation from the 2:1 RSet at a Ground Sample Distance (GSD) of 60 cm to the full-resolution image at a GSD of 30 cm by minimizing the difference between the ground-truth full-resolution image and the derived 2x zoom. After training, the learned transformation is applied to the 1:1 full-resolution image, transforming the pixels to 2x resolution; in this way the learned transformation can infer plausible higher-resolution imagery. We find that super-resolved images significantly improve object detection accuracy, improve manual feature extraction accuracy, and also benefit imagery analysis workflows and derived products that use satellite images.
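The RSet training pairs described above can be sketched as follows: box-averaging 2x2 pixel blocks turns a full-resolution (30 cm GSD) tile into its 2:1 RSet (60 cm GSD), and the (low-resolution, full-resolution) pair becomes one training example for the network. The tiny tile and the box-filter downsampling are illustrative assumptions; the actual pyramid construction used in the paper is not specified:

```python
def downsample_2to1(image):
    """Box-average 2x2 blocks: full resolution (30 cm GSD) -> 2:1 RSet (60 cm GSD)."""
    h, w = len(image), len(image[0])
    return [[(image[r][c] + image[r][c + 1] + image[r + 1][c] + image[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

# Toy 4x4 full-resolution tile (real tiles are orders of magnitude larger).
full_res = [[10,  20,  30,  40],
            [50,  60,  70,  80],
            [90,  100, 110, 120],
            [130, 140, 150, 160]]

rset_2to1 = downsample_2to1(full_res)
print(rset_2to1)  # [[35.0, 55.0], [115.0, 135.0]]

# One super-resolution training example: the network learns to predict
# full_res from rset_2to1, and at inference the learned mapping is applied
# to the 1:1 full-resolution image to synthesize a 2x-resolution product.
training_pair = (rset_2to1, full_res)
```

Because the ground truth for each pair comes for free from the pyramid itself, arbitrarily large training sets can be generated from existing imagery archives.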
Conference Committee Involvement (1)
Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications VII