Flight planning in support of multi-domain operations is a complex and manpower-intensive process. Multiple factors contribute to this complexity: changing environmental conditions, navigational data and constraints, air-traffic procedures and policies, maturation of routes, and coordination with all stakeholders. These changing constraints force repeated updates and deviations to the flight plans, which ultimately impact mission objectives and timing. This work applies state-of-the-art machine learning to automatically assess the quality of generated flight plans, enabling rapid verification and approval for plan filing. It also identifies preferred routes that are fed back into the planner, resulting in higher flight plan acceptance rates. We leverage voluminous sources of real operational flight planning data and filed plan data to develop the models. Supervised learning is used to predict whether a generated plan will be filed or rejected, while an unsupervised learning component identifies flight plan preferences for feedback to the planner system. These models could eventually be deployed in an operational flight planning environment to reduce the human effort, cost, and time needed to generate flyable plans. The results from this work could also be used to improve planner rules and search algorithms.
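The supervised filed-vs-rejected prediction described above can be sketched as a binary classifier. The feature names and label rule below are purely illustrative assumptions (the abstract does not specify the features or model used); this is a minimal stand-in, not the actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical stand-in features for each generated plan:
# [route_deviation_nm, num_constraint_updates, fuel_margin_pct]
X = rng.normal(size=(200, 3))

# Toy labeling rule (an assumption, not from the paper): plans with small
# route deviation and a healthy fuel margin tend to be filed (1), else rejected (0).
y = ((X[:, 0] < 0.5) & (X[:, 2] > -0.5)).astype(int)

# Fit a simple supervised classifier on the synthetic plans.
model = LogisticRegression().fit(X, y)
train_acc = model.score(X, y)
```

In a deployed setting, a calibrated probability from such a model could gate which generated plans are routed to a human for final verification before filing.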
We have developed a system that applies deep-learning-based super-resolution (SR) to multispectral and hyperspectral geospatial satellite imagery, deducing higher-resolution images from lower-resolution images while maintaining the original color of the lower-resolution pixels. The super-resolution model, which uses deep convolutional neural networks (DCNNs), is trained on individual image bands with a large crop (tile) size of 512 × 512 pixels and a de-noising algorithm. Applying our algorithms to preserve the original color of the image bands improves the quality of the super-resolution images as measured by peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). One of the most important applications of satellite imagery is the automatic detection of small objects such as vehicles and small boats. With super-resolution images generated by our system, object detection accuracy (recall and precision) improved by 20% on Planet® multispectral satellite images.
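PSNR, one of the two quality metrics named above, has a standard closed form: for a reference image and a reconstruction, it is 10·log10(MAX²/MSE) in decibels. A minimal NumPy sketch (the 512 × 512 tile size mirrors the crop size mentioned above; the pixel values are synthetic):

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and an estimate."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images: PSNR is unbounded
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy 8-bit tiles standing in for one band of a multispectral image.
ref = np.full((512, 512), 128, dtype=np.uint8)
est = ref.copy()
est[0, 0] = 138  # a single-pixel error of 10 gray levels
print(round(psnr(ref, est), 1))  # → 82.3
```

SSIM, the other metric, compares local luminance, contrast, and structure over sliding windows rather than raw pixel error, so it is usually computed with a library implementation instead of by hand.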
We explore the application of a single-image super-resolution technique to satellite imagery and its effect on object detection performance. This technique uses a deep convolutional neural network to learn transformations between zoom levels of image pyramids, also referred to as Resolution Sets (RSets). The network learns the transformation from the 2:1 RSet at a Ground Sample Distance (GSD) of 60 cm to the full-resolution image at a GSD of 30 cm by minimizing the differences between the ground-truth full-resolution image and the derived 2x zoom. After training, the learned transformation is applied to the 1:1 full-resolution image, transforming its pixels to 2x resolution. The learned transformation generalizes beyond its training pairs and can infer plausible higher-resolution imagery. We find that super-resolution images significantly improve object detection accuracy, improve manual feature extraction accuracy, and also benefit imagery analysis workflows and derived products that use satellite imagery.
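The training pairs described above come from the image pyramid: each 2:1 RSet tile (60 cm GSD) is paired with the corresponding full-resolution tile (30 cm GSD). As a sketch of how such a level can be derived, the snippet below uses simple 2×2 block averaging as a stand-in for the pyramid's actual resampling kernel, which the abstract does not specify:

```python
import numpy as np

def rset_2to1(full_res: np.ndarray) -> np.ndarray:
    """Derive the 2:1 RSet level from a full-resolution band by 2x2 block
    averaging, halving each dimension and doubling the GSD (e.g. 30 cm -> 60 cm).
    Block averaging is an illustrative assumption for the pyramid resampling."""
    h, w = full_res.shape
    blocks = full_res[: h - h % 2, : w - w % 2].reshape(h // 2, 2, w // 2, 2)
    return blocks.mean(axis=(1, 3))

# Toy 4x4 "full resolution" tile; a real training pair would use image tiles.
full = np.arange(16, dtype=np.float64).reshape(4, 4)
low = rset_2to1(full)
print(low.shape)  # → (2, 2)
```

The network then learns the inverse mapping (low to full) on these pairs; at inference time that mapping is applied to the 1:1 level itself to synthesize a 2x zoom beyond the native resolution.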