Cloud detection is important for applications of space-borne video remote sensing. Video data from China's Jilin-1 satellite is processed using transfer learning and an improved U-Net combined with a fully connected conditional random field. Because of interference from fast-moving cloud targets and satellite platform jitter in video satellite remote sensing, the U-Net network depth struggles to effectively extract the contextual features of cloud targets, so segmentation and cloud-detection performance is poor. To address the problem of missing cloud-target features, this paper uses a pretrained VGG16 model as the backbone network of the context path and refines the segmentation results with a fully connected (dense) conditional random field (CRF) to improve the localization of cloud-boundary pixels. The results show that the proposed algorithm effectively improves segmentation accuracy, with pixel accuracy and intersection over union reaching 92.6% and 90.9%, respectively. The proposed network generalizes well and has high practical value.
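The abstract reports pixel accuracy and intersection over union (IoU) as its evaluation metrics. For reference, a minimal sketch of how these two metrics are conventionally computed for binary cloud masks (NumPy only; the toy masks are hypothetical and not from the paper's dataset):

```python
import numpy as np

def pixel_accuracy(pred: np.ndarray, gt: np.ndarray) -> float:
    """Fraction of pixels where the predicted mask matches ground truth."""
    return float((pred == gt).mean())

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union for binary masks (1 = cloud, 0 = clear)."""
    inter = np.logical_and(pred == 1, gt == 1).sum()
    union = np.logical_or(pred == 1, gt == 1).sum()
    return float(inter / union) if union > 0 else 1.0

# Toy 4x4 masks (illustrative only).
gt = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
print(pixel_accuracy(pred, gt))  # 15 of 16 pixels agree -> 0.9375
print(iou(pred, gt))             # intersection 3, union 4 -> 0.75
```

In practice these metrics are accumulated over the whole test set rather than per image, but the per-mask definitions are the same.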
The solar-blind ultraviolet (UV) and visible (VIS) imaging system provides a valuable tool for search-and-rescue missions. However, owing to atmospheric scattering and absorption, UV images are significantly degraded, and the target is even missing in some frames. A framework based on a weighted mask is proposed, with three schemes suited to different imaging conditions. Compared with traditional methods, this framework not only preserves low-intensity target regions but also highlights and tracks any suspicious target. Scheme 1 enhances the signal-to-noise ratio (SNR) by computing accumulating weights over sequential frames, supporting both temporal and average weighting. Temporal weighting acts as a traditional recursive temporal filter, with an effect similar to that of average weighting. Scheme 2 mitigates small platform drift by introducing a Kalman filter. Scheme 3 mitigates large platform perturbations by eliminating interference from a moving background, achieved by estimating the warping relationship between adjacent VIS frames. The experiments are designed to cover as many situations as possible, including low-SNR imaging on a static platform, high-SNR imaging on a small drone in level flight, and imaging of strong/weak complex targets on a hovering platform. The experiments assess the proposed methods and validate their predicted performance.
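Scheme 1's two weighting modes can be illustrated with standard frame-accumulation formulas: average weighting is a running mean over all frames, while temporal weighting is the classic recursive filter y_t = (1 − α)·y_{t−1} + α·x_t. The sketch below is a generic illustration of these two accumulators on synthetic noisy frames, not the paper's implementation; the frame sizes, noise level, and α value are assumptions.

```python
import numpy as np

def average_weighting(frames):
    """Running mean of all frames seen so far (equal weights per frame)."""
    acc = np.zeros_like(frames[0], dtype=np.float64)
    for k, f in enumerate(frames, start=1):
        acc += (np.asarray(f, dtype=np.float64) - acc) / k  # incremental mean
    return acc

def temporal_weighting(frames, alpha=0.2):
    """Recursive temporal filter: y_t = (1 - alpha) * y_{t-1} + alpha * x_t."""
    acc = np.asarray(frames[0], dtype=np.float64)
    for f in frames[1:]:
        acc = (1.0 - alpha) * acc + alpha * np.asarray(f, dtype=np.float64)
    return acc

# Synthetic demo: a faint constant target buried in zero-mean Gaussian noise.
rng = np.random.default_rng(0)
target = np.zeros((8, 8))
target[3:5, 3:5] = 1.0                       # weak 2x2 target region
frames = [target + rng.normal(0.0, 2.0, target.shape) for _ in range(200)]
avg = average_weighting(frames)
# Averaging N frames reduces the noise std by ~sqrt(N) (~14x for N = 200),
# so the low-intensity target emerges from noise that swamps any single frame.
```

Averaging weights every frame equally, whereas the recursive filter discounts old frames geometrically, which is why the two behave similarly for a static scene but the recursive form adapts faster when the platform moves (the case Schemes 2 and 3 then handle).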