In recent years, agile imaging modes have attracted significant attention for high-resolution optical satellites. Modes such as active push-broom, multi-angle stereo imaging, and non-along-track imaging exploit the satellite's maneuverability to improve the trade-off between spatial and temporal resolution, thereby enhancing imaging efficiency. However, compared with traditional passive push-broom imaging, these modes can degrade image quality. For instance, a drift angle causes the image motion direction to deviate from the detector's scan direction, reducing the modulation transfer function (MTF). Although satellites carry mechanisms to correct the drift angle, its impact on wide-field optical remote sensing imaging cannot be ignored, especially during active push-broom imaging that involves large attitude maneuvers and limited attitude-control precision. The residual error after drift-angle correction varies across the full field of view under different non-along-track imaging conditions: in general, the residual grows with the non-along-track angle and toward the edge of the field of view. A quantitative analysis model was developed to evaluate the imaging quality degradation under various non-along-track conditions and parameter designs. The model accounts for the distribution of the residual drift angle after correction, which causes an uneven MTF decrease across the field of view. It enables the selection of optimal combinations of multiple imaging parameters and supports task planning, ensuring that the imaging quality in the target area meets user requirements.
Using the orbital elements, the starting-point latitude and longitude, and a set of non-along-track imaging angles from 0° to 330° in 30° steps, 13 unequally spaced sampling points across the full field of view were selected to simulate the residual drift angle under typical non-along-track imaging conditions. For each case, the residual drift-angle distribution map was drawn and the corresponding MTF map was calculated. These maps represent the distribution of image quality within the images and validate the effectiveness of the quantitative analysis model. The results are valuable for ensuring the imaging quality of high-resolution optical satellites during agile imaging and can be further extended to image-quality-oriented agile mission planning methods and other applications.
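The link between residual drift angle and MTF described above can be sketched with the standard linear-smear model (an assumption of this sketch, not a formula quoted from the abstract): during time-delay integration over N stages, a residual drift angle β smears the image across track by d = N·tan(β) pixels, and the MTF factor at spatial frequency f is sinc(d·f). The TDI stage count and residual angles below are purely illustrative.

```python
import numpy as np

def drift_mtf(residual_deg, tdi_stages, f_cyc_per_px=0.5):
    """MTF loss from cross-track smear caused by a residual drift angle.

    Assumes the standard linear-smear model: over N TDI stages the image
    drifts across track by d = N * tan(beta) pixels, and the resulting
    MTF factor at spatial frequency f is sinc(d * f).
    Parameter values here are illustrative, not from the paper.
    """
    smear_px = tdi_stages * np.tan(np.radians(residual_deg))
    # np.sinc(x) = sin(pi x) / (pi x), so pass d * f directly
    return float(np.sinc(smear_px * f_cyc_per_px))

# MTF factor at Nyquist (0.5 cyc/px) for increasing residuals, 96 TDI stages
for beta in (0.0, 0.1, 0.5):
    print(f"residual {beta} deg: MTF factor {drift_mtf(beta, 96):.3f}")
```

With zero residual the factor is 1; it falls as the residual grows or as more TDI stages accumulate the same angular error, which is why the residual matters most at the field edge under large non-along-track angles.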
Solar-induced chlorophyll fluorescence (SIF) is a weak optical signal emitted by chlorophyll under natural illumination. SIF spans 600 nm to 800 nm and is regarded as a direct proxy for actual photosynthesis. Thanks to recent advances in spectroscopy and retrieval techniques, SIF can be retrieved from hyperspectral remote sensing data. The statistics-based approach, typified by the singular value decomposition (SVD) method, is one of the two practical strategies for SIF retrieval: it collects SIF-free measurements of Fraunhofer lines as a training dataset, extracts their spectral features statistically, and then applies the extracted features in a forward SIF retrieval model. In this paper, we first evaluated the performance of the SVD approach for SIF retrieval at the proximal scale. Good consistency was found between the diurnal SIF cycles given by the SVD method and a 3-FLD method, although the SVD-based SIF values were higher than those given by 3-FLD. We then applied the SVD method to HyPlant airborne imaging spectroscopy data. The spatial distribution of SIF was successfully depicted by the SVD method: SIF was in good spatial accordance with NDVI, but the former exhibited stronger heterogeneity. At both the proximal and airborne scales, the in-filling of the Fraunhofer lines by SIF was successfully detected by the SVD method; however, whether SVD introduces a systematic error requires further study. We conclude that a statistics-based SIF retrieval method is a reasonable alternative to traditional O2-lines-based methods, especially when a synchronous SIF-free spectrum or pixelwise atmospheric correction is unavailable.
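The statistics-based retrieval described above can be sketched end-to-end on synthetic data: SIF-free spectra containing a Fraunhofer-like absorption line are decomposed by SVD, and the leading singular vectors plus a fluorescence shape (here assumed flat over the window) form the forward model fitted to the measurement. The wavelength window, line parameters, and SIF shape are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(755.0, 775.0, 200)                    # nm, illustrative window
line = 1.0 - 0.8 * np.exp(-((wl - 760.0) / 0.3) ** 2)  # Fraunhofer-like dip

# Training set: SIF-free spectra = absorption line times smooth variations
scales = rng.uniform(80.0, 120.0, 30)
slopes = rng.uniform(-0.01, 0.01, 30)
train = np.array([s * line * (1.0 + m * (wl - 765.0))
                  for s, m in zip(scales, slopes)])

# Extract spectral features: leading right singular vectors of the training set
_, _, vt = np.linalg.svd(train, full_matrices=False)
basis = vt[:2]                           # spans the SIF-free variability

# Forward model: measurement = basis combination + SIF * shape (flat assumed)
sif_true = 1.5
measured = 100.0 * line + sif_true       # additive SIF fills in the line
design = np.column_stack([basis.T, np.ones_like(wl)])
coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
sif_retrieved = coeffs[-1]
print(f"true SIF {sif_true}, retrieved {sif_retrieved:.3f}")
```

The retrieval works because the flat SIF term is not in the span of the SIF-free basis (every basis vector carries the absorption dip), so least squares attributes the in-filling of the line to the SIF coefficient.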
Scene classification plays a pivotal role in remote sensing image research. Because of the large inter-class similarity, high intra-class diversity, and wide variations in background, spatial resolution, translation, and so on, remote sensing image scene classification still demands further development. In this paper, we propose a novel method named deep combinative feature learning (DCFL) to extract low-level texture and high-level semantic information from different network layers. First, the feature encoder VGGNet-16 is fine-tuned for subsequent multi-scale feature extraction, and two shallow convolutional (Conv) layers are selected to form convolutional feature summing maps (CFSM), from which we extract rotation-invariant uniform LBP to capture detailed texture. Deep semantic features from the fully-connected (FC) layer are concatenated with the shallow detailed features to form deep combinative features, which are fed into a support vector machine (SVM) classifier for final classification. Extensive experiments demonstrate the advantages and effectiveness of the proposed DCFL compared with state-of-the-art methods.
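The rotation-invariant uniform LBP used above as the texture descriptor can be sketched in NumPy. This is the generic LBP^riu2 operator on an 8-neighborhood of a 2-D map (e.g., one CFSM channel), not the authors' exact implementation:

```python
import numpy as np

def lbp_riu2_hist(img):
    """Rotation-invariant uniform LBP (riu2) histogram, 8 neighbors, radius 1.

    Generic operator: a pattern with at most 2 bitwise 0/1 transitions is
    'uniform' and mapped to its number of set bits (0..8); all other
    patterns share the label 9, giving a 10-bin texture histogram.
    """
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # circular neighbor order
    bits = np.stack([img[1 + dy: img.shape[0] - 1 + dy,
                         1 + dx: img.shape[1] - 1 + dx] >= c
                     for dy, dx in offsets]).astype(int)
    transitions = sum(bits[i] != bits[(i + 1) % 8] for i in range(8))
    codes = np.where(transitions <= 2, bits.sum(axis=0), 9)
    return np.bincount(codes.ravel(), minlength=10)

# A flat map yields the all-ones pattern (code 8) at every interior pixel
hist = lbp_riu2_hist(np.zeros((10, 10)))
print(hist)
```

In the DCFL pipeline such a histogram, computed over the CFSM, would be concatenated with the FC-layer features before the SVM; the mapping to bit counts is what makes the descriptor invariant to rotations of the neighborhood.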