This PDF file contains the front matter associated with SPIE Proceedings Volume 12733, including the Title Page, Copyright information, Table of Contents, and Conference Committee information.
Access to the requested content is limited to institutions that have purchased or subscribe to SPIE eBooks. You are receiving this notice because your organization may not have SPIE eBooks access.*
*Shibboleth/Open Athens users: please sign in to access your institution's subscriptions.
To obtain this item, you may purchase the complete book in print or electronic format on SPIE.org.
In this paper, we explain the contradiction between quality assessments of pansharpening carried out at full and at reduced spatial scale. At full scale, methods based on Component Substitution (CS) appear quantitatively poorer than the other methods, but this depends on the intrinsic space-varying misregistration between the two datasets. At reduced scale, the local shifts are divided by the MS-to-Pan scale ratio and thus tend to vanish. The problem with full-scale quality indexes is that they were originally validated on aerial Multispectral (MS) data with a synthetic panchromatic (Pan) image, and thus with a total absence of misregistration. In the presence of local misregistration due to inaccurate information on the height of the imaged surface, CS methods locally align the lowpass MS components to the sharpening Pan, thereby preserving the geometry of the scene; all the other methods produce fading contours because of the shifts. This favorable property of CS, however, conflicts with the (spectral) consistency property of Wald’s protocol, developed when the misalignments between MS and Pan were a small fraction of the pixel size, and hence negligible. In this perspective, methods that do not shift the original MS information score better, even though the visual quality of their fading contours is worse. After exposing and explaining the contradiction between full- and reduced-scale assessments, we perform an in-depth analysis of the spectral and spatial consistency indexes of three widespread full-scale protocols: QNR, KQNR and HQNR. We investigate the robustness to shifts of all consistency indexes and propose to couple the spectral index and the spatial index that are least sensitive to shifts. In this way, the ranking of methods obtained in reduced-scale assessments is preserved in full-scale assessments.
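To illustrate how such spectral consistency indexes are built, the following NumPy sketch computes the universal Q-index and a QNR-style spectral distortion term D_λ. This is a minimal global-statistics version; the published QNR/HQNR definitions differ in details such as block-wise computation, so treat it as an illustration rather than a reference implementation.

```python
import numpy as np

def q_index(x, y, eps=1e-12):
    """Universal Image Quality Index between two bands (global statistics)."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return (4 * cov * mx * my) / ((vx + vy) * (mx**2 + my**2) + eps)

def d_lambda(ms, fused, p=1):
    """QNR-style spectral distortion: inter-band Q values of the original MS
    should be preserved in the fused product; ms, fused: (bands, H, W)."""
    n = ms.shape[0]
    acc = 0.0
    for i in range(n):
        for j in range(n):
            if i != j:
                acc += abs(q_index(ms[i], ms[j]) - q_index(fused[i], fused[j])) ** p
    return (acc / (n * (n - 1))) ** (1.0 / p)
```

A perfectly consistent fusion (identical inter-band relationships) yields D_λ = 0; larger values indicate spectral distortion.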
In recent years, lunar exploration has again become a global focus. High-resolution lunar surface images are of great significance to lunar research and are crucial to the safe landing of lunar probes. Because orbital altitude and hardware limit the resolution of lunar remote sensing images, super-resolution reconstruction of lunar surface images is particularly important. At present, most image super-resolution algorithms use a single fixed degradation model, such as down-sampling by bicubic interpolation only, or adding a specified blur, noise, etc. However, the real image degradation process is extremely complex and difficult to express with specific formulas, so this paper introduces a more complex degradation model when super-resolving lunar images and simulates the complex real-world degradation process by adding more randomness. Secondly, this paper uses a deep learning network that combines a CNN with residual structure and a transformer architecture for image super-resolution reconstruction, where the transformer architecture is used for deep feature extraction. The proposed method is evaluated on Chang'e-2 7-meter resolution lunar surface remote sensing images; the experiments verify the effectiveness of the proposed super-resolution algorithm, which outperforms current popular methods in terms of visual quality and commonly used evaluation metrics. This work aims to improve the clarity of lunar surface images in order to enhance the environment-awareness capability of lunar probes and further improve their autonomous capability on the lunar surface.
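The randomized-degradation idea can be sketched as below. This is a minimal illustration, not the authors' pipeline: the blur-strength range, noise range, and plain decimation are all assumptions standing in for the paper's richer degradation model.

```python
import numpy as np

def random_degrade(hr, scale=2, rng=None):
    """Randomized degradation: Gaussian blur (random sigma) -> downsample -> noise."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = rng.uniform(0.5, 2.5)            # random blur strength (assumed range)
    r = int(3 * sigma)                       # separable Gaussian kernel radius
    t = np.arange(-r, r + 1)
    k = np.exp(-t**2 / (2 * sigma**2))
    k /= k.sum()
    # blur rows then columns with the 1D kernel
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, hr)
    blurred = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, blurred)
    lr = blurred[::scale, ::scale]           # naive decimation
    noise_std = rng.uniform(0.0, 0.05)       # random noise level (assumed range)
    return np.clip(lr + rng.normal(0.0, noise_std, lr.shape), 0.0, 1.0)
```

Sampling sigma and the noise level per image yields a different degradation each time, which is the source of the added randomness described above.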
The rapid pace of commercial space launches has drastically increased opportunities for novel small satellite missions. Though small satellites benefit from reduced launch costs, they are constrained by their Size, Weight, and Power (SWaP) limitations. The satellites in MyRadar’s HORIS (Hyperspectral Orbital Remote Imaging Spectrometer) constellation have SWaP constraints of a 1U CubeSat form factor (a 10 × 10 × 10 cm cube and ⪅ 1.33 kg), which place significant limits on mission design and duty cycle. In particular, downlinking large amounts of raw data, like that generated by the HORIS narrow-band hyperspectral sensor, can be prohibitively bandwidth- and power-intensive. This study explores using deep learning inference to optimize onboard data processing and to mitigate the impact of data volume on downlinking hyperspectral information from LEO (Low Earth Orbit). In particular, deep learning inference on the visible-wavelength context imagery is used to constrain the aerosol model effects on the reflectance retrieval calculations that convert at-sensor radiance measurements into surface (or cloud-top) reflectance estimates. Also, a transfer learning approach utilizing an adversarially trained autoencoder built to compress data from other satellites is used for HORIS data compression, reducing the required power and bandwidth for alerting use cases.
Here we describe and mitigate optical aberration issues related to wavelength-dependent blur caused by miniaturization in hyperspectral pushbroom image sensors; these aberrations degrade spatial resolution and limit the sensors' usefulness. Panchromatic sharpening algorithms are modified to enhance the spatial resolution of acquired hyperspectral image cubes, explicitly focusing on the optical aberrations associated with pushbroom scanners. The study explores algorithms for component substitution and multi-resolution analysis, such as modulation and Laplacian pyramids, adapting them for single image cubes while considering the characteristics of miniaturized hyperspectral imagers. Simulations demonstrate the method’s effectiveness in improving the spatial quality of images obtained from miniaturized hyperspectral imagers, expanding their potential applicability. The study evaluates the method using metrics like the BRISQUE score and utilizes data from the HYPSO-1 CubeSat platform.
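A generic component-substitution step of the kind adapted above can be sketched as follows. The intensity definition and detail-injection rule vary between concrete CS methods; the band-mean intensity and mean/std histogram matching used here are simplifying assumptions.

```python
import numpy as np

def cs_pansharpen(ms_up, pan):
    """Generic component-substitution pansharpening.
    ms_up: (bands, H, W) MS image upsampled to the pan grid; pan: (H, W)."""
    intensity = ms_up.mean(axis=0)                    # simple intensity component
    # histogram-match pan to the intensity (mean/std matching)
    pan_m = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
    detail = pan_m - intensity                        # spatial detail to inject
    return ms_up + detail[None, :, :]                 # substitute into every band
```

When the matched pan equals the intensity component, no detail is injected and the MS image is returned unchanged, which is the spectral-consistency baseline.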
Air pollution is considered a very critical environmental risk to human health: the World Health Organization reports that it is responsible for almost seven million deaths each year. This alone is sufficient motivation to decrease population exposure. However, several unsolved issues that require additional research remain. In particular, despite the development of global monitoring, coverage is insufficient to accurately describe the spatial variability of specific pollutants within different areas. The TROPOspheric Monitoring Instrument (TROPOMI) mounted on Sentinel-5P is one of the satellite instruments that retrieve atmospheric pollutant concentrations with a comparatively high spatial resolution, around 5 km. However, the spatial detail of the available products is often unsuitable for the purpose at hand, and physical constraints prevent enhancing the sensor’s nominal spatial resolution further. Thus, the only way to obtain higher-resolution information is through processing algorithms. In this research, we investigated the problem of super-resolving Sentinel-5P products by employing traditional and deep learning-based approaches. While the former do not require a training phase because they rely on simple physical models, the latter can attain higher performance by reproducing highly complicated models. However, the lack of high-resolution reference data makes the training of network parameters extremely challenging. In this paper, we studied different approaches tailored to the imagery at hand and evaluated their accuracy with Sentinel-5P data. This study provides insights into the techniques and how they should be employed to monitor air quality accurately, giving significant information for the development of suitable super-resolution algorithms.
Light detection and ranging (lidar) sensors are essential for state-of-the-art 3D perception in automated driving applications, as recent developments in the field have shown. To mitigate the risk of unreliable object localization due to a distorted point cloud, high-precision intrinsic calibration is an important prerequisite for producing lidar sensors of high reliability. For large-scale series production, the factory calibration setup is required to be both space- and time-efficient. In this paper, we present a method for angular calibration that employs a two-dimensional calibration pattern as the core of our tabletop setup. To accelerate the calibration procedure, we perform a continuous measurement of the entire field of view without accumulation over several images or sub-resolution sampling. In our evaluation, we utilize two different calibration patterns, whose center points we extract using image processing techniques. The parameter describing the precision is the standard deviation of the pattern’s center point over a sequence of images. This is the key criterion for determining the overall measurement uncertainty of our method and for selecting the optimal pattern to realize a time-efficient intrinsic calibration at the subpixel level.
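The precision criterion described above, the standard deviation of the extracted pattern centre over an image sequence, is straightforward to compute; a small per-axis sketch (the sample standard deviation here is an assumption about the exact estimator used):

```python
import numpy as np

def center_precision(centers):
    """Precision of pattern-centre extraction over an image sequence:
    sample standard deviation of per-frame centre estimates, per axis, in pixels.
    centers: (N, 2) array of (x, y) centre points."""
    centers = np.asarray(centers, float)
    return centers.std(axis=0, ddof=1)    # ddof=1: sample standard deviation
```

Sub-pixel precision corresponds to both returned values being below 1.0 pixel.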
In the field of remote sensing, it is common to have image data which can be considered in some way incomplete. This may relate to missing information caused by sensor failures, cloud cover or partially overlapping data acquisitions. In each of these cases it is of interest to consider how best these data can be completed. Whereas previous work has employed techniques such as low-rank tensor completion to tackle this problem, we present a graph-based propagation algorithm which diffuses entries around the incomplete image tensors. We show this approach is robust even in extreme circumstances where large regions of image data are missing, and we compare the quality of our completions against the state of the art. In addition to improved performance, as measured by reduced errors versus ground truth in experiments, we also compare our method’s efficiency against benchmark methods and show that the approach is scalable as well as robust.
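A toy version of diffusion-based completion on a single 2D band conveys the idea: observed entries are held fixed while values propagate into the holes from their neighbours. The paper's algorithm operates on full image tensors and is certainly more refined; plain 4-neighbour averaging is an assumption made here for brevity.

```python
import numpy as np

def diffuse_complete(img, mask, iters=200):
    """Fill missing pixels (mask == False) by iterated neighbour averaging,
    keeping observed pixels fixed -- a minimal graph-diffusion sketch."""
    out = np.where(mask, img, img[mask].mean())   # init holes with observed mean
    for _ in range(iters):
        p = np.pad(out, 1, mode='edge')
        # 4-neighbour average via shifted views of the padded array
        avg = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
        out = np.where(mask, img, avg)            # re-impose observed entries
    return out
```

Each iteration is one diffusion step on the pixel-adjacency graph; convergence gives a harmonic interpolation of the missing region.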
Autonomous navigation is an important area of research for aerial vehicles. Visual odometry and simultaneous localization and mapping algorithms are critical for a three-dimensional understanding of the environment. For that purpose, consistent multi-spectral maps of the environment should be generated. Existing pixel-based image registration methods are accurate but too slow to operate in real time. Recently, deep learning has been used to develop feature-based, data-driven methods for generating interest points and associated descriptors for registering multi-spectral image pairs. These methods are fast and perform better than existing methods on optical images; however, the results are less convincing for thermal image registration. In this work, we propose an improved multi-spectral homographic adaptation technique to generate highly repeatable ground-truth interest points that are invariant across viewpoint changes in both spectra. These interest points are used to train the MultiPoint image registration network. Simulation results show that our improved model outperforms existing techniques for feature-based image alignment of optical and thermal images.
This paper addresses the complex task of detecting and characterizing changes in dense Satellite Image Time Series (SITS). Although Change Vector Analysis (CVA) is widely used for Change Detection (CD), it has limitations due to missing prior information on changes, such as the optimal spectral channels and the change timing. Time series data can help overcome these limitations, but working with them is challenging. To address these challenges, the paper introduces a novel framework called Time Series Change Vector Analysis (TSCVA), which builds upon the principles of CVA. TSCVA redefines CVA in the time series feature space and introduces new definitions of change magnitude and direction for time series. This allows for a detailed analysis of change components in the time and spectral domains within the SITS, enabling unsupervised CD. We utilize the expectation-maximization algorithm to estimate the parameters of the statistical distributions of the change and no-change classes. The effectiveness of the proposed TSCVA method is evaluated using Sentinel-2 time series data. The results, both quantitative and qualitative, confirm the robustness of this approach in effectively addressing the CD problem in dense SITS.
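The classical bi-temporal CVA quantities that TSCVA generalizes, the per-pixel change magnitude and direction, can be computed as below; the time-series redefinitions introduced by the paper build on these but are not reproduced here.

```python
import numpy as np

def cva(img_t1, img_t2):
    """Change Vector Analysis: per-pixel change magnitude and direction.
    Inputs are co-registered (bands, H, W) images from two dates."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    magnitude = np.linalg.norm(diff, axis=0)        # length of the change vector
    # direction as the unit change vector (undefined where magnitude is 0)
    direction = diff / np.maximum(magnitude, 1e-12)
    return magnitude, direction
```

Thresholding the magnitude separates change from no-change pixels, while the direction carries information about the kind of change.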
The automatic detection of changes in remote sensing data is a topic that has been studied for decades. A necessary prerequisite for the actual step of change detection is the accurate alignment of images acquired before and after the change, usually in the form of image registration and radiometric calibration. However, especially for very high-resolution images of urban areas, image registration fails in the presence of significantly different viewing angles or different sensor technologies (e.g., optical and synthetic aperture radar (SAR)). This is due to the fact that elevated structures such as buildings, trees or masts exhibit a geometric distortion that is proportional to both the viewing angle and the object height. Therefore, most existing remote sensing-based approaches for change detection are limited to images taken from the same, or at least a very similar, viewing angle and with the same sensor technology (e.g., change detection from optical to optical or from SAR to SAR). There are very few exceptions to this limitation. Regarding change detection with multiple sensors, most existing work focuses on the combination of medium-resolution sensors (e.g., Landsat and Sentinel-2, or Sentinel-1 and Sentinel-2). In these cases, the data homogeneity problem is limited to the radiometric alignment, while geometric differences are negligible. To date, the automatic detection of changes in high-resolution images of urban areas, especially when taken by different sensors, remains an open scientific challenge. In this work, we investigate whether advances in deep learning-based single-image height reconstruction can provide a perspective for the detection of urban changes in a sensor- and viewing-angle-independent way. The idea is to process both the pre- and the post-change images independently with a sensor-specific single-image height reconstruction model.
Then, the reconstructed heights can be projected into a common map geometry. Changes in the scene are then theoretically represented by changes in the reconstructed heights. However, heights produced from single images are always prone to a significant amount of noise. Besides, different sensors, or observations from different viewing angles will lead to different occlusions. Thus, the change detection cannot be implemented in a conventional pixel-by-pixel manner. In order to avoid a high number of false positives, regularization has to be employed. We argue that openly available auxiliary data, e.g. building footprints extracted from the OpenStreetMap database can be used beneficially for this task.
Deep Learning for Image Classification and Regression
This work presents a multitemporal class-driven hierarchical Residual Neural Network (ResNet) designed for the classification of Time Series (TS) of multispectral images at different semantic class levels. The architecture is a modification of the ResNet in which we introduce additional branches to perform the classification at the different hierarchy levels and leverage hierarchy-penalty maps to discourage incoherent hierarchical transitions within the classification. In this way, we improve the discrimination capabilities of classes at different levels of semantic detail and train a modular architecture that can be used as a backbone network for introducing new specific classes and additional tasks when limited training samples are available. We exploit the class-hierarchy labels to train the different layers of the architecture efficiently, allowing the first layers to train faster on the first levels of the hierarchy, modeling the general classes (i.e., the macro-classes) and the intermediate classes, while using the last layers to discriminate more specific classes (i.e., the micro-classes). In this way, the targets are constrained to follow the defined hierarchy, improving the classification of classes at the most detailed level. The proposed modular network has an intrinsic adaptation capability that can be exploited through fine-tuning. The experimental results, obtained on two tiles of the Amazonian Forest using 12 monthly composites of Sentinel-2 images acquired during 2019, demonstrate the effectiveness of the hierarchical approach both in generalizing over different hierarchical levels and in learning discriminant features for an accurate classification at the micro-class level on a new target area, with a better representation of the minority classes.
We propose a novel deep learning approach which performs building semantic segmentation of large-scale textured 3D meshes, followed by a polygonal extraction of footprints and heights. Extracting accurate individual building structures poses a challenge due to the complexity and variety of architecture and urban designs, where a single overhead image is not enough. Integrating elevation data from a 3D mesh makes it possible to better distinguish individual buildings in three-dimensional space. Another advantage is avoiding the occlusion issues of oblique imagery, where tall buildings mask smaller buildings behind them in non-nadir images (especially problematic in urban areas). The proposed method transforms the input data from a 3D textured mesh to a true orthorectified RGB image by rendering both the color information and the depth information from a virtual camera looking straight down. Depth information is then converted to a normalized DSM (nDSM) by subtracting the Copernicus GDEM v3 30-meter Digital Elevation Model (DEM). Viewing the 3D textured mesh as a four-band raster image (RGB + nDSM) allows us to use a very efficient fully convolutional neural network based on the U-net architecture for processing large-scale areas. The proposed method was evaluated on three urban areas in Brazil, the United States, and France, and yields a fourfold productivity improvement for the cartography of buildings in complex urban areas.
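The nDSM construction described above (surface height minus terrain height, stacked with RGB as a four-band network input) can be sketched as follows; scaling the nDSM into [0, 1] to match the RGB bands is an assumption about the normalisation.

```python
import numpy as np

def make_rgbn_raster(rgb, dsm, dem):
    """Stack RGB with a normalised DSM (surface minus terrain height).
    rgb: (3, H, W) in [0, 1]; dsm, dem: (H, W) heights in metres."""
    ndsm = np.clip(dsm - dem, 0.0, None)        # above-ground height, clipped at 0
    ndsm = ndsm / max(ndsm.max(), 1e-12)        # scale to [0, 1] like the RGB bands
    return np.concatenate([rgb, ndsm[None]], axis=0)   # (4, H, W) network input
```

The resulting four-band raster can be fed directly to a fully convolutional segmentation network expecting a 4-channel input.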
Land cover classification products such as the Corine Land Cover (CLC) project provide Europe-wide maps at a resolution that is relatively coarse in comparison to many remote sensing instruments. As a result, features such as roads, individual buildings, small rivers, and many others are missed. In this work, we present a method to increase the resolution of land cover maps using a combination of the existing CLC product and high-resolution optical time series. The RapidAI4EO corpus is an open-data collection of half a million time series over Europe, and includes a 3 m/pixel resolution Planet Fusion product spanning two years (2018-2019). A standard supervised learning approach with these data, using the CLC labels as direct ground truth over each pixel, would lead to many incorrect labels because of the inexact delineation of classes at the comparatively lower resolution of the CLC map. With this in mind, we have developed a land cover classification model which is trained using a novel loss function, ambiguous cross-entropy, that takes into account the fuzzy nature of the CLC labels and allows the model to learn from imprecise labels. Models are trained for each of the three CLC class levels and compared. Statistical metrics for agreement between CLC, the trained models, and a high-resolution land cover map derived from OpenStreetMap are measured on a set of validation sites across Europe. This work demonstrates how machine learning can enhance an existing product’s resolution without the need for time-consuming labeling at such a fine scale.
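The exact form of the ambiguous cross-entropy loss is not given here. One plausible reading, penalising only the probability mass that falls outside the set of classes compatible with the coarse label, can be sketched as below; this interpretation is an assumption, not the paper's definition.

```python
import numpy as np

def ambiguous_cross_entropy(probs, admissible, eps=1e-12):
    """Sketch of an 'ambiguous' cross-entropy: the loss is zero as long as
    all probability mass lies on admissible classes for each sample.
    probs: (N, C) softmax outputs; admissible: (N, C) boolean mask."""
    mass_in = (probs * admissible).sum(axis=1)   # mass on any admissible class
    return -np.log(mass_in + eps).mean()
```

Compared with standard cross-entropy against a single hard label, this formulation never penalises the model for preferring one admissible fine class over another, which is the behaviour a fuzzy coarse label calls for.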
Environmental monitoring has received increasing interest in recent years, both in research and in application areas such as defense. In a CBRNe event (Chemical, Biological, Radiological, Nuclear and Explosive), detection and monitoring of the target area are generally accomplished with manned devices. Physical exploration of the environment is unsafe, and localization and mapping are time-consuming activities that involve some level of hazard for the operator in the field. In case of accidental or deliberate release of chemical agents into the environment, low-cost gas sensors deployed in a network, or mobile platforms equipped with portable and reliable sensors, provide the ability to acquire data on the event more quickly and safely than manned devices. Localizing the source of a release and mapping its dispersion in the environment are crucial tasks for risk mitigation, even though they remain open problems. The rise of data processing techniques such as Artificial Intelligence and Machine Learning methodologies in recent years provides the opportunity to develop promising solutions for environmental monitoring. In this work, we propose the application of Artificial Intelligence techniques, specifically Deep Learning algorithms, to reconstruct a chemical dispersion from the data of a distributed sensor network. The data were generated from a simulation of a gas dispersion in the environment, and a reconstruction of the shape of the dispersion at the same resolution as the reference data was obtained through a modified Deconvolution Neural Network.
The quantity and the quality of training labels are central problems in high-resolution land-cover mapping with machine-learning-based solutions. In this context, weak labels can be gathered in large quantities by leveraging existing low-resolution or obsolete products. In this paper, we address the problem of training land-cover classifiers using high-resolution imagery (e.g., Sentinel-2) and weak low-resolution reference data (e.g., MODIS-derived land-cover maps). Inspired by recent works in Deep Multiple Instance Learning (DMIL), we propose a method that trains pixel-level multi-class classifiers to predict low-resolution labels (i.e., patch-level classification), where the actual high-resolution labels are learned implicitly without direct supervision. This is achieved with flexible pooling layers that are able to link the semantics of the pixels in the high-resolution imagery to the low-resolution reference labels. The Multiple Instance Learning (MIL) problem is then framed in both a multi-class and a multi-label setting. In the former, the low-resolution annotation represents the majority of the pixels in the patch. In the latter, the annotation only indicates the presence of one of the land-cover classes in the patch, so multiple labels can be valid for a patch at a time even though the low-resolution data provide only one; the classifier is therefore trained with a Positive-Unlabeled Learning (PUL) strategy. Experimental results on the 2020 IEEE GRSS Data Fusion Contest dataset show the effectiveness of the proposed framework compared to standard training strategies.
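The link between pixel-level predictions and a single patch-level label via pooling can be illustrated as below. The paper uses flexible, trainable pooling layers inside the network; the hard majority and max pooling used here are simplifications that mirror the multi-class and multi-label readings respectively.

```python
import numpy as np

def patch_from_pixels(pixel_probs, mode="majority"):
    """Pool pixel-level class probabilities (H, W, C) to a patch-level label.
    'majority': class of most pixels (multi-class MIL reading);
    'presence': per-class max over pixels (multi-label MIL reading)."""
    n_classes = pixel_probs.shape[-1]
    if mode == "majority":
        votes = pixel_probs.argmax(axis=-1)
        return int(np.bincount(votes.ravel(), minlength=n_classes).argmax())
    # presence pooling: a class is 'present' if any pixel supports it strongly
    return pixel_probs.reshape(-1, n_classes).max(axis=0)
```

Training against the pooled output lets the low-resolution label supervise the pixel-level classifier indirectly, which is the DMIL mechanism described above.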
Performing object detection and recognition at the imaging sensor level raises many technical and scientific challenges. Today’s state-of-the-art detection performance is obtained with deep Convolutional Neural Network (CNN) models. However, reaching the expected CNN behavior in terms of sensitivity and specificity requires mastering the training dataset. We explore in this paper a fast and automated method to acquire images of vehicles in the infrared and visible ranges employing a commercial inspection drone equipped with thermal and visible-range cameras, associated with a dedicated data-augmentation method for the automated generation of context-specific machine learning datasets. The purpose is to successfully train a CNN to recognize vehicles in realistic outdoor situations in infrared or visible-range images, while reducing the mandatory access to the vehicles of interest and the need for complex and long outdoor image acquisitions. First results demonstrate the feasibility of our approach for training a deep neural network-based object detector for vehicle detection and recognition applications in aerial images.
In the context of remote sensing classification, the performance of data-driven models in identifying ground objects is influenced by the variability of conditions during the acquisition process. This includes atmospheric conditions, which strongly influence the radiance values collected by the hyperspectral camera and can hinder the generalization performance of the algorithms. Moreover, due to the difficulty of obtaining pixel-level annotations, hyperspectral models are typically trained on a limited quantity of data. Although these models may perform well on small validation datasets, their performance may not be adequate for real-world applications of hyperspectral imagery, which entail a wide range of conditions. This paper proposes an augmentation strategy to increase the diversity of the data and enhance the robustness of the model under different atmospheric conditions. To achieve this, a physics-based radiative transfer model is used first to correct the atmospheric effects and then to simulate new data under different atmospheric conditions. This step increases the diversity of the data by generating a wide range of conditions that the models may encounter in real-world applications. The method employs a 3D convolution-based model that extracts both spatial and spectral features for small-house detection on a proprietary dataset. The results demonstrate the efficacy of the proposed method: the augmented model outperforms the baseline model in terms of F1 score on the augmented test images and shows comparable performance in the original scenario.
The oil extraction process has cumulative detrimental impacts on the environment. In particular, oil mining releases large quantities of petroleum-based pollutants that severely affect soil and groundwater, posing a serious risk to the ecological environment and human health. Understanding the distribution of oil well sites is of vital importance to sustainable mining development. Efficiently mapping these sites requires automated identification and extraction of oil well sites from satellite images. With the development of remote sensing satellite technology and the wide application of deep learning-based algorithms, it has become possible to extract oil well sites from remote sensing images automatically. However, Sentinel-2 satellite data have so far seen little use for oil well site detection. We therefore conducted this work to explore the feasibility of detecting oil well sites by semantic segmentation of Sentinel-2 imagery. We established the Northeast Petroleum University Oil Well Sites Version 2.0 (NEPU-OWS V2.0) dataset, whose spatial coverage spans the Austin region of the United States. We then validated the usability and effectiveness of the dataset using semantic segmentation models based on DANet and Swin-Unet, which are well suited to recognizing small targets. Our experimental results show that both models have great potential for the remote sensing detection of medium-sized oil well sites, and the Swin-Unet model achieved the better performance with an MIoU of 77.53%.
Restricted by limited accessible data resources and the high cost of frame-by-frame labeling, fully supervised object detection struggles to meet the needs of satellite video applications. In this paper, we propose a weakly supervised satellite video detector based on salient feature fusion and boundary noise exploitation that enables moving ship detection without relying on object instance labeling. To mitigate pseudo-motion disturbances such as background movement, waves, and illumination changes, we first construct salient fusion features and then use Gaussian background modeling to generate high-quality pseudo-labels. To fully exploit the boundary information of the noisy masks in the pseudo-labels, we improve Mask R-CNN by designing a noise-tolerant branch that fuses low-resolution features to mitigate the interference of inaccurate mask boundaries, and by guiding the network to learn boundary-related region features through boundary-preserving mapping, so that the predicted mask aligns better with the actual object. Experimental results show that the proposed method significantly outperforms other weakly supervised moving object detection methods and achieves performance comparable to fully supervised methods.
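A minimal sketch of the Gaussian background modeling idea (not the authors' code): each pixel's temporal mean and standard deviation define a per-pixel Gaussian, and samples deviating by more than `k` standard deviations are pseudo-labelled as moving foreground. The threshold and the toy frame stack are assumptions for illustration:

```python
import numpy as np

def gaussian_background_pseudolabels(frames, k=3.0):
    """Per-pixel Gaussian background model over a (time, H, W) frame stack:
    pixels deviating more than k standard deviations from the temporal mean
    are marked as (pseudo-labelled) moving foreground."""
    mean = frames.mean(axis=0)
    std = frames.std(axis=0) + 1e-6     # guard against zero variance
    z = np.abs(frames - mean) / std
    return z > k                        # boolean foreground mask per frame

rng = np.random.default_rng(0)
frames = rng.normal(0.0, 1.0, size=(20, 8, 8))
frames[10, 4, 4] += 50.0                # inject a transient "moving ship" blip
masks = gaussian_background_pseudolabels(frames)
print(masks[10, 4, 4])                  # the blip is flagged as foreground
```

In the paper this step operates on salient fusion features rather than raw pixels, precisely to suppress wave and illumination pseudo-motion.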
The availability of high-resolution, open, and free satellite data has facilitated the production of global Land-Use-Land-Cover (LULC) maps, which are extremely important for constantly monitoring the Earth’s surface. However, generating these maps demands significant effort in collecting a vast amount of data to train the classifiers and to assess their accuracy. Although in-situ surveys are generally regarded as reliable sources of information, there may be inconsistencies between the in-situ data and the information derived from satellite data. This can be attributed to various factors: (1) differences in viewpoint perspectives, i.e., aerial versus ground views, and (2) the spatial resolution of the satellite images versus the extent of the Land-Cover (LC) present in the scene. The aim of this paper is to explore the feasibility of using geo-referenced street-level imagery to bridge the gap between the information provided by field surveys and satellite data. Unlike conventional in-situ surveys that typically provide geo-tagged, location-specific information on LULC, street-level images offer a richer semantic context for the sampling point under examination. This allows for (1) an improved interpretation of LC characteristics, and (2) a stronger correlation with satellite data. The experimental analysis was conducted considering the 2018 Land Use and Coverage Area Frame Survey (LUCAS) in-situ data, the LUCAS landscape (street-level) images, and three high-resolution thematic products derived from satellite data, namely Google’s Dynamic World, ESA’s World Cover, and Esri’s Land Cover maps.
This paper presents a system for linking Earth Observation data with open Web data in a Linked Open Data architecture. The architecture has two components: one for extracting signals from Earth Observation data, and another for harvesting web sources. Both are linked to spatial objects. Web-scraped data, whether from APIs or crowd-sourced websites, are geo-referenced and thematically annotated with standard vocabularies. The architecture has been demonstrated in two case studies: one on building permits and another on crowd-sourced observations of invasive aquatic plants.
Efficient crop management strategies and optimized agricultural practices are pivotal in maximizing overall crop yield. An essential aspect of improving crop yield is tracking the phenological development of crops, which plays a crucial role in carrying out timely crop management activities, including irrigation, fertilization, pest control, and harvest. However, the lack of resources to acquire data for phenological detection in under-developed countries, and the influence of climatic factors on phenology, pose significant challenges. Our work proposes a cost-effective methodology that harnesses the power of Earth Observation (EO) data to acquire essential ground data without relying on manual collection. With a focus on the South Asia region, we analyze EO data for monitoring wheat phenology and its dynamic interactions with climatic factors. The study focuses on five wheat phenological stages (stem elongation, heading, medium milk, hard dough, and harvest) from 2020 to 2022. Breakpoint and extrema analysis following curve fitting of the Normalized Difference Vegetation Index (NDVI) from Sentinel-2 data accurately detects heading, medium milk, hard dough, and harvest with a one-to-three-day average difference for both years. Stem elongation is detected with a seven-day difference in 2021 and 2022. Furthermore, our analysis reveals that a significant temperature surge in 2022, coupled with minimal precipitation, caused earlier maturation of the crop than in 2021. We thoroughly investigate this effect for 2021 and 2022 to assess the impact of the rate of change in weather conditions on wheat phenology. Embracing these findings can foster sustainable and productive agricultural practices.
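The extrema-analysis idea can be illustrated on a toy NDVI curve: after curve fitting, stage transitions correspond to local extrema and breakpoints of the fitted series. This is a simplified stand-in, with a synthetic seasonal curve and synthetic red/NIR bands as assumptions:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def local_extrema(series):
    """Indices of interior local maxima/minima of a 1-D series; a crude
    stand-in for the curve-fitted breakpoint/extrema analysis."""
    d = np.diff(series)
    maxima = np.where((d[:-1] > 0) & (d[1:] < 0))[0] + 1
    minima = np.where((d[:-1] < 0) & (d[1:] > 0))[0] + 1
    return maxima, minima

t = np.linspace(0.0, 1.0, 51)                 # one toy growing season
green = np.sin(np.pi * t) ** 2                # canopy greenness proxy
red = 0.3 - 0.2 * green                       # red dips as canopy greens up
nir = 0.3 + 0.4 * green                       # NIR rises with biomass
series = ndvi(nir, red)
maxima, minima = local_extrema(series)
print(maxima)                                 # → [25]  (single mid-season peak)
```

On real Sentinel-2 series the curve is first smoothed/fitted, and the detected peak (heading/maturity) and inflection-like breakpoints are mapped to the phenological stages.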
With the diffusion of advanced image editing software, image manipulation is becoming a pressing concern for satellite images as well. In a Copy-Move (CM) forgery, part of the image is copied and pasted elsewhere into the same image. In the satellite domain, CM can be performed with the intent of propagating misleading information on the geography and morphology of the landscapes pictured in the images. The best algorithms for CM detection rely on a multi-step procedure involving the extraction of image descriptors (keypoints), keypoint matching and, finally, clustering for the localisation of the forged area. The large size of many satellite images and their richness of detail often prevent the adoption of off-the-shelf tools developed for multimedia images. Due to the large number of keypoints typically present in satellite images, in fact, the computational complexity and memory requirements of SIFT keypoint extraction, matching, clustering and forgery localisation are prohibitive. In this paper, we propose a CM detection algorithm that can successfully process very high resolution satellite images on which off-the-shelf alternatives crash due to system memory exhaustion. The proposed algorithm is based on three main strategies powered by GPU acceleration: i) multi-threaded tile-based SIFT keypoint extraction, ii) optimised batch-based descriptor matching, iii) clustering and localisation of manipulated pixels exploiting tensors instead of a sliding-window approach. Experiments carried out on images belonging to the ESA WorldView-2 European Cities dataset and on a set of hand-made copy-move forgeries with resolution above one Gigapixel show the good performance of the proposed algorithm in terms of processing time and memory consumption.
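The batch-based matching strategy (ii) can be sketched as follows: rather than materializing the full distance matrix between all keypoint descriptors at once, distances are computed one batch at a time and Lowe's ratio test is applied per batch. This is a hypothetical NumPy sketch of the idea, not the paper's GPU implementation:

```python
import numpy as np

def match_descriptors_batched(desc_a, desc_b, batch=1024, ratio=0.8):
    """Nearest-neighbour descriptor matching with Lowe's ratio test,
    computed batch-by-batch so the full distance matrix never has to
    reside in memory at once."""
    matches = []
    for start in range(0, len(desc_a), batch):
        block = desc_a[start:start + batch]
        # squared Euclidean distances, shape (len(block), len(desc_b))
        d2 = ((block[:, None, :] - desc_b[None, :, :]) ** 2).sum(-1)
        order = np.argsort(d2, axis=1)
        best, second = order[:, 0], order[:, 1]
        rows = np.arange(len(block))
        # ratio test on squared distances: d_best < ratio * d_second
        keep = d2[rows, best] < (ratio ** 2) * d2[rows, second]
        matches += [(start + i, best[i]) for i in rows[keep]]
    return matches

rng = np.random.default_rng(1)
desc_b = rng.normal(size=(200, 32))
desc_a = desc_b[:50] + rng.normal(scale=1e-3, size=(50, 32))  # noisy copies
pairs = match_descriptors_batched(desc_a, desc_b, batch=16)
print(len(pairs))                        # all 50 noisy copies are matched
```

The batch size trades memory for speed; on a GPU the inner distance computation becomes a single tensor operation per batch.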
Recently, deep learning-based methods have been exploited to learn complex features from Satellite Image Time Series (SITS) with superior spatial, spectral, and temporal resolution for Land Cover Transition (LCT) analysis. However, to efficiently exploit High Resolution (HR) SITS for detecting LCTs, two challenges must be tackled: properly modelling the LC behavior and handling the intricacy of temporally dense SITS. A novel LCT detection approach is presented that exploits a pretrained Three-Dimensional (3D) Convolutional Neural Network (CNN) to simultaneously extract spatio-temporal information from multi-annual SITS and identify the LCTs. To highlight the changed pixels, a multi-feature hyper-temporal difference feature vector is generated that captures the intrinsic LC trends in space and time. To distinguish the different LCTs between two consecutive years for the changed pixels, a clustering step considers the temporal information of the difference hyper-features to discriminate and interpret the LCTs. The product is a map indicating the location of changed pixels and the type of each LCT. A preliminary analysis has been carried out over a region in the Sahel, Africa, with images acquired between 2015 and 2016. The proposed approach has been compared with an LCT detection approach using a 2D CNN. Experimental results confirm the effectiveness of the proposed approach in detecting LCTs.
The convergence of microwave technology and machine learning has fostered the development of smart radar systems. This paper introduces an energy-efficient radar system for object classification, employing microcomputer-mediated interaction between the radar and a deep learning model. Specifically, our approach focuses on Ground-Based Synthetic Aperture Radar (GBSAR) and utilizes a Raspberry Pi microcomputer to dynamically adjust the number of positions from which the GBSAR sensor obtains measurements. The system operates in two phases: it initially records the scene from a reduced number of positions, and then captures, from additional positions, those segments of the scene containing objects classified below a preset certainty. Experimental findings highlight consistent improvements in classification accuracy across all test scenarios. This methodology enhances both energy efficiency and classification outcomes, effectively balancing resource consumption and accuracy.
Interferometric phase unwrapping is one of the most challenging research topics for the remote sensing community. Recovering and correctly estimating the true interferometric phase signal from the received wrapped one provides critical information about changes in the Earth’s surface over time. Interferometric synthetic aperture radar (InSAR) has been widely used to extract such displacement estimates. However, InSAR images are often affected by Gaussian noise, whose presence can make the phase unwrapping process more difficult. In this paper, we introduce a convolutional deep learning-based network that performs simultaneous interferometric phase denoising and unwrapping. Quantitative and qualitative evaluations on synthetic and real-world InSAR data show that the proposed approach is able to produce accurate results even in the presence of strong noise.
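For intuition, the noise-free 1-D case can be solved exactly by the classical `numpy.unwrap` baseline: the true phase is wrapped into (-pi, pi], and unwrapping restores it by correcting jumps larger than pi. The learned network generalizes this to noisy 2-D interferograms, where such jump detection becomes ambiguous:

```python
import numpy as np

# Wrap a smooth phase ramp into (-pi, pi], then recover it.
true_phase = np.linspace(0.0, 12.0, 200)      # exceeds 2*pi several times
wrapped = np.angle(np.exp(1j * true_phase))   # wrapped interferometric phase
recovered = np.unwrap(wrapped)                # classical 1-D unwrapping

print(np.allclose(recovered, true_phase))     # True in the noise-free case
```

Strong noise breaks the assumption that neighbouring samples differ by less than pi, which is precisely why denoising and unwrapping benefit from being solved jointly.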
Maritime ship tracking is a crucial component of maritime surveillance, holding paramount importance in both military and civilian spheres. This study proposes a maritime ship tracking concept utilizing Synthetic Aperture Radar (SAR) constellations, along with a Detection-Matching-Tracking (DMT) implementation strategy. Specifically, we design a novel SAR ship detector capable of locating and segmenting all ships present within the image sequences provided by a SAR constellation. Following ship detection, we employ an enhanced two-channel convolutional neural network (2-channel CNN) to perform ship matching between the target ship and potential candidates. Ultimately, based on the matching results, we plot the space-time trajectory of the tracked ship. A preliminary experiment demonstrates that the proposed methodology is feasible and has the potential to track ships in open seas.
Plant Species Richness (PSR) is one of the most widely used metrics to estimate alpha diversity in ecology. Several approaches have been developed to estimate PSR with Remote Sensing (RS) data. Among them, the Spectral Diversity Hypothesis (SDH) approach can be successfully applied to airborne hyperspectral data. Although effective, these data are limited in space and time due to high aerial acquisition costs. Satellite multispectral data are continuously acquired on a global scale, but their spatial and spectral resolutions are not comparable to those of hyperspectral data. Although some studies have compared different optical data for estimating PSR using SDH, the impact of the spatial and spectral resolutions on the assessment of this biodiversity indicator is not clear. Moreover, most studies focus on dense tropical forest areas or wetlands, while little has been done to test the SDH approach in open forests located in Mediterranean regions. For all these reasons, the present work aims to: (1) apply and interpret PSR estimated with the SDH approach in an open Mediterranean forest, and (2) evaluate the impact of the spatial and spectral resolutions on PSR estimation using real and simulated RS data. The PSR was estimated by applying the SDH approach on 4 m hyperspectral data (373 bands), 30 m multispectral satellite data (7 bands), synthetic 16 m and 30 m hyperspectral data (373 bands), and synthetic 4 m multispectral data (7 bands).
Preliminary results obtained in the San Joaquin Experimental Range (SJER) indicate that: (1) there is a weak correlation between spectral and species diversity in the less dense forest areas (R2 = -0.13 for the hyperspectral and R2 = 0.14 for the multispectral data), while a good correlation is found in the denser forest areas (R2 = 0.68 for the hyperspectral and R2 = 0.65 for the multispectral data), (2) the number of identified spectral species is influenced more by the spectral resolution than by the spatial one, and (3) high spatial resolution data tend to overestimate the PSR in less dense forest areas because of the influence of background and understory vegetation.
Hyperspectral imaging is part of a growing remote sensing industry with applications in fields such as the food and agriculture industries. Labeling hyperspectral data cubes is a resource- and time-intensive task. To speed up the labeling procedure, we propose a semi-supervised machine learning methodology that improves labeling speed at the cost of computational resources. An experiment was designed to test the viability of this methodology. The gathered results show low hyperspectral label prediction (classification) accuracy using simple and fast neural networks.
In recent years, natural disasters have caused serious damage; landslides triggered by earthquakes are particularly damaging. However, it is difficult to predict when and where natural disasters will occur, so this study addresses the early detection of landslides. Synthetic Aperture Radar (SAR) is a remote sensing technology that uses microwaves and can observe day and night in all weather conditions. However, SAR data are grayscale images that are difficult to analyze without specialized knowledge. We therefore use machine learning to detect disaster-related changes that appear in SAR data. Two image-translation machine learning models, pix2pix and pix2pixHD, are considered. The objective of this study is to detect surface changes by generating pseudo-optical images from SAR data using machine learning. The two models were trained, and test images and actual disaster data were used as input. Simple terrain, such as forest-only areas, was generated with high accuracy, but complex terrain was difficult to generate. For the actual disaster data, features resembling disaster-induced changes appeared in the converted images. However, we found it difficult to distinguish bare areas from grassland in the output images. In the future, the combination of data used for training needs to be reconsidered.
We present a TensorFlow implementation of the RX algorithm for anomaly detection in multispectral and hyperspectral imagery. In this paper, we perform a runtime performance comparison of the algorithm implemented using the NumPy, SciPy and TensorFlow libraries on a CPU, a GPU (where applicable), and on edge hardware (Jetson TX2). The RX detection algorithm makes use of either local or global background statistics, such as the mean and covariance, to find anomalous pixels. In the approach examined here, the statistics are estimated using local background samples from the area neighboring the pixel under test. Such algorithms are typically implemented in Python using the NumPy library for numerical operations; however, a preliminary literature review found no formal investigations into the suitability of alternative frameworks for optimizing performance on edge hardware. Our TensorFlow (and SciPy) implementations use convolutional operations to calculate the required statistics, which significantly reduces the algorithm’s run time. We evaluate the implementation on a range of hardware in order to obtain a diverse set of results and to highlight the differences in run times. We also show a comparative set of implementations of a Matched Filter algorithm for target detection. This algorithm uses a very similar approach to the RX algorithm but is provided with a template target spectrum to detect within the image. Notable performance improvements (approximately a 98% reduction in run time) can be seen with the TensorFlow implementation on GPU. Results are demonstrated on multispectral imagery for ship detection.
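A minimal NumPy sketch of the local RX detector described above (window size and regularization are illustrative choices, not the paper's configuration): for each pixel, the background mean and covariance are estimated from the surrounding window, excluding the pixel under test, and the Mahalanobis distance serves as the anomaly score:

```python
import numpy as np

def rx_local(cube, pad=2):
    """Local RX anomaly detector: for every interior pixel, estimate mean
    and covariance from the surrounding (2*pad+1)^2 window (pixel under
    test excluded) and score the pixel by its Mahalanobis distance."""
    h, w, b = cube.shape
    scores = np.zeros((h, w))
    for i in range(pad, h - pad):
        for j in range(pad, w - pad):
            win = cube[i - pad:i + pad + 1, j - pad:j + pad + 1].reshape(-1, b)
            mask = np.ones(len(win), dtype=bool)
            mask[len(win) // 2] = False          # drop the pixel under test
            bg = win[mask]
            mu = bg.mean(axis=0)
            cov = np.cov(bg, rowvar=False) + 1e-6 * np.eye(b)  # regularized
            d = cube[i, j] - mu
            scores[i, j] = d @ np.linalg.solve(cov, d)
    return scores

rng = np.random.default_rng(0)
cube = rng.normal(size=(16, 16, 4))      # toy 16x16 scene with 4 bands
cube[8, 8] += 10.0                       # implant an anomalous pixel
scores = rx_local(cube)
print(np.unravel_index(scores.argmax(), scores.shape))  # anomaly at (8, 8)
```

The convolutional formulation in the paper replaces these explicit per-pixel loops with sliding-window operations, which is where the reported speed-ups come from.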
Monitoring selected anthropogenic sites through satellite data, in-situ data (including spectrometer, GPS and thermal camera measurements), open data, data from various devices, and Unmanned Aerial Vehicles (UAVs) is of extremely high ecological importance for tracking natural processes and the consequences of climate change, and for creating a useful machine learning model for the analysis of spectral characteristics. The timeliness of the data and the spatial extent of the observed objects make satellite information reliable for monitoring and for predicting the risk and potential risk of natural disasters, rising average air temperatures, and anthropogenic pollution. The sites were pre-selected based on open data from Non-Governmental Organizations (NGOs) and administrations. Data from the Multispectral Instrument (MSI) of the Sentinel-2 platform and SAR data from the European Space Agency's Copernicus programme, a spectrometer (380 nm to 780 nm), and drones were used. Landsat sensors and Sentinel-3 data (EUMETSAT) were used to calculate the surface temperature of renewable energy sites such as photovoltaic parks. Data from different years were used in order to track the studied territories according to NUTS2. The result is a useful hybrid model for spectral analysis and for tracking the spatial dynamics and surface changes of objects of interest, based on satellite and field surveys. The ground mobile and autonomous weather station AWG 1, powered by an environmentally friendly magnesium-air battery, was improved specifically for the project. Another important task is the creation of an energy atlas for the benefit of the Earth's Digital Twins. The data are part of an open data catalog of the NGO Eco Global Monitoring TA2.
Petroleum and gas pipelines, comprising pipes and related components, play an irreplaceable role in petroleum and gas transportation, and petroleum and gas are crucial natural resources for global economic growth. However, the pipelines often cross permafrost regions with challenging working conditions, and the potential for natural disasters raises concerns about pipeline accidents, threatening operational safety. In response to the complexity of pipeline supervision and management, we adopt a remote sensing approach combined with deep learning-based algorithms. In this work, we build a petroleum and gas pipe dataset of 1,388 remote sensing images whose study area covers Russian polar regions. We trained FCN and U-Net deep learning models on our self-built dataset for the detection of petroleum and gas pipes. Model performance was evaluated using MIoU (Mean Intersection over Union), mean precision, and mean recall, and the predictions were compared visually with the ground truth. Our results show that deep learning models can effectively learn the characteristics of pipelines and achieve good detection results on our dataset: the FCN model achieved an MIoU of 0.885 and the U-Net model 0.894. These results demonstrate that our trained models can accurately identify petroleum and gas pipelines in remote sensing images.
With the sharp increase in the number of images on satellites, the efficiency of satellite-to-ground data transmission has become a bottleneck that restricts the effectiveness of remote sensing satellites. To alleviate the pressure on data transmission, we conducted in-depth research on remote sensing satellite image compression technology. Traditional methods and existing deep learning methods are prone to losing detailed information when dealing with remote sensing satellite images with complex textures and rich details. Given that Generative Adversarial Networks (GANs) have advantages in texture generation and detail restoration, we propose a remote sensing satellite image compression method based on a conditional GAN. Our main innovations are: 1) a compression framework for remote sensing satellite images based on a conditional GAN, which improves reconstruction quality through adversarial learning between the conditional generator and discriminator; 2) a Laplacian of Gaussian loss for training the model, which emphasizes details such as edges, contours, and textures in remote sensing images; 3) multiple perceptual metrics for calculating the similarity between images, which comprehensively evaluate the quality of the reconstructed images. Experimental results show that our method achieves better visual quality and objective evaluation indicators than traditional methods and existing deep learning methods at the same compression ratio.
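The Laplacian of Gaussian loss in innovation 2) can be sketched as follows: both the original and reconstructed images are convolved with a discrete LoG kernel, which responds strongly to edges and texture, and the loss compares the responses. The kernel size, sigma, and L1 comparison are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, normalized to zero sum so
    flat image regions produce no response."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    r2 = x ** 2 + y ** 2
    k = (r2 - 2 * sigma ** 2) / sigma ** 4 * np.exp(-r2 / (2 * sigma ** 2))
    return k - k.mean()

def log_loss(img_a, img_b, kernel):
    """Mean absolute difference of LoG responses (valid convolution)."""
    ra = (sliding_window_view(img_a, kernel.shape) * kernel).sum((-2, -1))
    rb = (sliding_window_view(img_b, kernel.shape) * kernel).sum((-2, -1))
    return np.abs(ra - rb).mean()

k = log_kernel()
img = np.zeros((32, 32)); img[:, 16:] = 1.0         # sharp vertical edge
soft = np.cumsum(np.ones_like(img), axis=1) / 32.0  # smooth ramp, edge lost
print(log_loss(img, img, k), log_loss(img, soft, k))
```

Identical images give zero loss, while a reconstruction that smooths the edge away is penalized, which is exactly the behaviour wanted from an edge-emphasizing loss term.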
The Uttarakhand state of India is highly prone to landslides that frequently cause loss of life and property, so its careful study is of utmost importance. Recently, several types of machine learning algorithms have been developed and applied to produce landslide susceptibility maps in various world regions. In this study, landslide susceptibility assessment was undertaken in landslide-prone areas of Uttarakhand state (India) applying three machine learning algorithms: (a) Support Vector Machines (SVM), (b) Logistic Regression (LR), and (c) Multilayer Perceptron (MLP). The comparative performance of these methods has been evaluated using various statistical index-based methods. In developing the models, several important landslide-conditioning factors related to geomorphology, geology, and geo-environment, such as slope angle, elevation, slope aspect, curvature, rainfall, distance to faults, distance to roads, distance to rivers, land use and land cover, DSM, and DTM, have been identified and their relative importance explored. The models were trained for locations spanning the major landslide areas of Uttarakhand, including Sonprayag, Sitapur, Rampur, Kalimah, Madhya Maheshwar, Chamoli and Uttarkashi. Analysis of the results reveals that all the above-mentioned models performed well for landslide susceptibility assessment. Further, the deep learning-based MLP model performs better than the SVM and LR models, owing to its larger number of hidden layers. Several hyperparameter tuning studies have also been conducted to fine-tune the models.
Any high-spatial-resolution space-borne electro-optical sensing system operating at long wavelengths, such as an Earth-observation facility in the longwave infrared, faces an inherent design and implementation challenge: deploying a large monolithic primary mirror to achieve a ground resolution distance of a few tens of centimeters. To circumvent this issue, many present-day missions design and commission lightweight segmented mirrors, mostly with equal-sized sub-apertures. Going a step further, these sub-apertures could follow particular non-uniform size distributions (One-by-Three, Taylor-ln, and Taylor-invtan), yielding a smaller and even lighter primary with only a marginal compromise in imaging quality, thanks to significant sidelobe suppression. This is confirmed by the fact that these non-uniformly sized mirrors lose very little spatial frequency content relative to equal-sized segmented mirrors; under lossless conditions, there is hardly any degradation in imaging performance between the two configurations. However, in the presence of Gaussian, impulse, and shot noise, the situation worsens because of the reduced collecting area as well as the noise contribution. Simple deconvolution for image restoration in the presence of noise is no longer viable because it fails to converge. This calls for iterative reconstruction algorithms with denoisers such as Total Variation (TV), Block Matching and 3D Filtering (BM3D), or Convolutional Neural Networks (CNN) in the post-processing step, to produce output images with high Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) along with good preservation of edges and textures. This work presents a comparison of these three kinds of denoisers, TV, BM3D, and DnCNN, implemented within an Alternating Direction Method of Multipliers (ADMM) reconstruction scheme.
It is seen that, in the presence of some shot noise, random Gaussian noise with σ = 0.03, and some impulse noise, the best performance is achieved by the ADMM-BM3D technique, with comparable performance from the ADMM-DnCNN method (except for the Taylor-ln design). By contrast, TV denoising performs well only when shot noise alone is present. Moreover, TV is practically unusable for the Taylor-invtan model because of its extremely low SSIM when all three noise types are incorporated.
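The ADMM-plus-denoiser structure the abstract describes is often called Plug-and-Play ADMM: the data-fidelity step is a closed-form Fourier-domain least squares, and the proximal step is replaced by an off-the-shelf denoiser. The sketch below illustrates the idea with the TV variant only, using a synthetic scene, a small Gaussian blur as a stand-in for the segmented-aperture PSF, and additive Gaussian noise with σ = 0.03; none of the paper's actual aperture models or noise mixes are reproduced.

```python
# Sketch: Plug-and-Play ADMM deconvolution with a TV denoiser
# (assumptions: Gaussian blur PSF stand-in, Gaussian noise only).
import numpy as np
from numpy.fft import fft2, ifft2
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(0)

# Synthetic scene: a bright square on a dark background.
n = 64
x_true = np.zeros((n, n))
x_true[16:48, 16:48] = 1.0

# Gaussian blur kernel, centered, then shifted so its peak is at (0, 0).
yy, xx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2,
                     indexing="ij")
k = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
k /= k.sum()
K = fft2(np.fft.ifftshift(k))          # blur operator in Fourier domain

# Degraded observation: blur + Gaussian noise, sigma = 0.03.
y = np.real(ifft2(K * fft2(x_true))) + 0.03 * rng.standard_normal((n, n))

# Plug-and-Play ADMM iterations:
#   x-update: Fourier-domain least squares (data fidelity),
#   z-update: TV denoiser in place of the proximal operator,
#   u-update: scaled dual variable.
rho = 1.0
x, z, u = y.copy(), y.copy(), np.zeros_like(y)
for _ in range(30):
    rhs = np.conj(K) * fft2(y) + rho * fft2(z - u)
    x = np.real(ifft2(rhs / (np.abs(K)**2 + rho)))
    z = denoise_tv_chambolle(x + u, weight=0.05)
    u = u + x - z

psnr = 10 * np.log10(1.0 / np.mean((z - x_true)**2))
print(f"Reconstruction PSNR: {psnr:.1f} dB")
```

Swapping `denoise_tv_chambolle` for a BM3D or DnCNN call changes only the z-update, which is exactly the modularity that makes the three-denoiser comparison in the abstract possible.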