To address the non-uniform response of large-array staring remote sensing cameras caused by factors such as the fabrication process and operating mode, we propose a non-uniformity correction method based on an improved total variation model. First, an improved total variation model with a norm fidelity term is constructed from the distortion model of the imaging system. Next, the improved total variation model is solved iteratively using the split Bregman method. Then, during the iterative optimization, the wavelet coefficients of the intermediate correction results are reconstructed using the wavelet transform, and the high-frequency wavelet coefficients that degrade image quality are processed. Finally, the optimal iterate obtained after wavelet reconstruction is taken as the final correction result, completing the non-uniformity correction of the remote sensing image. Experimental results show that the correction performance of the proposed method is superior to that of the comparison methods: after correction, the non-uniformity coefficient of the image is reduced by a factor of more than three and the signal-to-noise ratio is increased by a factor of more than 20, effectively eliminating the non-uniform response while preserving image detail.
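To make the split Bregman step concrete, the following minimal sketch implements plain anisotropic total variation denoising via split Bregman with periodic boundaries and an FFT-based u-update. The fidelity weight `mu`, penalty `lam`, and iteration count are illustrative placeholders; the paper's improved model (with its norm fidelity term and wavelet post-processing) is not reproduced here.

```python
import numpy as np

def split_bregman_tv(f, mu=20.0, lam=1.0, n_iter=30):
    """Anisotropic TV denoising via split Bregman (illustrative sketch,
    not the paper's improved model). Solves
    min_u |grad u|_1 + (mu/2)||u - f||^2 with periodic boundaries."""
    f = f.astype(np.float64)
    ny, nx = f.shape
    # Eigenvalues of D^T D (negative periodic Laplacian) for the FFT solve.
    wx = 2.0 * np.pi * np.fft.fftfreq(nx)
    wy = 2.0 * np.pi * np.fft.fftfreq(ny)
    lap = (2 - 2 * np.cos(wy))[:, None] + (2 - 2 * np.cos(wx))[None, :]
    denom = mu + lam * lap

    u = f.copy()
    dx = np.zeros_like(f); dy = np.zeros_like(f)
    bx = np.zeros_like(f); by = np.zeros_like(f)
    for _ in range(n_iter):
        # u-subproblem: (mu + lam * D^T D) u = mu*f + lam * D^T (d - b).
        rx, ry = dx - bx, dy - by
        rhs = mu * f + lam * ((np.roll(rx, 1, axis=1) - rx)
                              + (np.roll(ry, 1, axis=0) - ry))
        u = np.real(np.fft.ifft2(np.fft.fft2(rhs) / denom))
        # Forward differences of u.
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        # d-subproblem: soft shrinkage with threshold 1/lam.
        dx = np.sign(ux + bx) * np.maximum(np.abs(ux + bx) - 1.0 / lam, 0.0)
        dy = np.sign(uy + by) * np.maximum(np.abs(uy + by) - 1.0 / lam, 0.0)
        # Bregman variable update.
        bx = bx + ux - dx
        by = by + uy - dy
    return u
```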
This paper proposes a new method for detecting small infrared targets, addressing the low detection probability (DP) and high false alarm probability (FAP) caused by false alarm sources such as bright background edges and independent noise points. The method employs a three-layer window for local contrast calculation to obtain a more accurate reference value for the background, which enhances real targets and suppresses complex backgrounds. It also handles multi-scale target detection and independent noise removal by applying rank-order filtering within a fixed central window. Furthermore, targets are enhanced by incorporating the gray-level distributions of their edges into the contrast calculation, thereby improving the DP and reducing the FAP. Experimental validation on several infrared sequences and images confirms the effectiveness and robustness of the proposed method, which outperforms five existing algorithms in terms of DP and FAP.
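The idea of a layered window with a ring-based background reference can be sketched as follows; this is an illustrative local contrast measure, not the authors' exact three-layer formulation, and the window half-sizes `t` and `k` and the product-form contrast are assumptions.

```python
import numpy as np
from scipy.ndimage import maximum_filter, uniform_filter

def local_contrast_map(img, t=2, k=4):
    """Illustrative layered-window local contrast for small IR targets
    (a sketch, not the paper's exact measure). The inner window gives a
    target estimate; the surrounding ring gives a background reference."""
    img = img.astype(np.float64)
    center = maximum_filter(img, size=2 * t + 1)       # inner-window target estimate
    inner = uniform_filter(img, size=2 * t + 1)        # mean over inner window
    outer = uniform_filter(img, size=2 * (t + k) + 1)  # mean over full window
    n_in = (2 * t + 1) ** 2
    n_out = (2 * (t + k) + 1) ** 2
    # Mean of the outer ring only: remove the inner window's contribution.
    ring = (outer * n_out - inner * n_in) / (n_out - n_in)
    # Bright small targets stand out from the ring; flat regions and
    # extended edges produce little or no response.
    return np.maximum(center * (center - ring), 0.0)
```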
To address the difficulties in camouflage target detection, such as the high similarity between the target and its background, severely degraded edges, and the strong concealment of the target, a YOLO-based camouflage object detection algorithm built on strong semantic information and feature fusion is proposed. First, a convolutional block attention module (CBAM) is constructed to highlight important channel features and target spatial locations, further aggregating rich semantic information from the high-level feature map. Then, an atrous spatial pyramid pooling module is constructed to repeatedly sample the multiscale feature maps, expanding the receptive field of the network, reducing feature sparsity during convolution, and ensuring that dense features and multiscale contextual semantic information enter the feature fusion module. Finally, attention skip-connections based on the CBAM are constructed to fuse the original feature maps extracted by the backbone network into the corresponding detection outputs, eliminating redundant features and enriching the target information of the network outputs. To fully verify the performance of the proposed algorithm, a camouflage target detection dataset named the strong camouflage efficiency target dataset (SCETD) is constructed. Experimental results on SCETD show that the precision and recall of the proposed algorithm reach 96.1% and 87.1%, respectively, and the AP0.5 and AP0.5:0.95 reach 92.3% and 54.4%, respectively, proving the effectiveness of the proposed method in detecting camouflage targets.
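The CBAM block used throughout the detector is a published, standard module (channel attention followed by spatial attention). The sketch below is a conventional PyTorch implementation with the common default reduction ratio and kernel size, which may differ from the paper's settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Convolutional block attention module: channel attention followed
    by spatial attention. Standard formulation; reduction=16 and a 7x7
    spatial kernel are the usual defaults, assumed here."""

    def __init__(self, channels, reduction=16, kernel_size=7):
        super().__init__()
        # Shared MLP (1x1 convs) for channel attention.
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # Single conv over [avg, max] channel maps for spatial attention.
        self.spatial = nn.Conv2d(2, 1, kernel_size,
                                 padding=kernel_size // 2, bias=False)

    def forward(self, x):
        # Channel attention: squeeze spatial dims with avg and max pooling.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: squeeze channels with avg and max, then conv.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```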
Fusing the specificity of natural-scene-statistics-based methods with the generality of convolutional-neural-network-based methods, a no-reference image quality assessment (IQA) method fusing deep learning and statistical visual features (FDSVIQA) is proposed. For the statistical visual features, a local normalized luminance map and a local normalized local binary pattern (LBP) map of the image are constructed, and local normalized luminance features and gradient-weighted local normalized LBP features are extracted from the two maps, respectively. These two kinds of features are concatenated to build the statistical visual features of the image. For deep learning, local normalized luminance blocks and local normalized LBP blocks are fed into a dual-path deep network, and the statistical visual features are injected into the network to be integrated with the deep features; after learning and training, IQA is achieved. The performance of the proposed FDSVIQA algorithm is tested on the Laboratory for Image and Video Engineering (LIVE) database, the LIVE Multiply Distorted Image Quality Database, and the Multiply Distorted Optics Remote Sensing Image database. Experimental results show that FDSVIQA achieves excellent consistency between subjective and objective scores and good robustness for both distorted natural images and distorted remote sensing images. In addition, FDSVIQA exhibits database independence.
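The local normalized luminance map is the standard mean-subtracted, contrast-normalized (MSCN) representation from natural scene statistics. A minimal sketch, using the conventional Gaussian weighting (the exact window parameters of the paper are not assumed):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_map(img, sigma=7 / 6, c=1.0):
    """Local normalized luminance (MSCN coefficients) as used in
    NSS-based IQA: each pixel is normalized by a Gaussian-weighted
    local mean and local standard deviation."""
    img = img.astype(np.float64)
    mu = gaussian_filter(img, sigma)                 # local mean
    var = gaussian_filter(img * img, sigma) - mu * mu
    sd = np.sqrt(np.maximum(var, 0.0))               # local std (clamped)
    return (img - mu) / (sd + c)                     # c avoids division by zero
```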
Considering the relatively poor real-time performance of transform-domain feature extraction and the insufficiency of spatial-domain feature extraction, a no-reference remote sensing image quality assessment method based on gradient-weighted spatial natural scene statistics is proposed. A 36-dimensional feature vector is constructed by extracting, at three scales, the local normalized luminance features and the gradient-weighted local binary pattern features of the local normalized luminance map. First, a support vector machine (SVM) classifier is obtained by learning the relationship between image features and distortion types. Then, based on the SVM classifier, support vector regression (SVR) scorers are obtained by learning the relationship between image features and image quality scores. Comparative experiments were carried out on the optics remote sensing image database and on the LIVE, LIVEMD, and TID2013 databases. Experimental results show the high accuracy of the method in distinguishing distortion types, its high consistency with subjective scores, and its high robustness for remote sensing images. Experiments also show the database independence and the relatively high computational efficiency of the method.
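The classifier-then-scorer structure follows the familiar two-stage framework: an SVM predicts the distortion type, and distortion-specific SVR models map features to quality. A minimal sketch with scikit-learn, assuming 36-D feature vectors and a probability-weighted final score (the weighting scheme is an assumption, not confirmed by the abstract):

```python
import numpy as np
from sklearn.svm import SVC, SVR

def train_two_stage(features, distortion_labels, mos):
    """Illustrative two-stage pipeline: SVC for distortion type,
    one SVR quality scorer per distortion type."""
    clf = SVC(kernel="rbf", probability=True).fit(features, distortion_labels)
    scorers = {}
    for d in np.unique(distortion_labels):
        idx = distortion_labels == d
        scorers[d] = SVR(kernel="rbf").fit(features[idx], mos[idx])
    return clf, scorers

def predict_quality(clf, scorers, feat):
    """Score a single 36-D feature vector: weight each distortion-specific
    SVR prediction by the classifier's posterior probability."""
    feat = feat.reshape(1, -1)
    probs = clf.predict_proba(feat)[0]
    return sum(p * scorers[d].predict(feat)[0]
               for p, d in zip(probs, clf.classes_))
```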
The scattering phase function of horizontally oriented ice particles near the specular reflection direction is modeled analytically using a mixed method that combines direct-reflection and Fraunhofer-diffraction components, where the particles are treated simply as circular facets and the effect of fluttering is introduced under the assumption of a Gaussian distribution. The resulting model expression reveals that the far-field scattering around the specular direction is, in essence, the diffraction pattern modulated by fluttered geometric reflection. Four groups of experiments at different wavelengths and incidence angles are designed to validate the model, and the calculated phase functions show good agreement, in both distribution and peak value, with those of the T-matrix method combined with a Monte Carlo stochastic process.
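A minimal numerical sketch of this picture, assuming the circular facet produces an Airy diffraction pattern and that Gaussian flutter acts as an angular smearing of that pattern (a plausible reading of the model, not the paper's exact expression; the handling of the reflection angle doubling is absorbed into the flutter width):

```python
import numpy as np
from scipy.special import j1

def specular_peak(theta, radius, wavelength, flutter_sigma_deg):
    """Relative (unnormalized) near-specular intensity for a fluttering
    circular facet: Fraunhofer (Airy) diffraction of a disc of radius
    `radius`, smeared by a Gaussian flutter of width flutter_sigma_deg.
    `theta` is a uniform grid of angular offsets (rad) from specular."""
    k = 2 * np.pi / wavelength
    x = k * radius * np.sin(theta)
    x = np.where(np.abs(x) < 1e-12, 1e-12, x)   # avoid 0/0 at the peak
    airy = (2 * j1(x) / x) ** 2                 # diffraction by a circular facet
    # Gaussian flutter as a convolution on the angular grid.
    sig = np.deg2rad(flutter_sigma_deg)
    dth = theta[1] - theta[0]
    n = max(1, int(4 * sig / dth))
    kern = np.exp(-0.5 * (np.arange(-n, n + 1) * dth / sig) ** 2)
    kern /= kern.sum()
    return np.convolve(airy, kern, mode="same")
```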