SPIE Journal Paper | 15 November 2023
KEYWORDS: Image fusion, Image classification, Vegetation, Hyperspectral imaging, Soil science, Sand, Land cover, Satellites, Sensors, Signal to noise ratio
Soil maps are essential sources for a diverse range of agricultural and environmental studies; hence, the detection of soil properties using remote sensing technology is an active research topic. Satellites carrying hyperspectral sensors offer possibilities for the estimation of soil properties. However, the main obstacle in soil classification with remote sensing methods is vegetation, whose spectral signature mixes with that of the soil. The objective of this study is to detect soil texture properties after eliminating the effects of vegetation using hyperspectral imaging data and reducing the noise by fusion. First, the endmembers common to all images and their abundances are determined. The endmembers are then classified as stable (soil, rock, etc.) or unstable (green vegetation, dry vegetation, etc.). The method eliminates vegetation from the images with orthogonal subspace projection (OSP) and fuses multiple images with a weighted mean for a better signal-to-noise ratio. Finally, the fused image is classified to obtain the soil maps. The method is tested on synthetic images and on Hyperion hyperspectral images of an area in Texas, United States. With three synthetic images, the individual classification accuracies are 89.14%, 89.81%, and 93.79%. After OSP, these rates increase to 92.23%, 93.13%, and 95.38%, respectively, while fusion increases the accuracy to 96.97%. With real images from 22/06/2013, 25/09/2013, and 24/10/2013, the classification accuracies increase from 70.51%, 68.87%, and 63.18% to 71.96%, 71.78%, and 64.17%, respectively. Fusion provides a larger improvement, with a 75.27% accuracy. The analysis of the real images from 2016 yields similar improvements: the classification accuracies increase from 57.07%, 62.81%, and 63.80% to 58.99%, 63.93%, and 66.33%, respectively, and fusion again gives the best accuracy of 69.02%.
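The two core operations described above, orthogonal subspace projection of the vegetation endmembers and signal-to-noise-ratio-weighted fusion, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, array shapes, and the assumption that the fusion weights are simply proportional to each image's SNR are choices made here for clarity.

```python
import numpy as np

def osp_project(pixels, undesired):
    """Project spectra onto the subspace orthogonal to the undesired
    (e.g., vegetation) endmember signatures.

    pixels:    (n_pixels, n_bands) array of pixel spectra
    undesired: (n_bands, n_undesired) matrix U of unstable endmembers
    """
    U = undesired
    # OSP operator: P = I - U (U^T U)^{-1} U^T
    # (pinv handles the case where U^T U is ill-conditioned)
    P = np.eye(U.shape[0]) - U @ np.linalg.pinv(U.T @ U) @ U.T
    # P is symmetric, so pixels @ P projects every spectrum at once
    return pixels @ P

def snr_weighted_fusion(images, snrs):
    """Fuse co-registered images with a weighted mean, weights
    proportional to each image's estimated SNR (an assumption here)."""
    w = np.asarray(snrs, dtype=float)
    w /= w.sum()
    # contract the weight vector against the stacked image cube
    return np.tensordot(w, np.stack(images), axes=1)
```

After projection, every spectrum is orthogonal to the vegetation signatures, so their contribution is suppressed before classification; the weighted mean then averages down the uncorrelated noise across the projected images.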
The results show that the method can improve classification accuracy through the elimination of vegetation and the fusion of multiple images. The approach is promising and can be applied to various other classification tasks.