Classification and identification of materials on the Earth’s surface have long been a fundamental yet challenging research topic in geoscience and remote sensing (RS). Although deep learning has achieved notable results in remote sensing image classification, the classification of multi-modality remote sensing data remains challenging. In this work, we propose a deep learning multi-modality remote sensing image classification network based on pixel-level cross fusion, called DMLCF-Net. Specifically, DMLCF-Net provides a unified deep learning framework for multi-modality remote sensing image classification and introduces a cross-fusion strategy to classify multi-modality remote sensing images. The cross-fusion module obtains compact feature representations from multi-modality remote sensing data and enables different modalities to exchange information with each other effectively. To validate the proposed scheme, extensive experiments conducted on multi-modality remote sensing datasets demonstrate the effectiveness and superiority of the proposed DMLCF-Net in comparison with several state-of-the-art multi-modality remote sensing data classification methods.
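The pixel-level cross-fusion idea can be illustrated with a minimal sketch. The actual DMLCF-Net module is not specified here, so the function below is only an assumed toy version: each modality's feature map is updated with a weighted share of the other's at every pixel, so the two branches exchange information. The function name `cross_fuse`, the mixing weight `alpha`, and the toy feature maps are all hypothetical.

```python
import numpy as np

def cross_fuse(feat_a, feat_b, alpha=0.5):
    """Illustrative pixel-level cross fusion (assumed, not the paper's module):
    each modality's feature map absorbs a weighted share of the other's,
    so information is exchanged at every spatial position."""
    fused_a = (1 - alpha) * feat_a + alpha * feat_b
    fused_b = (1 - alpha) * feat_b + alpha * feat_a
    return fused_a, fused_b

# Two toy modality feature maps (H x W x C), e.g. optical and LiDAR features
feat_opt = np.ones((4, 4, 8))           # modality 1
feat_lidar = np.full((4, 4, 8), 3.0)    # modality 2

fa, fb = cross_fuse(feat_opt, feat_lidar, alpha=0.5)
print(fa[0, 0, 0], fb[0, 0, 0])  # both branches now carry mixed information
```

In a real network the mixing weights would typically be learned rather than fixed, and the fused features would feed subsequent convolutional layers of each branch.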