Hyperspectral images contain rich spectral–spatial information. Therefore, exploring spectral–spatial classifiers has become a mainstream trend in the field of hyperspectral image classification. However, current studies seldom delve into the contribution of the extracted spectral and spatial features to the subsequent classification task. To further explore this contribution, we propose a classification framework based on the capsule network (CapsNet). The framework consists of two branches, which extract spectral and spatial features, respectively, to form capsules. Subsequently, the spectral capsules and spatial capsules are sent to the dynamic routing layer to generate higher-level spectral–spatial capsules. In addition, we set up a primary–secondary relationship between the two branches, which indirectly reflects the contribution of the lower-level capsules to the higher-level capsules in the feature fusion stage. Our experiments, conducted on three widely used hyperspectral datasets under two sampling strategies, demonstrate that our proposed model achieves 4.60% to 8.80% improvements in overall accuracy (OA) over the original CapsNet. Compared with the improved capsule-based models (iCapsNet and Conv-Caps), our model also achieves a 1.20% to 2.50% improvement.
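The abstract describes fusing spectral and spatial capsules through a dynamic routing layer, with the routing coefficients indirectly reflecting each branch's contribution. The following is a minimal NumPy sketch of routing-by-agreement over the concatenated capsules of two hypothetical branches; the capsule counts, dimensions, and transformation matrix `W` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def squash(v, axis=-1, eps=1e-8):
    # Squashing nonlinearity: preserves direction, maps the norm into [0, 1).
    sq = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq / (1.0 + sq)) * v / np.sqrt(sq + eps)

def dynamic_routing(u_hat, num_iters=3):
    # u_hat: (num_lower, num_upper, dim_upper) prediction vectors.
    n_low, n_up, _ = u_hat.shape
    b = np.zeros((n_low, n_up))  # routing logits
    for _ in range(num_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum per upper capsule
        v = squash(s)                                         # higher-level capsules
        b = b + np.einsum('ijk,jk->ij', u_hat, v)             # agreement update
    return v, c

# Hypothetical lower capsules from a spectral branch and a spatial branch.
rng = np.random.default_rng(0)
spectral_caps = rng.normal(size=(8, 16))   # 8 spectral capsules, dim 16 (assumed)
spatial_caps = rng.normal(size=(8, 16))    # 8 spatial capsules, dim 16 (assumed)
lower = np.concatenate([spectral_caps, spatial_caps], axis=0)  # (16, 16)

num_classes, dim_up = 9, 8                 # e.g., 9 land-cover classes (assumed)
W = rng.normal(scale=0.1, size=(lower.shape[0], num_classes, 16, dim_up))
u_hat = np.einsum('id,ijdo->ijo', lower, W)  # per-pair prediction vectors

v, c = dynamic_routing(u_hat)
# Mean coupling from each branch's capsules hints at its contribution per class.
spectral_contrib = c[:8].mean(axis=0)
spatial_contrib = c[8:].mean(axis=0)
```

The capsule norms in `v` serve as class scores, and comparing `spectral_contrib` with `spatial_contrib` gives one way to read off how strongly each branch's capsules couple to each higher-level spectral–spatial capsule.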
Keywords: Hyperspectral imaging, Image classification, Data modeling, Performance modeling, Feature extraction, Statistical modeling, Principal component analysis