Jie Yang, Chao Ren, Xin Zhou, Xiaohai He, Zhengyong Wang
Journal of Electronic Imaging, Vol. 28, Issue 6, 063001 (November 2019). https://doi.org/10.1117/1.JEI.28.6.063001
TOPICS: Remote sensing, Wavelets, Super resolution, Image processing, Convolution, Image fusion, Convolutional neural networks, Image classification, Wavelet transforms
Super-resolution (SR), which aims at recovering a high-resolution (HR) image from single or sequential low-resolution (LR) images, is a widely used technique in image processing. In the field of image SR, convolutional neural networks (CNNs) have attracted increasing attention because of their high-quality performance. However, most CNN-based methods treat all channel-wise features equally, which limits their discriminative learning ability across feature channels. Furthermore, many methods fail to make full use of the information from each convolutional layer. To address these problems, we propose a remote sensing image SR method named dense channel attention network (DCAN). In our DCAN, a sequence of residual dense channel attention blocks (RDCABs) is cascaded with a densely connected structure. Within each RDCAB, densely connected convolutional layers make full use of the information from all preceding convolution layers, and a channel attention mechanism adaptively recalibrates channel-wise feature responses by explicitly modeling the interdependencies between channels. Moreover, DCAN exploits hierarchical features by densely connecting the RDCABs themselves. Finally, to further improve SR performance, the proposed DCAN is learned in both the pixel and wavelet domains, and a fusion layer fuses the outputs of the two domains. Extensive quantitative and qualitative evaluations verify the superiority of the proposed method over several state-of-the-art methods.
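To illustrate the two mechanisms the abstract highlights, the following is a minimal PyTorch sketch of a residual dense block with channel attention. The layer count, growth rate, and reduction ratio are illustrative assumptions, not the authors' exact configuration, and the wavelet-domain branch and fusion layer are omitted.

```python
# Hedged sketch of a residual dense channel attention block (RDCAB-style).
# Hyperparameters (channels, growth, n_layers, reduction) are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention: global average pooling
    followed by a bottleneck that produces per-channel scaling weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),                              # per-channel weights in (0, 1)
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))               # recalibrate channel responses

class RDCAB(nn.Module):
    """Residual dense block with channel attention (sketch): densely connected
    3x3 convolutions, a 1x1 fusion, channel attention, and a residual skip."""
    def __init__(self, channels=64, growth=32, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, growth, 3, padding=1),
                nn.ReLU(inplace=True),
            ))
            in_ch += growth                            # dense connections grow the input
        self.fuse = nn.Conv2d(in_ch, channels, 1)      # local feature fusion
        self.ca = ChannelAttention(channels)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        out = self.ca(self.fuse(torch.cat(feats, dim=1)))
        return out + x                                 # local residual learning

# Example: a 64-channel feature map passes through one block with its shape unchanged.
if __name__ == "__main__":
    block = RDCAB()
    y = block(torch.randn(1, 64, 48, 48))
    print(y.shape)  # torch.Size([1, 64, 48, 48])
```

In the full network described by the abstract, several such blocks would themselves be densely connected, and the pixel-domain and wavelet-domain predictions would be combined by a learned fusion layer.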