Deep learning has been widely used in super-resolution (SR) reconstruction in recent years. Most existing work is based on external examples: by training mapping functions from low-resolution (LR) image patches to high-resolution (HR) image patches, these methods have made great progress over traditional approaches. A few methods instead focus on a single image and exploit internal examples to obtain the HR image. Prior-knowledge-based methods learn a large number of nonlinear mapping functions through complex convolution kernels and significantly improve the reconstruction performance of the SR task. However, these external example-based methods require a large number of patch pairs to train the network parameters. Moreover, most LR training images are down-sampled from ground-truth images, whereas real-world LR images do not necessarily come from HR images: they may be degraded by noise, blur, and other factors, and some LR images have no corresponding ground-truth image at all. These shortcomings make the training of prior-knowledge-based methods very time-consuming and leave the reconstruction performance on specific images uncertain. Zero-shot SR (ZSSR) was the first to combine deep learning with internal examples, obtaining satisfactory HR images at test time. However, because ZSSR uses only the single image itself as the training dataset, directly learning mapping functions between LR and HR patches does not fully exploit the self-similarity within that image. In this paper, we further combine internal mapping with deep learning, learning internal mappings across different scales to produce HR images with finer details.
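The internal-example idea behind ZSSR can be made concrete with a small sketch: the test image itself is downscaled to create "son/father" training pairs, so a network can be trained without any external dataset. The helper names and the box-filter downscaler below are illustrative assumptions (ZSSR itself uses a bicubic kernel and random augmentations), not the authors' implementation.

```python
import numpy as np

def downscale(img, s):
    """Box-filter downscale by integer factor s — a crude stand-in for
    the bicubic kernel ZSSR actually uses; for illustration only."""
    h, w = img.shape[0] // s * s, img.shape[1] // s * s
    img = img[:h, :w]
    return img.reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def internal_pairs(lr_image, scale=2):
    """Generate a ZSSR-style training pair from the test image itself:
    the LR image plays the HR 'father', and a further-downscaled copy
    plays the LR 'son'. A small network trained on such pairs is then
    applied to the original image to produce the final HR output."""
    father = lr_image
    son = downscale(lr_image, scale)
    return son, father

# toy example: a 32x32 single-channel "test image"
img = np.random.rand(32, 32)
son, father = internal_pairs(img, scale=2)
print(son.shape, father.shape)  # (16, 16) (32, 32)
```

The key point is that the supervision signal comes entirely from cross-scale self-similarity inside the one image, which is exactly the property the paper proposes to exploit across multiple scales.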
The booming development of deep convolutional neural networks has recently brought great progress to super-resolution research. However, most existing work considers only a few integer scale factors. In this paper, we propose a dual-scale convolutional neural network (DSCNN) to solve the super-resolution problem for arbitrary scale factors. The magnifying module of the proposed DSCNN is designed with two chains. First, the large-scale chain learns the feature mappings from low-resolution (LR) image blocks to high-resolution (HR) image blocks, where the LR blocks are first magnified to the desired size via bicubic interpolation. Second, the small-scale chain learns the feature mappings from down-sampled image blocks to the magnified image blocks. Compared to existing SR networks, DSCNN has two advantages: (1) it can super-resolve images with arbitrary scale factors, and (2) it dynamically predicts the values, rather than the weights, of the interpolated pixels of HR images. Extensive experiments on widely used benchmark datasets show the superiority of the proposed DSCNN over state-of-the-art SR methods in terms of both numerical results and visual quality.
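The dual-chain input preparation described above can be sketched as follows: for an arbitrary (possibly non-integer) scale factor, the large-scale chain receives the LR block resized up to the target size, and the small-scale chain receives a further-downsampled block. The function names are hypothetical, and a simple bilinear resize stands in for the bicubic interpolation the abstract mentions; this is a sketch of the data flow, not the paper's network.

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Bilinear resize supporting arbitrary, non-integer scale factors
    (a stand-in for the bicubic interpolation used in the paper)."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :]   # horizontal interpolation weights
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

def dual_scale_inputs(lr, scale):
    """Prepare the two chain inputs for an arbitrary scale factor:
    the large-scale chain sees the LR block upsampled to the target
    size; the small-scale chain sees a further-downsampled block."""
    h, w = lr.shape
    large = bilinear_resize(lr, round(h * scale), round(w * scale))
    small = bilinear_resize(lr, round(h / scale), round(w / scale))
    return large, small

lr = np.random.rand(20, 20)
large, small = dual_scale_inputs(lr, 1.5)  # non-integer factor
print(large.shape, small.shape)  # (30, 30) (13, 13)
```

Because the target size is computed from the requested factor rather than baked into a fixed deconvolution stride, the same module can serve any magnification, which is advantage (1) claimed in the abstract.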