Deformable image registration is a fundamental image processing task, widely used in medical image processing and analysis. Unlike rigid registration, its goal is to find the optimal nonlinear transformation between two images and establish a correspondence between them, so as to bring the images into alignment. In recent years, deformable registration methods based on deep learning have been studied extensively; compared with traditional methods, they show clear advantages in registration performance. This paper proposes an attention-based residual neural network for deformable image registration. It adopts the U-Net encoder-decoder structure to design a convolutional neural network that predicts the deformation field, uses residual modules and an attention mechanism to enhance the model's ability to extract features, and finally applies a spatial transformation function to obtain the registered image; the entire network is trained in an unsupervised manner. We conducted experiments on the MNIST dataset and on 2D brain magnetic resonance images (MRI). The experimental results show that the proposed deformable registration network performs well and achieves good registration accuracy.
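The abstract does not specify the warping operator, but the spatial transformation step it describes is conventionally implemented as differentiable bilinear resampling of the moving image under the predicted deformation field. A minimal NumPy sketch (function name and `(dy, dx)` displacement convention are our own assumptions, not taken from the paper):

```python
import numpy as np

def warp_image(moving, flow):
    """Warp a 2D image with a dense displacement field via bilinear sampling.

    moving: (H, W) float array -- the image to be registered.
    flow:   (H, W, 2) per-pixel displacements (dy, dx) -- in the described
            network this would be the deformation field the CNN predicts.
    Sampling coordinates outside the image are clamped to the border.
    """
    H, W = moving.shape
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Source coordinates: identity grid plus displacement, clamped in-bounds.
    sy = np.clip(ys + flow[..., 0], 0, H - 1)
    sx = np.clip(xs + flow[..., 1], 0, W - 1)
    y0 = np.floor(sy).astype(int)
    x0 = np.floor(sx).astype(int)
    y1 = np.minimum(y0 + 1, H - 1)
    x1 = np.minimum(x0 + 1, W - 1)
    wy = sy - y0
    wx = sx - x0
    # Bilinear blend of the four neighbouring pixels.
    return ((1 - wy) * (1 - wx) * moving[y0, x0]
            + (1 - wy) * wx * moving[y0, x1]
            + wy * (1 - wx) * moving[y1, x0]
            + wy * wx * moving[y1, x1])
```

In unsupervised training of this kind, the warped moving image would be compared with the fixed image by a similarity loss, with gradients flowing back through the sampling into the deformation-field predictor.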
Although deep learning-based stereo matching networks have made significant progress, their ability to find correspondences in ill-conditioned regions (weak textures, repeated textures, occluded regions, etc.) still needs to be improved. To address these problems, an hourglass stereo matching network is proposed, built mainly around hourglass structures. First, an hourglass feature extraction module uses global context information to obtain more detailed features; the extracted feature information is then aggregated to construct a cost volume. In the three-dimensional convolution module, multiple hourglass structures refine the disparity, and intermediate supervision regularizes the cost volume; this fuses the information again and makes better use of global context. Finally, the disparity map is obtained through disparity regression. Validation on the Scene Flow and KITTI datasets shows that the proposed method maintains better performance while reducing parameters, significantly reduces the error in ill-conditioned regions, and achieves competitive results.
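The abstract does not give the regression formula, but disparity regression over a cost volume is commonly realized as a soft argmin: each pixel's disparity is the expectation of the candidate disparities under a softmax of the negated matching costs, which yields sub-pixel estimates and is differentiable. A sketch under that assumption (the function name and cost-volume layout are illustrative, not from the paper):

```python
import numpy as np

def disparity_regression(cost, max_disp):
    """Soft-argmin disparity regression over a matching cost volume.

    cost: (D, H, W) array of matching costs, lower = better match,
          where D == max_disp candidate disparities.
    Returns an (H, W) sub-pixel disparity map:
        d_hat(x, y) = sum_d d * softmax(-cost)[d, y, x]
    """
    logits = -cost
    # Numerically stable softmax over the disparity axis.
    logits = logits - logits.max(axis=0, keepdims=True)
    w = np.exp(logits)
    w = w / w.sum(axis=0, keepdims=True)
    disps = np.arange(max_disp, dtype=float).reshape(-1, 1, 1)
    return (w * disps).sum(axis=0)
```

Because the output is a cost-weighted average rather than a hard argmin, the same operator can also be applied to the intermediate cost volumes produced by each stacked hourglass, which is what makes intermediate supervision straightforward.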