Image registration is a central pre-processing step in medical image processing and plays a crucial role in applications such as medical image analysis, diagnostics, and surgical navigation. Existing deformable registration methods have focused mainly on efficiently extracting features from the two images and automatically deriving the deformation field from them. However, these methods tend to overlook the inherently low-level nature of deformable registration, for which elaborate feature extraction is arguably unnecessary. Instead, deformable registration should place more emphasis on detecting the dissimilarities between the two images, thereby yielding a more accurate deformation field. In this work, we present an unsupervised deformable image registration network, the Cross-Matching-Based Deformable Image Registration Network (CMBMorph). We introduce a cross-matching mechanism designed to facilitate learning the feature disparities between the two images. By focusing attention on the non-aligned regions of the two images, our model achieves high-quality medical image registration. We evaluate the proposed method on publicly available datasets; compared with baseline models and state-of-the-art techniques, it achieves a Dice score of 0.812, establishing a new state of the art.
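The abstract does not give implementation details, but the core idea of attending to non-aligned regions can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration, not the paper's actual architecture: it assumes per-pixel feature maps `fixed_feat` and `moving_feat` have already been extracted, scores each spatial location by the cosine disparity between the two feature vectors, and normalizes the result into an attention map that is large where the images disagree.

```python
import numpy as np

def cross_matching_attention(fixed_feat, moving_feat, eps=1e-8):
    """Illustrative cross-matching attention (not the paper's exact method).

    fixed_feat, moving_feat: (H, W, C) feature maps, assumed pre-extracted.
    Returns an (H, W) attention map summing to 1 that peaks at locations
    where the two feature maps disagree, i.e. the non-aligned regions.
    """
    # Per-location cosine similarity between the two feature vectors.
    num = np.sum(fixed_feat * moving_feat, axis=-1)
    den = (np.linalg.norm(fixed_feat, axis=-1)
           * np.linalg.norm(moving_feat, axis=-1) + eps)
    sim = num / den                  # in [-1, 1]; 1 = identical features
    disparity = (1.0 - sim) / 2.0    # in [0, 1]; high = feature mismatch
    # Softmax-style normalization so weights form a spatial attention map.
    w = np.exp(disparity)
    return w / w.sum()
```

A registration network could multiply such a map into its decoder features so that deformation-field refinement concentrates on mismatched regions; in the actual model this weighting would be learned rather than fixed.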