In recent years, multimodal composite guidance technology has received extensive research attention, and deep-learning-based multimodal fusion methods have achieved performance far superior to traditional approaches. In practical applications, however, fusing infrared images with high-resolution range profiles (HRRPs) still presents many difficulties. To address these challenges, this paper fully exploits the characteristics of sensor data collection and investigates the application of position priors, cross-modal attention mechanisms, and domain adaptation to multimodal object detection. A multimodal object detection algorithm, CenterNet-PK, is proposed; it introduces a position prior to deeply fuse infrared and radar feature information while alleviating data heterogeneity. A feature extraction module based on one-dimensional convolution and a bidirectional gated recurrent unit (Bi-GRU) is designed, a position-prior-based radar feature map generation algorithm is proposed, and a cascade strategy is used to deeply fuse the infrared and radar feature maps. Finally, an improved adaptive keypoint loss function is used to train the network. Experimental results show that the proposed algorithm achieves high detection accuracy and strong anti-interference ability.
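The abstract does not give implementation details, but the described radar branch (one-dimensional convolution followed by a Bi-GRU over the range profile) can be sketched as follows. This is a minimal illustration, not the authors' implementation: all layer sizes, the kernel width, and the pooling step are assumptions chosen only to show the structure.

```python
import torch
import torch.nn as nn

class HRRPFeatureExtractor(nn.Module):
    """Sketch of a 1-D conv + Bi-GRU extractor for a high-resolution
    range profile (HRRP). All hyperparameters are illustrative."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # 1-D convolution over range bins extracts local scattering patterns.
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Bi-GRU models long-range dependencies along the profile in
        # both directions; outputs are concatenated (2 * hidden dims).
        self.bigru = nn.GRU(32, hidden, batch_first=True, bidirectional=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, range_bins) raw range profile amplitudes
        h = self.conv(x.unsqueeze(1))   # (batch, 32, range_bins // 2)
        h = h.transpose(1, 2)           # (batch, seq_len, 32) for the GRU
        out, _ = self.bigru(h)          # (batch, seq_len, 2 * hidden)
        return out

# Example: 4 profiles, 256 range bins each.
x = torch.randn(4, 256)
feats = HRRPFeatureExtractor()(x)
print(feats.shape)  # torch.Size([4, 128, 128])
```

The resulting sequence features would then be projected into an image-aligned radar feature map (using the position prior) before cascade fusion with the infrared feature maps; that stage is not shown here.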