A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which carry the most important information in the image. For a pair of registered infrared and visible images, visual features are first extracted to form the input hypercomplex matrix. Second, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: convolving the amplitude spectrum of the input hypercomplex matrix with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal from the original phase and the amplitude spectrum filtered at a scale selected by minimizing saliency-map entropy. Third, the salient regions are fused with adaptive weighting fusion rules, the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS), and the fused image is obtained. Experimental results show that the presented algorithm preserves the rich spectral information of the visible image and effectively captures thermal target information at different scales of the infrared image.
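The scale-space step above can be prototyped compactly. The sketch below is a minimal single-channel stand-in, assuming an ordinary 2D FFT in place of the hypercomplex transform (the function name, scale set, and post-smoothing sigma are illustrative choices, not the paper's exact settings): it smooths the amplitude spectrum at several Gaussian scales, reconstructs with the original phase, and keeps the map with minimum entropy.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def scale_space_saliency(img, scales=(1, 2, 4, 8, 16)):
    """Frequency-domain saliency via an amplitude-spectrum scale-space.

    Simplified single-channel stand-in for the hypercomplex (HFT) version
    described in the abstract: an ordinary 2D FFT replaces the quaternion
    transform, and the scale set is illustrative.
    """
    F = np.fft.fft2(img.astype(np.float64))
    amp, phase = np.abs(F), np.angle(F)
    best_map, best_entropy = None, np.inf
    for sigma in scales:
        # Smooth the (centered) amplitude spectrum with a Gaussian kernel.
        smoothed = np.fft.ifftshift(gaussian_filter(np.fft.fftshift(amp), sigma))
        # Reconstruct the 2D signal from filtered amplitude and original phase.
        rec = np.fft.ifft2(smoothed * np.exp(1j * phase))
        smap = gaussian_filter(np.abs(rec) ** 2, 2.0)
        # Keep the scale whose saliency map has minimum Shannon entropy.
        p = smap / smap.sum()
        h = -np.sum(p[p > 0] * np.log(p[p > 0]))
        if h < best_entropy:
            best_entropy, best_map = h, smap
    return best_map / best_map.max()
```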
As an important technology in remote sensing, surface target detection aims to obtain information about surface targets, such as water, construction, vegetation and other targets of interest, through remote sensing image processing and analysis. However, the pre-collected samples of some targets from single-source images are too few to meet the needs of automatic detection in multi-scale remote sensing images, so target detection remains a challenge. To address this problem, a novel target detection method based on transfer learning from multiple sources is proposed for surface targets in remote sensing images. The most remarkable characteristic of transfer learning is that it can employ knowledge from related domains to help perform learning tasks in the target domain. By using different sources of knowledge, transfer learning can transfer and share information between similar domains. The proposed method first locates the surface target area and then involves target samples from different sources in the learning, so that similar knowledge conducive to the target can be obtained. The prior knowledge from the multiple sources is then transferred to the new target images for detection. The experimental results show that surface target detection from multiple sources with the proposed method outperforms detection from a single source, and that detection accuracy is greatly improved compared with previous methods, demonstrating the advantage of our method with multiple sources.
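To make the multi-source transfer step concrete, the following minimal sketch pools samples from several source domains with a small labelled target set and re-weights them boosting-style. The abstract does not name its transfer algorithm, so the TrAdaBoost-like scheme, function names, and hyper-parameters here are assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def multi_source_transfer(sources, X_tgt, y_tgt, X_test, rounds=10):
    """TrAdaBoost-style instance transfer from several source domains.

    Illustrative sketch only: misclassified source samples are down-weighted
    each round, misclassified target samples are up-weighted, so the learner
    gradually focuses on source knowledge that fits the target domain.
    """
    X_src = np.vstack([X for X, _ in sources])
    y_src = np.hstack([y for _, y in sources])
    X = np.vstack([X_src, X_tgt])
    y = np.hstack([y_src, y_tgt])
    n_src, w = len(y_src), np.ones(len(y)) / len(y)
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / rounds))
    clf = None
    for _ in range(rounds):
        clf = DecisionTreeClassifier(max_depth=3)
        clf.fit(X, y, sample_weight=w)
        miss = (clf.predict(X) != y).astype(float)
        eps = np.clip(np.average(miss[n_src:], weights=w[n_src:]), 1e-6, 0.49)
        beta_tgt = eps / (1.0 - eps)
        w[:n_src] *= beta_src ** miss[:n_src]     # shrink mismatched source samples
        w[n_src:] *= beta_tgt ** (-miss[n_src:])  # boost hard target samples
        w /= w.sum()
    return clf.predict(X_test)                    # last-round learner, for brevity
```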
Automatic target detection in infrared images is an active research field in national defense technology. We propose a new saliency-based infrared target detection model in this paper, based on the fact that human attention is directed towards the relevant target to interpret the most promising information. For a given image, the convolution of the image log amplitude spectrum with a low-pass Gaussian kernel of an appropriate scale is equivalent to an image saliency detector in the frequency domain. At the same time, extracted orientation and shape features are combined into a saliency map in the spatial domain. Our model identifies salient targets from a final saliency map, generated by integrating the saliency maps from the frequency and spatial domains. Finally, the size of each salient target is obtained by maximizing the entropy of the final saliency map. Experimental results show that the proposed model can highlight both small and large salient regions in infrared images and inhibit repeated distractors in cluttered images, while its detection efficiency is significantly improved.
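The integration of the two saliency maps can be sketched as follows. Sobel gradient energy stands in for the paper's orientation and shape features, and the two maps are combined with equal weights; both choices are assumptions, as the abstract leaves them unspecified.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def fused_saliency(img, sigma=3.0):
    """Integrate frequency- and spatial-domain saliency maps.

    Hedged sketch: the frequency map follows the smoothed log amplitude
    spectrum described above, while Sobel gradient energy stands in for
    the paper's orientation and shape features.
    """
    f = img.astype(np.float64)
    F = np.fft.fft2(f)
    # Convolve the log amplitude spectrum with a low-pass Gaussian kernel.
    log_amp = np.log1p(np.abs(F))
    smooth = np.fft.ifftshift(gaussian_filter(np.fft.fftshift(log_amp), sigma))
    rec = np.fft.ifft2(np.expm1(smooth) * np.exp(1j * np.angle(F)))
    freq_map = gaussian_filter(np.abs(rec) ** 2, 2.0)
    # Spatial-domain cue: smoothed gradient (edge/orientation) energy.
    spat_map = gaussian_filter(np.hypot(sobel(f, axis=1), sobel(f, axis=0)), 2.0)
    norm = lambda m: (m - m.min()) / (m.max() - m.min() + 1e-12)
    return 0.5 * norm(freq_map) + 0.5 * norm(spat_map)
```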
Biometric recognition aims to identify and predict new personal identities based on existing knowledge. Because using multiple biometric traits of an individual allows more information to be exploited for recognition, multi-biometrics has been shown to produce higher accuracy than single biometrics. However, a common problem with traditional machine learning is that the training and test data should lie in the same feature space and have the same underlying distribution; if the distributions and features differ between training and future data, model performance often drops. In this paper, we propose a transfer learning method for face recognition on bimodal biometrics. The training and test samples of bimodal biometric images consist of visible-light face images and infrared face images. Our algorithm transfers knowledge across feature spaces, relaxing the assumptions of a shared feature space and a shared underlying distribution by automatically learning a mapping between two different but somewhat similar sets of face images. Experiments on the face images show that the accuracy of face recognition is greatly improved by the proposed method compared with previous methods, demonstrating its effectiveness and robustness.
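One simple way to realize such a cross-feature-space mapping is a regularized linear projection between paired visible and infrared features. The ridge-regression formulation, function names, and regularizer below are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def learn_cross_modal_map(X_vis, X_ir, lam=1e-2):
    """Ridge-regression mapping from visible-light to infrared face features.

    Illustrative stand-in for the learned mapping between the two feature
    spaces: W minimizes ||X_vis @ W - X_ir||^2 + lam * ||W||^2 over paired
    training images.
    """
    d = X_vis.shape[1]
    return np.linalg.solve(X_vis.T @ X_vis + lam * np.eye(d), X_vis.T @ X_ir)

def identify(probe_vis, gallery_ir, W):
    """Nearest-neighbour match of a mapped visible probe in the IR gallery."""
    dists = np.sum((gallery_ir - probe_vis @ W) ** 2, axis=1)
    return int(np.argmin(dists))
```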