With the development of sensor technologies, imaging technology has advanced rapidly, and image processing has consequently become widespread in applications such as video surveillance, medical diagnosis, remote sensing, and object tracking. As a sub-field of image processing, image fusion is one of the most studied topics. The aim of image fusion is to obtain an integrated image that contains more information; such an image is more conducive to a human or a machine understanding and mining the information it contains. Among the many kinds of image fusion, infrared (IR) and visible (VIS) image fusion is one of the most valuable forms of multisource image fusion. When the same scene is imaged with both IR and VIS systems, more information can be obtained, but more redundant information is also generated. The IR sensor acquires the thermal radiation of objects in a scene, so an object can still be detected when lighting conditions are poor, while the image acquired by a VIS sensor has richer spectral information, clearer texture details, and higher spatial resolution. Thus, the scene can be described more completely by integrating the IR and VIS images into one image; the scene can then be readily understood by observers, and its information easily perceived. In this paper, an effective IR and VIS image fusion method via the non-subsampled shearlet transform (NSST) and a pulse-coupled neural network (PCNN) in the multi-scale morphological gradient (MSMG) domain is proposed. First, the low-frequency sub-image and high-frequency sub-images are obtained through the NSST. Then, the low-frequency and high-frequency sub-images are fused via an MSMG-domain PCNN (MSMG-PCNN) strategy. Finally, the fused image is reconstructed by the inverse NSST. Qualitative and quantitative evaluations demonstrate that the proposed MSMG-PCNN-NSST algorithm performs effectively in most cases.
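As a rough illustration of the multi-scale morphological gradient named above, the sketch below computes an MSMG feature map with SciPy morphology. The number of scales, the structuring-element sizes, and the per-scale weights are illustrative assumptions rather than the paper's exact settings, and the NSST decomposition and PCNN fusion stages are omitted.

```python
import numpy as np
from scipy import ndimage

def msmg(img, num_scales=4):
    """Multi-scale morphological gradient (MSMG): weighted sum of
    morphological gradients computed with structuring elements of
    increasing size (scale count and weights are assumptions)."""
    img = np.asarray(img, dtype=np.float64)
    grad = np.zeros_like(img)
    for s in range(1, num_scales + 1):
        size = 2 * s + 1                      # structuring element grows with scale
        dilated = ndimage.grey_dilation(img, size=(size, size))
        eroded = ndimage.grey_erosion(img, size=(size, size))
        weight = 1.0 / size                   # assumed weighting: larger scales count less
        grad += weight * (dilated - eroded)   # morphological gradient at this scale
    return grad

# Example: MSMG maps of the IR and VIS inputs could serve as the stimulus
# that drives a PCNN-based fusion rule for each NSST sub-band.
ir = np.random.rand(128, 128)
vis = np.random.rand(128, 128)
msmg_ir, msmg_vis = msmg(ir), msmg(vis)
```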
Traditional methods usually show low detection performance because background statistical characteristics are estimated inaccurately and are contaminated by anomalies. In response to these problems, this paper proposes a novel anomaly detection method based on collaborative representation over a dictionary subspace. Since no prior information about the background or the anomalies is available, the proposed method first applies mean shift clustering to the original hyperspectral image (HSI). Then, some pixels from each cluster are selected as background dictionary atoms, yielding a representative overcomplete background dictionary. This dictionary avoids contamination by most anomaly pixels. Finally, collaborative representation is performed to detect anomalous targets using this dictionary instead of the local neighborhood of the pixel under test. The method does not require explicit background modeling, which improves the accuracy of the calculation. Experimental results show that the proposed method outperforms other state-of-the-art methods.
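As a minimal sketch of the collaborative-representation step described above, the code below represents each test pixel as a regularized linear combination of background dictionary atoms and uses the reconstruction residual as the anomaly score. The dictionary construction (mean-shift clustering followed by atom selection per cluster) is only outlined, and the regularization parameter, atom counts, and random selection rule are assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import MeanShift

def build_background_dictionary(hsi, atoms_per_cluster=20):
    """Cluster all pixels with mean shift and pick a few pixels from each
    cluster as background dictionary atoms (illustrative selection rule)."""
    rows, cols, bands = hsi.shape
    pixels = hsi.reshape(-1, bands)
    labels = MeanShift().fit_predict(pixels)
    atoms = []
    for lbl in np.unique(labels):
        members = pixels[labels == lbl]
        idx = np.random.choice(len(members),
                               min(atoms_per_cluster, len(members)),
                               replace=False)
        atoms.append(members[idx])
    return np.vstack(atoms).T                      # shape: (bands, n_atoms)

def crd_score(pixel, dictionary, lam=1e-2):
    """Collaborative representation: ridge-regularized least squares over the
    global background dictionary; the residual norm is the anomaly score."""
    D = dictionary
    alpha = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ pixel)
    return np.linalg.norm(pixel - D @ alpha)

# Example on synthetic data (rows, cols, bands)
hsi = np.random.rand(32, 32, 50)
D = build_background_dictionary(hsi)
scores = np.array([crd_score(p, D) for p in hsi.reshape(-1, 50)]).reshape(32, 32)
```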