In this paper, an infrared and color image fusion algorithm based on guided filtering is proposed. First, the intensity and chromaticity components of the color image are extracted in a color space such as HSV. Second, to avoid edge blurring and reduce computational cost, guided filtering is used to decompose the intensity components of the infrared and color images into their respective base and detail layers. The two base layers and the two detail layers are then fused separately: clear and complementary regions are distinguished by the sum of gradients, an initial base layer is obtained by preliminary fusion, and the intensity level of the fused base layer is adjusted to match that of the color image. The two detail layers are fused by selecting, at each pixel, the value with the larger absolute gradient, and the intensity component of the fused image is obtained by inverse transformation. Finally, the fused color image is reconstructed by merging the fused intensity component with the original chromaticity components. The proposed algorithm was tested on multiple sets of images against several state-of-the-art algorithms, and the results show that it achieves good image visibility, stable color, and fast fusion speed.
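The base/detail decomposition described above can be sketched with a standard guided filter (He et al.) applied self-guided to each intensity image; the radius and regularization values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Edge-preserving guided filter: I is the guidance image, p the input."""
    size = 2 * radius + 1
    mean_I = uniform_filter(I, size)
    mean_p = uniform_filter(p, size)
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)
    var_I = corr_II - mean_I ** 2          # local variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p     # local covariance of guide and input
    a = cov_Ip / (var_I + eps)             # per-window linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def decompose(img, radius=4, eps=1e-3):
    """Split an intensity image into a smooth base layer and a detail layer."""
    base = guided_filter(img, img, radius, eps)
    return base, img - base
```

By construction the base and detail layers sum back to the input, so the fused intensity can be rebuilt by adding the separately fused layers.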
KEYWORDS: Cameras, 3D modeling, RGB color model, Statistical modeling, 3D metrology, Visual process modeling, Stereo vision systems, Clouds, 3D vision, Light sources and illumination
To address the limitations of single-camera color measuring systems, a method of 3D object color measurement based on convergent binocular stereo vision is proposed. From a pair of 2D images, a 3D point cloud model is reconstructed, in which the color of each point is restored by fusing the colors of the corresponding points in the 2D images. Using color charts with 240 and 24 colors, an 11-term polynomial is trained to convert colors from image RGB to CIELAB. An experiment was conducted to test the proposed method, and the results show that the color prediction accuracy of the proposed model is satisfactory.
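The polynomial color-conversion step can be sketched as a least-squares fit over an expanded RGB basis. The exact 11 terms used in the paper are not given here; the basis below (linear, cross, square, triple-product, and constant terms) is a common choice and is an assumption:

```python
import numpy as np

def poly11_terms(rgb):
    """Expand an (N, 3) RGB array into an assumed 11-term polynomial basis:
    R, G, B, RG, RB, GB, R^2, G^2, B^2, RGB, 1."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([r, g, b, r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2, r * g * b,
                     np.ones_like(r)], axis=1)

def fit_rgb_to_lab(rgb_train, lab_train):
    """Least-squares fit of the 11x3 coefficient matrix on chart patches."""
    M, *_ = np.linalg.lstsq(poly11_terms(rgb_train), lab_train, rcond=None)
    return M

def predict_lab(rgb, M):
    """Map camera RGB values to predicted CIELAB values."""
    return poly11_terms(rgb) @ M
```

In practice the coefficient matrix would be trained on the 240-patch chart and validated on the 24-patch chart, with accuracy reported as a CIELAB color difference.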
Video smoke detection benefits life safety and environmental protection, and its early warning is of great importance. To address the disadvantages of traditional smoke detectors, a video smoke detection method based on motion, color, and texture features is proposed. First, the motion area is extracted with an improved ViBe algorithm. Then, the suspected smoke region is identified in the CIELAB color space and segmented using a color filtering method. Finally, uniform local binary patterns and gray-level co-occurrence matrix features are extracted from the image within the suspected region and used to form the input vector of a machine learning classifier for recognizing smoke. The classifier was tested on 400 images, and the results show that the detection system based on the random forest algorithm has the best performance and that the selected smoke features achieve high recognition accuracy.
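The texture-feature step can be sketched with a minimal gray-level co-occurrence matrix in NumPy. The quantization level, pixel offset, and the three statistics chosen (contrast, energy, homogeneity) are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def glcm_features(gray, levels=8, dx=1, dy=0):
    """Contrast, energy, and homogeneity from a gray-level co-occurrence
    matrix at pixel offset (dx, dy); gray is a uint8 image."""
    q = (gray.astype(np.int64) * levels) // 256   # quantize to `levels` bins
    h, w = q.shape
    a = q[:h - dy, :w - dx]                       # reference pixels
    b = q[dy:, dx:]                               # neighbors at the offset
    P = np.zeros((levels, levels))
    np.add.at(P, (a.ravel(), b.ravel()), 1)       # co-occurrence counts
    P /= P.sum()                                  # normalize to probabilities
    i, j = np.indices((levels, levels))
    contrast = float(np.sum(P * (i - j) ** 2))
    energy = float(np.sum(P ** 2))
    homogeneity = float(np.sum(P / (1.0 + np.abs(i - j))))
    return np.array([contrast, energy, homogeneity])
```

Features like these, concatenated with a uniform-LBP histogram from the same region, would form the input vector passed to the random forest classifier.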