1. INTRODUCTION

For underwater operations, underwater image enhancement has promising applications. However, image quality is severely degraded in the underwater environment, which increases the difficulty of underwater target detection and recognition [1]. With the advent of high-definition underwater camera equipment, the quality of images captured underwater is steadily improving. Nevertheless, because of complex underwater conditions, problems such as low overall contrast, color casts and blurred details remain, so underwater images still need to be enhanced [2, 3]. Many well-performing underwater image enhancement methods have been proposed. To eliminate the blue-green cast caused by attenuation of ambient light, Zeba Patel et al. [4] first adjusted the overall color distribution of the image using the characteristics of the LAB color space, and then sharpened the underwater image to enhance edges distorted during color balancing. Ancuti C et al. [5] used multi-scale fusion-based image processing to increase the visibility of various underwater videos and images. Yu H et al. [6] first used homomorphic filtering to remove color deviations, then computed the difference between light and dark channels using dual transmission maps, and finally processed the image with dual-image wavelet fusion to produce the result. This paper proposes a multi-input fusion method for underwater image processing that combines an improved dark channel prior (DCP) with the CLAHE algorithm and introduces homomorphic filtering (HF). The aim is to improve the contrast of underwater images, preserve their information content, and significantly improve image quality, thereby enhancing underwater images and providing a basis for subsequent underwater work.
2. RELATED WORK

For an image, the contrast in different areas may vary greatly, so using a single histogram for adjustment is clearly not the best choice [7]. Adaptive Histogram Equalization (AHE) was proposed to solve this problem. Because AHE can amplify noise, Zuiderveld introduced CLAHE [8], which clips the histogram at a contrast threshold to limit the effect of noise on the image. To improve computational speed and to remove the blocking artifacts caused by tile-wise processing, bilinear interpolation is applied on top of this. Since both underwater optical images and images taken on foggy days suffer reduced contrast and visibility due to scattering in the medium, their imaging models are similar [9]. Image defogging can therefore, in theory, be used to remove background scattering from underwater images [10]. The dark channel prior (DCP) originates from Kaiming He's CVPR paper [11]. DCP is outstanding in the field of image defogging: it first obtains the dark channel map and then refines the resulting coarse transmission map using soft matting. Homomorphic filtering uses the illumination-reflectance model of the image in the frequency domain, compressing the luminance range and enhancing contrast to improve image quality. In this paper, a high-pass filter within the homomorphic filtering algorithm removes uneven illumination, improves image visibility, and lays the foundation for the next step of image enhancement.

3. PROPOSED ALGORITHM

This paper presents a multi-input fusion underwater image enhancement algorithm that aims to improve image contrast, restore color information, and reduce chromatic aberration while enhancing image details.
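As a concrete illustration of the dark-channel step described above, the following sketch computes the per-pixel channel minimum followed by a patch-wise minimum filter. The parameter name `patch` is illustrative, and the soft-matting refinement of the transmission map is omitted:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel of an RGB image: per-pixel minimum over the three
    channels, followed by a patch-wise minimum filter (He et al. [11]).
    `img` is an HxWx3 float array; `patch` is an illustrative window size.
    The soft-matting refinement of the transmission map is omitted here."""
    chan_min = img.min(axis=2)                   # min over R, G, B
    pad = patch // 2
    padded = np.pad(chan_min, pad, mode='edge')  # replicate borders
    windows = np.lib.stride_tricks.sliding_window_view(padded, (patch, patch))
    return windows.min(axis=(2, 3))              # local minimum filter
```

For a haze-free outdoor image patch, this map is close to zero almost everywhere, which is the statistical observation the prior rests on.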
In this algorithm, the underwater image is first channel-separated and defogged using the DCP algorithm: the minimum value of the pixels over the three RGB channels is stored in a grayscale map of the same size as the original image, and this grayscale map is then minimum-filtered. The defogged result is converted from the RGB color space to the LAB color space, in which the chromaticity information and the luminance information are independent. Next, CLAHE is applied to the luminance channel to enhance contrast while the chromaticity information is retained, and the result is converted back to RGB color space. Meanwhile, homomorphic filtering is applied to the defogged image to correct uneven illumination. Finally, the two enhanced RGB images are fused under an adaptive Euclidean norm. The block diagram of the algorithm is shown in Figure 1. In the final step, the images are fused in RGB color space using the Euclidean norm [12]. In the fusion equation, δ is the fusion coefficient in the range [0.5, 0.95]; RH, GH and BH are the three channel values of the image after HF processing, and RL, GL and BL are the three channel values after CLAHE processing. The fusion coefficient δ is chosen so that the mean value of each channel of the fused image lies in the range [128−5, 128+5]; as δ increases, the image becomes brighter [13].

4. EXPERIMENT

4.1 Quantitative metrics

Peak Signal-to-Noise Ratio (PSNR), Mean Square Error (MSE), and image entropy [14] are used as criteria for judging image quality.
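Returning to the fusion step of Section 3, a minimal sketch follows. The per-channel form sqrt(δ·H² + (1−δ)·L²) and the simple linear search over δ are assumptions made here for illustration, since the exact equation of reference 12 is not reproduced in this text:

```python
import numpy as np

def fuse_euclidean(img_hf, img_clahe, target=128.0, tol=5.0):
    """Sketch of the adaptive Euclidean-norm fusion. The per-channel form
    sqrt(delta*H^2 + (1-delta)*L^2) and the linear search over delta are
    assumptions for illustration; the exact equation is given in ref. 12.

    img_hf: HF-processed RGB image; img_clahe: CLAHE-processed RGB image.
    delta is searched in [0.5, 0.95] until the fused mean is near `target`."""
    h = img_hf.astype(np.float64)
    l = img_clahe.astype(np.float64)
    best = None
    for delta in np.arange(0.5, 0.9501, 0.01):
        fused = np.sqrt(delta * h ** 2 + (1.0 - delta) * l ** 2)
        diff = abs(fused.mean() - target)
        if best is None or diff < best[0]:
            best = (diff, fused, delta)
        if diff <= tol:                      # mean within [128-5, 128+5]
            break
    _, fused, delta = best
    return np.clip(fused, 0, 255).astype(np.uint8), delta
```

Because larger δ gives more weight to the brighter HF branch under the square root, the fused image brightens as δ increases, which matches the behaviour noted above.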
PSNR is a widely used metric for objective image evaluation; it assesses the error between corresponding pixel points. Because it does not fully account for the visual characteristics of the human eye, whose perception of a region is often influenced by adjacent regions, the evaluation result can be inconsistent with subjective human perception [15]. PSNR is calculated as PSNR = 10·log10(255² / MSE), where the mean square error is MSE = (1/(M·N)) Σᵢ Σⱼ [f(i, j) − g(i, j)]². Here M and N are the width and height of the image, i and j index the image pixels, f(i, j) is the gray value of the original image, and g(i, j) is the gray value of the improved image; a larger PSNR value indicates less image distortion [14].

4.2 Simulation

In this paper, eight underwater images are selected, including bluish images caused by the absorption of red light by the water body and greenish images caused by plankton, as shown in Figure 2. First, the improved DCP algorithm is used to defog the underwater images; Figure 2b shows that it defogs green water bodies particularly well. Figures 2c and 2d are then obtained from the defogged images: Figure 2c transfers the result of Figure 2b from RGB to LAB color space and applies CLAHE to the L channel to improve contrast, while Figure 2d applies HF to the defogged image of Figure 2b to correct uneven illumination. Finally, Figure 2e is the result of fusing Figures 2c and 2d using the Euclidean norm. In addition, the present method is compared with the algorithms proposed in several typical papers, as shown in Figure 3. Figure 3b shows the multi-scale fusion method proposed by Ancuti C et al. [5].
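Under the definitions above, MSE and PSNR can be computed directly:

```python
import numpy as np

def mse(f, g):
    """Mean square error between original image f and improved image g,
    both M x N grayscale arrays of the same size."""
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    return np.mean((f - g) ** 2)

def psnr(f, g, max_val=255.0):
    """Peak signal-to-noise ratio in dB; a larger value means less
    distortion. max_val is the peak gray level (255 for 8-bit images)."""
    err = mse(f, g)
    if err == 0:
        return float('inf')                  # identical images
    return 10.0 * np.log10(max_val ** 2 / err)
```

For example, a uniform gray-level error of 10 gives MSE = 100 and PSNR ≈ 28.13 dB for 8-bit images.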
Figure 3c shows the haze removal algorithm proposed by Carlevaris-Bianco N [16]. Figure 3d shows the haze removal enhancement algorithm proposed by Chiang J Y [17]. Figure 3e shows the automated preprocessing filter proposed by Bazeille S [18], which reduces underwater perturbations and improves image quality. Figure 3f shows the algorithm proposed in this paper.

4.3 Results

The results in Figure 3 show that the methods of Bazeille S and Chiang J Y are less effective at enhancing the green-water images, while Ancuti C's method removes the blue-green cast well and retains detail. Although Bazeille's method can also remove the blue-green cast of underwater images, the images become oversaturated, causing color distortion. The enhancement method proposed in this paper effectively removes blue-green illumination. The DCP, HF and CLAHE algorithms are complementary in color recovery and image filtering; the method in this paper preserves image detail while improving brightness, and further improves contrast and color vividness. Table 1 lists the quantitative metrics of these methods for the four images processed in Figure 3. A further problem with the commonly used objective evaluation methods is that they do not fully consider the human visual system, which is unreasonable for images whose final judge is the human eye [19]. Therefore, this paper also adopts the underwater color image quality evaluation metric (UCIQE) proposed by Yang et al. [20] and the underwater image quality measure (UIQM) proposed by Panetta et al. [21]. Quantitative evaluation shows that the image enhancement method proposed in this paper yields better entropy, PSNR and MSE values, and most of its metrics are comparable to those of the classical algorithms.
After processing, some images show clear advantages in metrics such as PSNR, UCIQE and UIQM, indicating that the enhanced images have higher contrast and richer color than those produced by the other algorithms.

Table 1. The five quantitative metrics of the images processed with the different methods.
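UCIQE has the structure of a weighted sum of chroma standard deviation, luminance contrast and mean saturation. The sketch below is a rough approximation of that structure only: it substitutes a simple opponent colour space and HSV-style saturation for the exact CIELab quantities of Yang et al., so its values are indicative rather than reference-grade:

```python
import numpy as np

def uciqe_approx(rgb):
    """Rough approximation of UCIQE's structure: a weighted sum of chroma
    std-dev, luminance contrast, and mean saturation. Uses a simple opponent
    colour space and HSV saturation instead of exact CIELab, so the values
    are indicative only. `rgb` is an HxWx3 float array in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = 0.299 * r + 0.587 * g + 0.114 * b          # luminance proxy
    a = r - g                                        # opponent axes, used
    bb = 0.5 * (r + g) - b                           # as a proxy for a*, b*
    chroma = np.sqrt(a ** 2 + bb ** 2)
    sigma_c = chroma.std()                           # chroma spread
    lo, hi = np.percentile(lum, [1, 99])
    con_l = hi - lo                                  # luminance contrast
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    sat = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-6), 0.0)
    mu_s = sat.mean()                                # mean HSV saturation
    # weighting coefficients as reported by Yang and Sowmya (2015)
    return 0.4680 * sigma_c + 0.2745 * con_l + 0.2576 * mu_s
```

A flat gray image scores zero under all three terms, while a contrasty, colourful image scores higher, which is the qualitative behaviour the metric is designed to capture.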
5. CONCLUSION

The algorithm in this paper shows limitations when processing images of very deep scenes captured under artificial light, in which a blue cast remains even though some enhancement is obtained. In addition, very distant parts of the scene cannot be reliably recovered when the illumination is poor; the recovery of distant objects and regions is likewise a limitation of this method. Nevertheless, because the improved algorithm enhances the image while defogging, it can effectively improve image contrast and preserve image color, with a visible improvement in quality. The algorithm further lays a foundation for underwater target identification and marine resource exploration.

ACKNOWLEDGEMENT

This paper is funded by the institution-ground cooperation project of Dinghai District, Zhoushan City (2021C31004), and by the 2022 Zhejiang University Students' science and technology innovation activity plan and new talent plan (2022R411A032, 2022R411A034).

REFERENCES

Ancuti, C. O., Ancuti, C., De Vleeschouwer, C. and Bekaert, P.,
"Color balance and fusion for underwater image enhancement," IEEE Transactions on Image Processing, 27(1), 379–393 (2018). https://doi.org/10.1109/TIP.83

Marini, S., Fanelli, E., Sbragagli, V., Azzurro, E., Del Rio Fernandez, J. D. R. and Aguzzi, J., "Tracking fish abundance by underwater image recognition," Scientific Reports, 8(1), 13748 (2018). https://doi.org/10.1038/s41598-018-32089-8

Ji, J., Li, Y. and Li, Y., "Current trends and prospects of underwater image processing," in International Symposium on Artificial Intelligence and Robotics, 223–228 (2017).

Patel, Z., Desai, C., Tabib, R. A., Bhat, M., Patil, U. and Mudengudi, U., "Framework for underwater image enhancement," Procedia Computer Science, 171, 491–497 (2020). https://doi.org/10.1016/j.procs.2020.04.052

Ancuti, C., Ancuti, C. O., Haber, T. and Bekaert, P., "Enhancing underwater images and videos by fusion," in IEEE Conference on Computer Vision and Pattern Recognition, 81–88 (2012).

Yu, H., Li, X., Lou, Q., Lei, C. and Liu, Z., "Underwater image enhancement based on DCP and depth transmission map," Multimedia Tools and Applications, 79(27-28), 20373–20390 (2020). https://doi.org/10.1007/s11042-020-08701-3

Zhang, L., Pan, Y. and Zhang, X., "Improved method for image enhancement based on histogram equalization," Electronics World, (17), 99–100 (2013).

Zuiderveld, K., "Contrast limited adaptive histogram equalization," Elsevier, 474–485 (1994).

Wang, R., "The research of single image recovery in fog and underwater," (2014).

Yang, A., Deng, J., Wang, J. and He, Y., "Underwater image restoration based on color cast removal and dark channel priori," Journal of Electronics & Information Technology, 37(11), 2541–2547 (2015).

He, K., Sun, J. and Tang, X., "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), 2341–2353 (2011). https://doi.org/10.1109/TPAMI.2010.168

Xue, W. and Mou, X., "Image quality assessment with mean squared error in a log based perceptual response domain," in 2014 IEEE China Summit & International Conference on Signal and Information Processing, 315–319 (2014).

Ma, J., Fan, X., Yang, S., Zhang, X. and Zhu, X., "Contrast limited adaptive histogram equalization-based fusion in YIQ and HSI color spaces for underwater image enhancement," International Journal of Pattern Recognition and Artificial Intelligence, 32(07), 1854018 (2018). https://doi.org/10.1142/S0218001418540186

Hore, A. and Ziou, D., "Image quality metrics: PSNR vs. SSIM," in 2010 20th International Conference on Pattern Recognition, 2366–2369 (2010).

Zhu, W., Wang, G., Pan, Z. and Hou, G., "Motion blurred image blind deconvolution based on multichannel nonlinear diffusion term," Laser & Optoelectronics Progress, 55(7), 197–205 (2018).

Carlevaris-Bianco, N., Mohan, A. and Eustice, R. M., "Initial results in underwater single image dehazing," in 2010 OCEANS MTS/IEEE SEATTLE, 1–8 (2010).

Chiang, J. and Chen, Y.-C., "Underwater image enhancement by wavelength compensation and dehazing," IEEE Transactions on Image Processing, 21(4), 1756–1769 (2012). https://doi.org/10.1109/TIP.2011.2179666

Bazeille, S., Quidu, I., Jaulin, L. and Malkasse, J. P., "Automatic underwater image pre-processing," in Proceedings of CMM'06, (2006).

Di, H. and Liu, X., "Image fusion quality assessment based on structural similarity," Acta Photonica Sinica, (5), 766–771 (2006).

Yang, M. and Sowmya, A., "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, 24(12), 6062–6071 (2015). https://doi.org/10.1109/TIP.2015.2491020

Panetta, K., Gao, C. and Agaian, S., "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, 41(3), 541–551 (2016). https://doi.org/10.1109/JOE.2015.2469915