Open Access Paper
28 December 2022 Multi-input fusion underwater image enhancement technology
Huandi Du, Xiaojun Wang, Lili Li, Peng Chen
Proceedings Volume 12506, Third International Conference on Computer Science and Communication Technology (ICCSCT 2022); 125063T (2022) https://doi.org/10.1117/12.2662522
Event: International Conference on Computer Science and Communication Technology (ICCSCT 2022), 2022, Beijing, China
Abstract
Images are one of the important carriers of underwater information. This paper proposes a multi-input fusion underwater image enhancement algorithm to address the low contrast and color degradation introduced by the underwater environment during optical imaging. First, dark channel prior defogging is used to remove the non-uniform turbidity in the image and to balance its color; next, contrast-limited adaptive histogram equalization is applied to the luminance component, with uniform clipping, to further improve contrast; in parallel, homomorphic filtering is used to correct uneven illumination. Finally, the resulting outputs are weighted with the Euclidean norm to obtain the fused image. Experiments show that images enhanced by this method appear clearer to the naked eye and their target information is more prominent.

1.

INTRODUCTION

Underwater image enhancement has promising applications in underwater operations. However, image quality degrades severely in the underwater environment, which makes underwater target detection and recognition more difficult [1]. With the advent of high-definition underwater camera equipment, the quality of images obtained from the underwater environment keeps improving. Nevertheless, because of the complex underwater conditions, problems such as low overall contrast, color degradation and blurred details remain, so underwater images still need to be enhanced [2, 3].

Many methods with good performance have been proposed for underwater image enhancement. To eliminate the blue-green cast caused by the attenuation of atmospheric light, Zeba Patel et al. [4] first adjusted the overall color distribution of the image using the characteristics of the LAB color space, and then sharpened the underwater image to enhance the edges distorted during color balancing. Ancuti C et al. [5] used multi-scale fusion-based image processing to increase the visibility of various underwater videos and images. Yu H et al. [6] first used homomorphic filtering to remove color deviations, then computed the difference between the light and dark channels using dual transmission maps, and finally processed the image with dual-image wavelet fusion to produce the result.

This paper proposes a multi-input fusion method for underwater image processing that uses an improved dark channel prior (DCP) and the CLAHE algorithm, and additionally introduces homomorphic filtering (HF). The aim is to improve the contrast of underwater images, preserve the information they contain, and significantly improve image quality, thereby enhancing underwater images and providing a basis for subsequent underwater work.

2.

RELATED WORK

For a single image, the contrast in different areas may vary greatly, so adjusting it with one global histogram is clearly not the best choice [7]. Adaptive histogram equalization (AHE) was proposed to solve this problem. Because AHE sometimes amplifies noise, Zuiderveld introduced CLAHE [8], which sets a contrast (clip) threshold to limit the effect of noise on the image. In addition, to improve computational speed and to remove the blocking artifacts at tile boundaries caused by partitioning the image, bilinear interpolation is applied on top of this.
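For reference, the following is a minimal sketch of applying CLAHE with OpenCV; the clip limit, tile grid and file name are illustrative choices, not parameters reported in this paper.

```python
import cv2

# Minimal CLAHE sketch (OpenCV). clipLimit and tileGridSize are
# illustrative values, not the parameters used in this paper.
gray = cv2.imread("underwater.png", cv2.IMREAD_GRAYSCALE)

# clipLimit caps each tile's histogram to limit noise amplification;
# the tiles are then blended with bilinear interpolation internally.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)
cv2.imwrite("underwater_clahe.png", enhanced)
```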

Both underwater optical images and images taken on foggy days suffer reduced contrast and visibility because of scattering in the medium, so their imaging models are similar [9]. Image defogging can therefore, in principle, be used to remove background scattering from underwater images [10]. The dark channel prior (DCP) is mainly derived from Kaiming He's CVPR paper [11]. DCP performs outstandingly in image defogging: it first computes the dark channel map and then refines the resulting coarse transmission map with soft matting.
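As an illustration of the prior's first steps only, the sketch below computes a dark channel map and a coarse transmission estimate. The patch size, the ω weight and the assumption that the background light A has already been estimated are illustrative, and the soft-matting refinement is omitted.

```python
import cv2
import numpy as np

def dark_channel(img_bgr, patch=15):
    """Per-pixel minimum over the three channels, followed by a local
    minimum filter (erosion) over a patch x patch window."""
    min_rgb = np.min(img_bgr, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def coarse_transmission(img_bgr, A, omega=0.95, patch=15):
    """Coarse transmission t = 1 - omega * dark_channel(I / A).
    A is the background light, assumed estimated elsewhere
    (e.g. A = np.array([200.0, 210.0, 220.0]) in BGR order)."""
    normed = img_bgr.astype(np.float64) / A
    return 1.0 - omega * dark_channel(normed, patch)
```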

Homomorphic filtering works on the illumination-reflectance model of the image in the frequency domain, compressing the luminance range and enhancing contrast to improve image quality. In this paper, a high-pass filter applied within the homomorphic filtering algorithm removes uneven illumination, improves image visibility, and lays the foundation for the next step of image enhancement.
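A minimal sketch of a frequency-domain homomorphic filter is given below. The Gaussian high-emphasis transfer function and its parameters (gamma_l, gamma_h, c, d0) are common textbook choices, not the exact values used in this paper.

```python
import numpy as np

def homomorphic_filter(gray, gamma_l=0.5, gamma_h=1.5, c=1.0, d0=30.0):
    """Sketch of homomorphic filtering: attenuate low frequencies
    (illumination) and boost high frequencies (reflectance).
    gamma_l, gamma_h, c and d0 are illustrative parameters."""
    img = np.log1p(gray.astype(np.float64))            # log domain
    F = np.fft.fftshift(np.fft.fft2(img))              # centered spectrum
    rows, cols = gray.shape
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2             # squared distance from center
    H = (gamma_h - gamma_l) * (1 - np.exp(-c * D2 / d0 ** 2)) + gamma_l
    out = np.real(np.fft.ifft2(np.fft.ifftshift(H * F)))
    out = np.expm1(out)                                # back from log domain
    return np.clip(out / out.max() * 255, 0, 255).astype(np.uint8)
```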

3.

PROPOSED ALGORITHM

This paper presents a multi-input fusion underwater image enhancement algorithm that improves contrast, restores color information, and reduces chromatic aberration while enhancing image details. The underwater image is first separated into channels and defogged with the DCP algorithm: the minimum of the three RGB values at each pixel is stored in a grayscale map of the same size as the original image, and this grayscale map is then minimum-filtered. The defogged result is next converted from the RGB color space to the LAB color space, in which chromaticity and luminance are independent. CLAHE is applied to the luminance channel to enhance contrast while the chromaticity information is retained, and the result is converted back to RGB. In parallel, homomorphic filtering is applied to the defogged image to correct uneven illumination. Finally, the two enhanced RGB images are fused under an adaptive Euclidean norm. The block diagram of the algorithm is shown in Figure 1.

Figure 1.

Image processing flow chart.

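To make the data flow concrete, the following sketch strings the stages together under several assumptions: `dehaze` and `homomorphic` stand in for the DCP and HF steps sketched above, the CLAHE parameters are illustrative, δ is fixed rather than chosen adaptively, and the Euclidean-norm weighting is one plausible reading of the fusion equation given below.

```python
import cv2
import numpy as np

def enhance(img_bgr, dehaze, homomorphic, delta=0.7):
    """Sketch of the fusion pipeline. `dehaze` and `homomorphic` are
    assumed callables returning 8-bit images (e.g. a DCP dehazer and
    the homomorphic filter sketched earlier); delta is fixed here."""
    dehazed = dehaze(img_bgr)

    # Branch 1: CLAHE on the L channel in LAB space.
    lab = cv2.cvtColor(dehazed, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    branch_l = cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)

    # Branch 2: homomorphic filtering applied channel-wise.
    branch_h = cv2.merge([homomorphic(dehazed[:, :, k]) for k in range(3)])

    # Euclidean-norm weighting of the two branches.
    fused = np.sqrt(delta * branch_h.astype(np.float64) ** 2 +
                    (1 - delta) * branch_l.astype(np.float64) ** 2)
    return np.clip(fused, 0, 255).astype(np.uint8)
```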

In the final step of the fusion process, the Euclidean norm is used to fuse the images in the RGB color space [12], according to the following equation.

$$R = \sqrt{\delta R_H^2 + (1-\delta) R_L^2},\quad G = \sqrt{\delta G_H^2 + (1-\delta) G_L^2},\quad B = \sqrt{\delta B_H^2 + (1-\delta) B_L^2}$$

where δ is the fusion coefficient, in the range [0.5, 0.95]; R_H, G_H and B_H are the three channel values of the image after HF processing, and R_L, G_L and B_L are the three channel values after CLAHE processing. The fusion coefficient δ is chosen so that the mean value of each channel of the fused image lies in the range [128 − 5, 128 + 5]; as δ increases, the image becomes brighter [13].
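As one possible realization of this selection rule, the sketch below scans δ over [0.5, 0.95] and accepts the first value whose fused channel means all fall inside [123, 133]; the step size and the fallback to the δ whose means are closest to 128 are assumptions, not the paper's exact procedure.

```python
import numpy as np

def pick_delta(branch_h, branch_l, lo=0.5, hi=0.95, step=0.05):
    """Scan delta over [lo, hi]; return the first value for which every
    fused channel mean falls in [123, 133], otherwise the delta whose
    channel means are closest to 128. A sketch, not the paper's rule."""
    best_delta, best_err = lo, np.inf
    for delta in np.arange(lo, hi + 1e-9, step):
        fused = np.sqrt(delta * branch_h.astype(np.float64) ** 2 +
                        (1 - delta) * branch_l.astype(np.float64) ** 2)
        means = fused.reshape(-1, 3).mean(axis=0)   # per-channel means
        if np.all((means >= 123) & (means <= 133)):
            return delta
        err = np.abs(means - 128).max()
        if err < best_err:
            best_delta, best_err = delta, err
    return best_delta
```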

4.

EXPERIMENT

4.1

Quantitative metrics

Peak signal-to-noise ratio (PSNR), mean square error (MSE) and image entropy [14] are used as the criteria for judging image quality. PSNR is a widely used objective image-quality metric that measures the error between corresponding pixels. Because it does not fully account for the visual characteristics of the human eye, whose perception of a region is often influenced by adjacent regions, its results can be inconsistent with subjective human perception [15]. PSNR is calculated as follows.

$$\mathrm{PSNR} = 10 \log_{10}\!\left(\frac{255^2}{\mathrm{MSE}}\right)$$

where MSE is the mean square error, calculated as below; M and N are the width and height of the image, i and j index the pixel positions, f(i, j) is the gray value of the original image, and g(i, j) is the gray value of the improved image. A larger PSNR indicates less image distortion [14].

$$\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[f(i,j) - g(i,j)\bigr]^2$$
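For completeness, a small sketch of these metrics implemented directly from the definitions above, assuming 8-bit images (peak value 255); the entropy is the Shannon entropy of the grayscale histogram.

```python
import numpy as np

def mse(f, g):
    """Mean squared error between reference f and enhanced g."""
    return np.mean((f.astype(np.float64) - g.astype(np.float64)) ** 2)

def psnr(f, g, peak=255.0):
    """PSNR in dB; larger values indicate less distortion."""
    return 10.0 * np.log10(peak ** 2 / mse(f, g))

def entropy(gray):
    """Shannon entropy (in bits) of an 8-bit grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```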

4.2

Simulation

Eight underwater images are selected for this paper, including images with a blue cast caused by the absorption of red light by the water body and images with a green cast caused by plankton, as shown in Figure 2. First, the improved DCP algorithm is used to defog the underwater images; Figure 2b shows that it performs particularly well on green water bodies. Figures 2c and 2d show subsequent operations on the defogged images: Figure 2c converts the result of Figure 2b from RGB to LAB color space and applies CLAHE to the L channel to improve contrast, while Figure 2d applies HF to the defogged image of Figure 2b to correct uneven illumination. Finally, Figure 2e shows the result of fusing Figures 2c and 2d using the Euclidean norm.

Figure 2.

Intermediate results of each stage of the proposed algorithm on the selected underwater images.


In addition, the present method is compared with algorithms proposed in several representative papers, as shown in Figure 3. Figure 3b shows the multi-scale fusion method of Ancuti C et al. [5]. Figure 3c shows the haze removal algorithm of Carlevaris-Bianco N [16]. Figure 3d shows the haze removal enhancement algorithm of Chiang J Y [17]. Figure 3e shows the automated pre-processing filter of Bazeille S [18], which reduces underwater perturbations and improves image quality. Figure 3f shows the algorithm proposed in this paper.

Figure 3.

Comparison of the results of the algorithm proposed in this paper with various mainstream processing methods.


4.3

Results

The results in Figure 3 show that the methods of Bazeille S and Chiang J Y are less effective on the green water images, while Ancuti C's method removes the blue-green cast well and retains details. Although Bazeille's method can also remove the blue-green cast of underwater images, it oversaturates them and distorts their colors. The enhancement method proposed in this paper effectively removes the blue-green illumination. The DCP, HF and CLAHE algorithms complement one another in color recovery and image filtering, so the proposed method preserves image details while improving brightness, and further improves contrast and color vividness. Table 1 lists the quantitative metrics of these methods for the four images processed in Figure 3. A remaining problem with the commonly used objective evaluation methods is that they do not fully consider the human visual system, which is unreasonable for images ultimately judged by the human eye [19]. Therefore, this paper also uses the underwater color image quality evaluation metric (UCIQE) proposed by Yang et al. [20] and the underwater image quality measure (UIQM) proposed by Panetta et al. [21]. The quantitative evaluation shows that the proposed enhancement method yields good entropy, PSNR and MSE values, with most metrics comparable to those of the classical algorithms. For some images, metrics such as PSNR, UCIQE and UIQM show a clear advantage after processing, indicating that the enhanced contrast and colors are richer than with the other algorithms.

Table 1.

The five quantitative metrics of the images processed with different methods.

Method | Image | PSNR | MSE | Entropy | UCIQE | UIQM
Ancuti C | a | 27.5467 | 114.3946 | 7.6475 | 0.5921 | 3.1840
Ancuti C | b | 27.9996 | 103.0648 | 7.7156 | 0.6720 | 2.9362
Ancuti C | c | 28.5432 | 90.9392 | 7.3801 | 0.5367 | 2.8241
Ancuti C | d | 28.5173 | 91.4840 | 7.1348 | 0.5388 | 3.0920
Carlevaris-Bianco N | a | 29.7759 | 68.4675 | 7.0440 | 0.4738 | 2.4360
Carlevaris-Bianco N | b | 32.6665 | 35.1905 | 6.4990 | 0.4515 | 2.7889
Carlevaris-Bianco N | c | 28.0517 | 101.8363 | 7.2300 | 0.5985 | 2.5152
Carlevaris-Bianco N | d | 28.8827 | 84.1009 | 7.1660 | 0.5462 | 3.2809
Chiang J Y | a | 29.1384 | 79.2934 | 6.8885 | 0.4466 | 1.5721
Chiang J Y | b | 29.4746 | 73.3867 | 6.6399 | 0.4922 | 2.5546
Chiang J Y | c | 28.3411 | 95.2724 | 7.5465 | 0.5794 | 2.4760
Chiang J Y | d | 28.4607 | 92.6830 | 7.1811 | 0.5792 | 2.8009
Bazeille S | a | 27.5184 | 115.1417 | 7.1896 | 0.4843 | 3.0191
Bazeille S | b | 28.0125 | 102.7605 | 7.3475 | 0.5605 | 2.8458
Bazeille S | c | 27.8513 | 106.6465 | 7.3655 | 0.5963 | 3.0359
Bazeille S | d | 28.0531 | 101.8039 | 7.4757 | 0.6025 | 2.8411
Our Method | a | 29.0319 | 81.2615 | 6.8327 | 0.4880 | 2.2807
Our Method | b | 28.2891 | 96.4186 | 7.4342 | 0.6070 | 3.1825
Our Method | c | 28.8787 | 84.1785 | 7.5290 | 0.5830 | 2.5682
Our Method | d | 29.0513 | 80.8985 | 7.1344 | 0.5642 | 2.8743

5.

CONCLUSION

The algorithm in this paper shows limitations when processing images of very deep scenes captured under artificial light: a blue cast remains even though some enhancement is obtained. In addition, very distant parts of the scene cannot be reliably recovered when illumination is poor, so the recovery of distant objects and regions is also a limitation of this method. Nevertheless, because the improved algorithm enhances the image while defogging, it effectively improves contrast and preserves color, with a visible improvement in quality to the naked eye. The algorithm thus lays a foundation for underwater target identification and marine resource exploration.

ACKNOWLEDGEMENT

This paper is funded by the institution-locality cooperation project of Dinghai District, Zhoushan City (2021C31004), and also by the 2022 Zhejiang University Students' scientific and technological innovation activity plan and new talent plan (2022R411A032, 2022R411A034).

REFERENCES

[1] Ancuti, C. O., Ancuti, C., De Vleeschouwer, C. and Bekaert, P., "Color balance and fusion for underwater image enhancement," IEEE Transactions on Image Processing, 27(1), 379–393 (2018). https://doi.org/10.1109/TIP.83

[2] Marini, S., Fanelli, E., Sbragagli, V., Azzurro, E., Del Rio Fernandez, J. D. R. and Aguzzi, J., "Tracking fish abundance by underwater image recognition," Scientific Reports, 8(1), 13748 (2018). https://doi.org/10.1038/s41598-018-32089-8

[3] Ji, J., Li, Y. and Li, Y., "Current trends and prospects of underwater image processing," International Symposium on Artificial Intelligence and Robotics, 223–228 (2017).

[4] Patel, Z., Desai, C., Tabib, R. A., Bhat, M., Patil, U. and Mudengudi, U., "Framework for underwater image enhancement," Procedia Computer Science, 171, 491–497 (2020). https://doi.org/10.1016/j.procs.2020.04.052

[5] Ancuti, C., Ancuti, C. O., Haber, T. and Bekaert, P., "Enhancing underwater images and videos by fusion," IEEE Conference on Computer Vision and Pattern Recognition, 81–88 (2012).

[6] Yu, H., Li, X., Lou, Q., Lei, C. and Liu, Z., "Underwater image enhancement based on DCP and depth transmission map," Multimedia Tools and Applications, 79(27-28), 20373–20390 (2020). https://doi.org/10.1007/s11042-020-08701-3

[7] Zhang, L., Pan, Y. and Zhang, X., "Improved method for image enhancement based on histogram equalization," Electronics World, (17), 99–100 (2013).

[8] Zuiderveld, K., "Contrast limited adaptive histogram equalization," Elsevier, 474–485 (1994).

[9] Wang, R., "The research of single image recovery in fog and underwater," (2014).

[10] Yang, A., Deng, J., Wang, J. and He, Y., "Underwater image restoration based on color cast removal and dark channel priori," Journal of Electronics & Information Technology, 37(11), 2541–2547 (2015).

[11] He, K., Sun, J. and Tang, X., "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), 2341–2353 (2011). https://doi.org/10.1109/TPAMI.2010.168

[12] Xue, W. and Mou, X., "Image quality assessment with mean squared error in a log based perceptual response domain," 2014 IEEE China Summit & International Conference on Signal and Information Processing, 315–319 (2014).

[13] Ma, J., Fan, X., Yang, S., Zhang, X. and Zhu, X., "Contrast limited adaptive histogram equalization-based fusion in YIQ and HSI color spaces for underwater image enhancement," International Journal of Pattern Recognition and Artificial Intelligence, 32(07), 1854018 (2018). https://doi.org/10.1142/S0218001418540186

[14] Hore, A. and Ziou, D., "Image quality metrics: PSNR vs. SSIM," 2010 20th International Conference on Pattern Recognition, 2366–2369 (2010).

[15] Zhu, W., Wang, G., Pan, Z. and Hou, G., "Motion blurred image blind deconvolution based on multichannel nonlinear diffusion term," Laser & Optoelectronics Progress, 55(7), 197–205 (2018).

[16] Carlevaris-Bianco, N., Mohan, A. and Eustice, R. M., "Initial results in underwater single image dehazing," OCEANS 2010 MTS/IEEE SEATTLE, 1–8 (2010).

[17] Chiang, J. Y. and Chen, Y.-C., "Underwater image enhancement by wavelength compensation and dehazing," IEEE Transactions on Image Processing, 21(4), 1756–1769 (2012). https://doi.org/10.1109/TIP.2011.2179666

[18] Bazeille, S., Quidu, I., Jaulin, L. and Malkasse, J. P., "Automatic underwater image pre-processing," Proceedings of CMM'06 (2006).

[19] Di, H. and Liu, X., "Image fusion quality assessment based on structural similarity," Acta Photonica Sinica, (5), 766–771 (2006).

[20] Yang, M. and Sowmya, A., "An underwater color image quality evaluation metric," IEEE Transactions on Image Processing, 24(12), 6062–6071 (2015). https://doi.org/10.1109/TIP.2015.2491020

[21] Panetta, K., Gao, C. and Agaian, S., "Human-visual-system-inspired underwater image quality measures," IEEE Journal of Oceanic Engineering, 41(3), 541–551 (2016). https://doi.org/10.1109/JOE.2015.2469915
Keywords: Image enhancement, Image fusion, Image processing, Image quality, Image filtering, RGB color model, Eye
