Restoration and Enhancement of Underwater Images Based on Bright Channel Prior

This paper proposes a new method for the restoration and enhancement of underwater images, inspired by the dark channel prior from the image dehazing field. First, we propose the bright channel prior for underwater environments. By estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, the underwater images are restored. Second, in order to rectify the color distortion, the restored images are equalized using a deduced histogram equalization. The experimental results show that the proposed method can enhance the quality of underwater images effectively.


Introduction
For the past several years, the field of underwater image enhancement and restoration has drawn the attention of more and more scholars. As a result of scattering and absorption, underwater images always suffer from low contrast, blur, and color distortion, so underwater image restoration and enhancement has been a challenging field. Figure 1(a) shows some pictures captured in an underwater environment, in which the quality decline is obvious.
High-quality images are needed in many fields that use underwater images to achieve specific goals, such as underwater object tracking, 3D reconstruction of underwater objects, underwater archaeology, underwater biological research, and sea floor exploration.
In order to obtain high-quality images, scholars have proposed different approaches, which can be sorted into two categories: image restoration and image enhancement. Image enhancement does not consider the physical model; it simply improves image quality through image processing methods. Image restoration is based on the physical model of image formation, but it is not good at dealing with color distortion. Because the two technologies have their own advantages and disadvantages, in this paper we combine them and obtain satisfying results.
For the image dehazing problem in air, the scholars in [1, 2] proposed methods that needed several pictures obtained under different weather conditions to recover the fog-free image. Recently, more and more researchers have begun to focus on single image dehazing [3-6]. Tan [3] dehazed images by maximizing the local contrast of the restored images; the results were satisfying, but the saturation suffered from overenhancement. Fattal [4] used a single image to obtain a transmittance image and used it to dehaze the image. He et al. [5] proposed the dark channel prior to acquire the transmittance image: they found that, in haze-free images without sky areas, one of the three channels (R/G/B) of a local patch normally has low intensity, and that this phenomenon becomes invalid once sky or foggy areas exist in the image. Ge et al. [6] proposed a single image dehazing method based on linear transformation. Li et al. [7] put forward a single image dehazing method that utilized detailed prior information.
For the underwater environment, Peng et al. [8], by observing the relationship between the blurring degree and the imaging distance, applied a blurring-based estimate to the image formation model, estimated the distance between the scene and the camera, and then removed the fog. Ancuti et al. [9] utilized image fusion to remove the fog; up to now this method may be the best in terms of visual perception.
Considering the features of the underwater environment, Carlevaris-Bianco et al. [10] used the notable differences in attenuation among the color channels to estimate the depth of the scene. Galdran et al. [11] put forward an automatic red channel underwater image restoration method, which can be regarded as a variation of the dark channel prior method. Wen et al. [12] used the blue and green channels, without the red channel, to redefine a new dark channel that fits underwater images; this slightly modified dark channel prior was successfully applied to underwater images.
In this paper, we propose a new restoration and enhancement method for underwater images: underwater image restoration and enhancement based on the bright channel prior. Our method can be regarded as an improved version of the previously reported dark channel prior. A different bright channel prior based method has been reported in [13]; there, the bright channel prior is essentially the opposite of the dark channel prior in [5]. The flow chart of our method is shown in Figure 2. Experimental results show that the proposed method is valid for images of different scenes and, to a certain degree, can correct the color distortion. In our experiments, all parameters were the same for all images.
The structure of this paper is as follows. Section 2 describes the proposed image restoration and enhancement method, including bright channel image acquisition, maximum color difference image acquisition, bright channel image correction, atmospheric light estimation, initial transmittance image acquisition, image restoration, and the deduced histogram equalization. Section 3 analyzes the validity of the proposed method and presents comparison experiments. Section 4 concludes the paper.

Underwater Image
The degraded underwater image can be described by the imaging model

I(x) = J(x) t(x) + A (1 - t(x)), (1)

where I is the observed intensity, the input degraded color image; t is the transmission, which describes the portion of light that reaches the observer without being scattered or absorbed, t(x) = e^(-beta d(x)), where beta is the attenuation coefficient of the medium and d(x) is the distance between the object and the camera; A is the atmospheric light, physically related to the color of the haze; and J is the scene radiance, that is, the haze-free image.
As we can see, if we know t and A, then we can solve for J.
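As an illustration, the model and its inversion can be sketched in Python. This is a minimal sketch with images in [0, 1]; the function names and the lower clamp t0 (commonly used to avoid division by very small transmissions) are our own additions.

```python
import numpy as np

def synthesize(J, A, t):
    """Imaging model (1): I(x) = J(x) t(x) + A (1 - t(x))."""
    return J * t[..., None] + A * (1.0 - t[..., None])

def invert(I, A, t, t0=0.1):
    """Solve (1) for the scene radiance: J = (I - A) / max(t, t0) + A."""
    tc = np.maximum(t, t0)[..., None]
    return (I - A) / tc + A
```

When t is known exactly and bounded away from zero, the inversion recovers J exactly, which is the basis of all the transmission-estimation steps that follow.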

Estimate the Transmittance Image through the Dark Channel Prior.
By observation, He et al. [5] found that, in images without fog and sky areas, one of the three channels (R/G/B) of a local patch has low intensity, which means the light intensity is a small number; once sky or foggy areas exist in the image, this phenomenon becomes invalid. For an image J, the dark channel is defined as follows [5]:

J_dark(x) = min_{y in Omega(x)} (min_{c in {R,G,B}} J^c(y)), (2)

where c indexes the three channels of the image and Omega(x) denotes a window block centered at pixel x. The dark channel prior states that, for haze-free outdoor images without sky regions,

J_dark(x) -> 0. (3)

There are three factors explaining why the dark channel prior is valid: (a) shadows of cars, buildings, trees, and other objects have low intensity; (b) objects with bright colors always have one low-intensity channel; (c) dark objects and surfaces have low intensity. In a word, shadows and colors are very common in natural scenes, so the dark channel images of these scenes are very dark.
By using the dark channel prior, the initial transmission can be solved by

t(x) = 1 - min_{y in Omega(x)} (min_c I^c(y) / A^c). (4)

Figure 3 shows several degraded underwater images and their dark channel images. From Figure 3 we can see that the dark channels of underwater images are not the same as those of images in air: in air, the sky areas or distant scenes always have a bright dark channel, but this is not valid for underwater images. So we come to the conclusion that the dark channel prior fails to work for degraded underwater images.
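The dark channel computation of [5] can be sketched as follows. This is an illustrative implementation with a simple windowed minimum; the retention factor omega is the one commonly used by He et al. and is our addition, not part of (4) above.

```python
import numpy as np

def dark_channel(img, win=7):
    """J_dark(x): min over RGB, then min over a win x win block Omega(x)."""
    mins = img.min(axis=2)
    r = win // 2
    p = np.pad(mins, r, mode='edge')
    h, w = mins.shape
    out = np.full((h, w), np.inf)
    for dy in range(win):            # windowed minimum via shifted views
        for dx in range(win):
            out = np.minimum(out, p[dy:dy + h, dx:dx + w])
    return out

def initial_transmission(img, A, omega=0.95, win=7):
    """Eq. (4) with a retention factor: t = 1 - omega * dark_channel(I / A)."""
    return 1.0 - omega * dark_channel(img / A, win)
```

On an image with one globally dark channel (as underwater images have in the red channel), this transmission is close to 1 everywhere, which is exactly why the dark channel prior fails underwater.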
The reason why the dark channel prior fails for degraded underwater images is as follows. When the weather is foggy, the atmospheric particle size is larger than the wavelengths of visible light, the scattering effects on visible light of different wavelengths are the same, and so the image tends to be white or gray. In water, the scattering effects on different wavelengths are also normally the same, but the absorption effect is much stronger than in air, it dominates over scattering, and it becomes more serious as the wavelength increases. So images captured in water are usually blue or green, and the red channel intensity is low over the whole picture. This means that the dark channel of a degraded underwater image does not change with the imaging distance, so the transmittance image is not related to the dark channel image. In this situation, the dark channel prior makes no sense anymore: whether the image is degraded or not, there is almost always one color channel with low intensity (usually the red one) [11]. On the contrary, degraded images in air do not suffer from color distortion, so the dark channel prior works for them. In a word, the dark channel prior fails for degraded underwater images due to the low intensity of the red channel.

Image Restoration
2.3.1. Estimate the Transmittance Image through the Bright Channel Prior.
Splitting the different color channels and deforming (1), we obtain the following:

I_R = J_R t + A_R (1 - t),
1 - I_G = (1 - J_G) t + (1 - A_G)(1 - t), (5)
1 - I_B = (1 - J_B) t + (1 - A_B)(1 - t),

where I_R, I_G, and I_B are the red, green, and blue channel images of the degraded underwater image, respectively; J_R, J_G, and J_B are the red, green, and blue channel images of the nondegraded underwater image, respectively; and A_R, A_G, and A_B are the atmospheric light values of the red, green, and blue channels of the degraded underwater image, respectively.

Equation (5) is completely equivalent to (1). We combine the three channel images (I_R, 1 - I_G, and 1 - I_B, the left terms of (5)) into a new degraded image I_new, which we call the half-revision image. Regard (J_R, 1 - J_G, 1 - J_B) as the new nondegraded image J_new and (A_R, 1 - A_G, 1 - A_B) as the new atmospheric light A_new; then we get the new imaging formation model:

I_new(x) = J_new(x) t(x) + A_new (1 - t(x)). (6)

Considering the different color channels, we deform (6) into

I_new^c(x) = J_new^c(x) t(x) + A_new^c (1 - t(x)), (7)

where c denotes the different color channels of the half-revision image. Define the bright channel as follows:

J_new^bright(x) = max_{y in Omega(x)} (max_c J_new^c(y)). (8)

Figures 4(a), 4(b), 4(c), and 4(d) show that the bright channel images of nondegraded underwater images always have high intensity, while in degraded underwater images the bright channel intensity of the near scene is high and that of the distant scene is low, especially when pure water areas exist in the scene. So we suppose that the bright channel intensity of underwater images without pure water areas and distant scenes is approximately 1. We call this the bright channel prior:

J_new^bright(x) -> 1. (9)

Maximizing both sides of (7) in a local block, we have

max_{y in Omega(x)} (max_c I_new^c(y)) = t(x) max_{y in Omega(x)} (max_c J_new^c(y)) + (1 - t(x)) A_new, (10)

where A_new here denotes the maximum channel value of the new atmospheric light. Depending on (8) and (9), we have

max_{y in Omega(x)} (max_c J_new^c(y)) = J_new^bright(x) -> 1. (11)

Bringing (11) into (10), we get

t(x) = (max_{y in Omega(x)} (max_c I_new^c(y)) - A_new) / (1 - A_new). (12)

If we know A_new, we can compute the initial transmission using (12). Analyzing (12), we find the following: A_new is a constant less than 1, 1/(1 - A_new) is a constant larger than 1, and t(x) is linearly proportional to max_{y in Omega(x)} (max_c I_new^c(y)), that is, to the bright channel of the half-revision image.
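The half-revision image, its bright channel, and the transmission of (12) can be sketched as follows. This is a minimal sketch; it assumes A_new is the scalar maximum over the half-revision atmospheric light channels, and the clipping of t into [0, 1] is our addition.

```python
import numpy as np

def half_revision(img):
    """I_new = (I_R, 1 - I_G, 1 - I_B), the half-revision image."""
    out = img.copy()
    out[..., 1] = 1.0 - out[..., 1]
    out[..., 2] = 1.0 - out[..., 2]
    return out

def bright_channel(img_new, win=7):
    """Eq. (8) applied to I_new: max over channels, then local max over Omega."""
    maxs = img_new.max(axis=2)
    r = win // 2
    p = np.pad(maxs, r, mode='edge')
    h, w = maxs.shape
    out = np.zeros((h, w))
    for dy in range(win):            # windowed maximum via shifted views
        for dx in range(win):
            out = np.maximum(out, p[dy:dy + h, dx:dx + w])
    return out

def transmission(bright, A_new):
    """Eq. (12): t = (bright - A_new) / (1 - A_new), A_new a scalar < 1."""
    return np.clip((bright - A_new) / (1.0 - A_new), 0.0, 1.0)
```

Note that applying the half-revision twice returns the original image, and that (12) exactly inverts (10) when J_new^bright equals 1.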

2.3.2. Generate the Maximum Color Difference Image.
We know the red light attenuates fastest while the green and blue light attenuate more slowly, so the color distortion becomes more serious as the distance increases. We define the maximum color difference image as follows:

I_bgsubr(x) = 1 - max(max(I_cmax(x) - I_cmin(x), 0), max(I_cmid(x) - I_cmin(x), 0)), (13)

where I_bgsubr is the maximum color difference image, x is each pixel, c_max is the channel whose intensity is the maximum among the three channels, c_mid is the channel whose intensity is medium among the three channels, and c_min is the channel whose intensity is the minimum among the three channels; the max operation chooses the maximum of its candidates. Figure 5 shows the maximum color difference images obtained with (13); we can see that the farther the imaging distance, the more obvious the difference between channels. The value of the maximum color difference image is inversely proportional to the imaging distance.
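The computation of (13) can be sketched as follows. One assumption in this sketch: we read c_max, c_mid, and c_min as the channels ordered by their mean intensities over the whole image (as Section 2.3.6 states explicitly), so the per-pixel differences can be negative and the inner max(., 0) is meaningful.

```python
import numpy as np

def max_color_diff(img):
    """Eq. (13): 1 - max(max(I_cmax - I_cmin, 0), max(I_cmid - I_cmin, 0)),
    with channels ordered by their mean intensities (our reading)."""
    order = np.argsort([img[..., c].mean() for c in range(3)])
    c_min, c_mid, c_max = order          # ascending channel means
    d1 = np.maximum(img[..., c_max] - img[..., c_min], 0.0)
    d2 = np.maximum(img[..., c_mid] - img[..., c_min], 0.0)
    return 1.0 - np.maximum(d1, d2)
```

For a gray (distortion-free) image the result is 1 everywhere; the stronger the channel imbalance, the smaller the value, matching the inverse relation to imaging distance described above.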

2.3.3. Rectify the Bright Channel Image.
From Section 2.3.1 we know the transmittance image is linearly proportional to the bright channel image, and from Section 2.3.2 we know the value of the maximum color difference image is inversely proportional to the imaging distance. Analyzing (12), we find that the transmission we get will be smaller than the real transmission, because the bright channel prior assumed that the bright channel of the nondegraded underwater image is approximately 1. In order to increase the stability, we rectify the bright channel image using the maximum color difference image. The rectifying equation is

I_light_correct(x) = lambda * I_light(x) + (1 - lambda) * I_bgsubr(x), (14)

where I_light_correct(x) denotes the rectified bright channel image, I_light(x) = max_{y in Omega(x)} (max_c I_new^c(y)) (see Section 2.3.1) denotes the nonrectified bright channel image of the degraded underwater image, I_bgsubr(x) denotes the maximum color difference image, and lambda is the proportional coefficient. In our experiments, we found that the bright channel should be the main part of the rectified bright channel image, so lambda should be larger than 0.5; we find that the lambda in (15) satisfies this requirement, so lambda is computed as follows:

lambda = max(max(S)), (15)

where S is the saturation channel image of the degraded underwater image in HSV color space; the first max operation picks out the maximum value of each column of the saturation image, and the second max operation computes the maximum of these column maxima, that is, the global maximum saturation. Figure 6 shows the restoration results with the rectified and nonrectified bright channel images, respectively. In Figure 6(b) the image is overrestored, especially in the red rectangle; we can see that the rectification of the bright channel image restrains the overrestoration.
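The rectification of (14)-(15) can be sketched as follows. This is a minimal sketch; the HSV saturation is computed directly as 1 - min/max, and lambda is taken as the global maximum of the saturation image, which is our reading of the double-max in (15).

```python
import numpy as np

def saturation(img):
    """HSV saturation channel: S = 1 - min/max (0 where max == 0)."""
    mx = img.max(axis=2)
    mn = img.min(axis=2)
    s = np.zeros_like(mx)
    nz = mx > 0
    s[nz] = 1.0 - mn[nz] / mx[nz]
    return s

def rectify_bright(bright, bgsubr, img):
    """Eq. (14): lambda * I_light + (1 - lambda) * I_bgsubr,
    with lambda = max(max(S)) from Eq. (15)."""
    lam = saturation(img).max()
    return lam * bright + (1.0 - lam) * bgsubr
```

When the image contains at least one fully saturated pixel, lambda is 1 and the bright channel is kept unchanged; less saturated images mix in more of the maximum color difference image.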

2.3.4. Estimate the Atmospheric Light.
In the previous sections, we assumed the atmospheric light was known, but its value also has to be estimated. In Section 2.3.1 we obtained the bright channel image of the degraded underwater image; in this section we use it to estimate the atmospheric light. First, we use the gray image of the original degraded underwater image to produce the variance image V: for each pixel in the gray image we compute the variance within a block centered at that pixel. The variance of each pixel measures the evenness of its block.
Second, we pick out the top one percent darkest pixels in the bright channel; these pixels are usually the most haze-opaque. Among them, the pixel with the lowest value in the variance image V is selected as the atmospheric light. These pixels are in the red rectangle in Figure 7.
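The two steps above can be sketched as follows. One assumption in this sketch: the gray image is taken as the plain channel mean, since the text does not specify a particular gray conversion.

```python
import numpy as np

def local_variance(gray, win=7):
    """Variance image V: per-pixel variance of a win x win block."""
    r = win // 2
    p = np.pad(gray, r, mode='edge')
    h, w = gray.shape
    stack = np.stack([p[dy:dy + h, dx:dx + w]
                      for dy in range(win) for dx in range(win)])
    return stack.var(axis=0)

def atmospheric_light(img, bright, win=7):
    """Among the 1% darkest bright-channel pixels (the most haze-opaque),
    pick the one with the lowest local variance; return its RGB value."""
    gray = img.mean(axis=2)                  # simple gray conversion (assumption)
    var = local_variance(gray, win)
    n = max(1, bright.size // 100)
    cand = np.argsort(bright, axis=None)[:n]  # darkest 1% of the bright channel
    best = cand[np.argmin(var.ravel()[cand])]
    y, x = np.unravel_index(best, bright.shape)
    return img[y, x].copy()
```

Using the variance image as a tie-breaker favors flat, even regions, which keeps the estimate away from bright foreground objects.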

2.3.5. Compute and Refine the Transmittance Image.
After obtaining the rectified bright channel image and the atmospheric light, we can compute the initial transmittance image of each color channel:

t_c(x) = (I_light_correct(x) - A_new^c) / (1 - A_new^c), (16)

where c denotes the different color channels, I_light_correct denotes the rectified bright channel image, and A_new^c denotes the atmospheric light of each channel. After computing the transmittance image of each channel, we take the average of the three transmittance images as the initial transmittance image. Figure 8(a) shows the initial transmission; its main problems are halos and block artifacts, the same as the transmittance image obtained in [5]. So we use the gray image of the original degraded underwater image as the guide image and the initial transmittance image as the input image to perform guided filtering [15]; then we obtain the final transmittance image. Figure 8(b) shows the refined transmittance image.
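The per-channel transmission of (16) and the guided-filter refinement can be sketched as follows. The guided filter here is the standard gray-guide formulation of [15]; the window size, eps, and the lower clamp on t are illustrative choices of ours.

```python
import numpy as np

def _mean(x, win):
    """Windowed mean (edge-padded box filter)."""
    r = win // 2
    p = np.pad(x, r, mode='edge')
    h, w = x.shape
    s = np.stack([p[dy:dy + h, dx:dx + w]
                  for dy in range(win) for dx in range(win)])
    return s.mean(axis=0)

def guided_filter(guide, src, win=9, eps=1e-3):
    """Gray-guide guided filter [15]: q = mean(a) * guide + mean(b)."""
    mI, mp = _mean(guide, win), _mean(src, win)
    cov = _mean(guide * src, win) - mI * mp
    var = _mean(guide * guide, win) - mI * mI
    a = cov / (var + eps)
    b = mp - a * mI
    return _mean(a, win) * guide + _mean(b, win)

def refined_transmission(img, bright_correct, A_new):
    """Eq. (16) per channel, averaged, then refined with the gray image
    as the guide (A_new holds the per-channel half-revision light)."""
    t = np.mean([(bright_correct - a) / (1.0 - a) for a in A_new], axis=0)
    t = np.clip(t, 0.05, 1.0)
    return np.clip(guided_filter(img.mean(axis=2), t), 0.05, 1.0)
```

The guided filter is edge-preserving with respect to the guide: on a constant input it returns the same constant, and elsewhere it transfers the guide's structure onto the transmission map, removing the block artifacts.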

2.3.6. Restore and Enhance Underwater Image.
After obtaining the transmittance image and the atmospheric light, we can obtain the restoration image:

J_c(x) = (I_c(x) - A_c) / t(x) + A_c, (17)

where c_max is the channel whose mean intensity is the maximum among the three channels, c_mid is the channel whose mean intensity is medium among the three channels, and c_min is the channel whose mean intensity is the minimum among the three channels; t is the transmittance image, I is the degraded underwater image, A is the atmospheric light, and J is the restored image. Equation (17) shows that, if the intensity value of a pixel in one color channel differs from the atmospheric light of that channel, the difference will be amplified. This increases the contrast of the image, but it also brings some problems. In the maximum color channel, if the intensity of a pixel is larger than the atmospheric light value, its intensity will become much larger, which makes the color distortion more serious. In the minimum color channel, if the intensity of a pixel is smaller than the atmospheric light value, its intensity will become much smaller, which loses some detail in the low-intensity regions. So, in the maximum color channel, only the pixels whose intensity values are smaller than the atmospheric light are computed with (17), and in the minimum color channel, only the pixels whose intensity values are larger than the atmospheric light are computed with (17). Figure 9(a) shows the four restoration images whose degraded versions were introduced in the previous sections.
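The selective restoration rule can be sketched as follows. This is a minimal sketch; we assume pixels excluded by the rule keep their observed values, since the text does not state an alternative.

```python
import numpy as np

def restore(img, A, t):
    """Eq. (17), J = (I - A)/t + A, with the selective rule of Sec. 2.3.6:
    in the maximum-mean channel only pixels darker than A are restored,
    in the minimum-mean channel only pixels brighter than A; all pixels of
    the middle channel are restored, and the rest keep their values."""
    order = np.argsort([img[..., c].mean() for c in range(3)])
    c_min, c_mid, c_max = order
    full = (img - A) / t[..., None] + A      # Eq. (17) everywhere
    J = img.copy()
    J[..., c_mid] = full[..., c_mid]
    m = img[..., c_max] < A[c_max]
    J[m, c_max] = full[m, c_max]
    m = img[..., c_min] > A[c_min]
    J[m, c_min] = full[m, c_min]
    return np.clip(J, 0.0, 1.0)
```

With t identically 1 the model collapses to J = I, so the restoration leaves an undegraded image unchanged; smaller t amplifies the deviation from A exactly as described above.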
Estimating the transmittance images of different channels precisely is challenging, so many methods use the same transmittance image for all channels, which is not good at rectifying the color distortion. In this paper we use the deduced histogram equalization method to rectify the color distortion: instead of equalizing the image from 0 to 255, we equalize each channel of the restored image from 0 to a specific value. First, we compute the average intensity value of each channel; then we multiply the three means by three coefficients (shown in Figure 10(d); the three coefficients may be the same or not). Next, we compare each of the three products with 255 and choose the smaller one as the specific value for that channel. Figures 9(b) and 10 show the results of this histogram equalization on the restoration images; we can see that its effect on rectifying the color distortion is obvious. Figure 10(d) is the flow chart of the deduced histogram equalization method.
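The deduced histogram equalization can be sketched as follows for images in [0, 1] (so the upper bound is min(coeff * mean, 1.0) instead of 255). The coefficient values are illustrative placeholders for the three coefficients of Figure 10(d), which the text does not fix.

```python
import numpy as np

def deduced_hist_eq(img, coeffs=(2.0, 2.0, 2.0), bins=256):
    """Equalize each channel into [0, upper] instead of the full range,
    with upper = min(coeff * channel_mean, 1.0)."""
    out = np.empty_like(img)
    for c in range(3):
        ch = img[..., c]
        upper = min(coeffs[c] * ch.mean(), 1.0)
        hist, _ = np.histogram(ch, bins=bins, range=(0.0, 1.0))
        cdf = hist.cumsum() / ch.size            # monotone map into [0, 1]
        idx = np.minimum((ch * bins).astype(int), bins - 1)
        out[..., c] = cdf[idx] * upper           # rescale into [0, upper]
    return out
```

Capping each channel at a multiple of its own mean keeps the equalized channels in proportion, which is what rectifies the color cast without the need for per-channel transmissions.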

Experiment Results.
There are many different underwater scenes, so it is difficult to test all of them, and it is very difficult to assess the performance of an underwater image restoration algorithm since there is no ground truth or uniform measurement standard available. To compare different methods, we pick out four underwater images from the website "https://github.com/agaldran/UnderWater". It is known that underwater images always appear blue or green; these four images represent four different underwater scenes and are shown in Figure 11.
The color shift changes gradually from blue to green from Figures 11(a)-11(d): the image of the wreck shifts toward blue the most seriously among the four images, and the image of the diver shifts toward green the most seriously. Figures 12, 15, 18, and 21 are the visual results of the algorithms in [5, 9-11, 16]. To compare these algorithms, we pick out two image features as the evaluation standard: one is the number of Canny edge points, and the other is the number of SIFT feature points. Figures 13, 16, 19, and 22 are the Canny edge results of the algorithms in [5, 9-11, 16], and Figures 14, 17, 20, and 23 are the corresponding SIFT feature results. The numbers of Canny edge points are listed in Table 1, and the numbers of SIFT feature points are listed in Table 2. Figures 24 and 25 are the bar charts of the Canny edge point counts and SIFT feature point counts, respectively.
Tables 1 and 2 show that the algorithms in [9, 11] and the proposed algorithm can improve the quality of degraded underwater images, and that the proposed algorithm is better than the algorithms in [5, 10, 16] in improving visual perception. The algorithm in [5] can increase the SIFT feature point count of the four images, but it cannot increase the Canny edge point count, and its improvement of the visual effect is not obvious. Ref. [16] is the best at increasing the SIFT feature point count, but it cannot increase the Canny edge point count obviously, and its visual effect is not faithful to the original image. Ref. [10] can increase both feature point counts, but the improvement of the visual effect is not obvious. Ref. [11] can improve the visual effect obviously but is not superior to the proposed method in increasing the two feature point counts. Considering all these factors, [9] and the proposed algorithm have the best performance in improving the quality of underwater images: [9] is better than our algorithm at dealing with green-shifted images, while the proposed algorithm is better than [9] at dealing with blue-shifted images. From the experimental results we can conclude that the proposed method can enhance the quality of underwater images effectively.

Conclusion
In this paper, a new method for the restoration and enhancement of underwater images was proposed. Our algorithm was inspired by the dark channel prior in image dehazing.
First, we proposed the bright channel prior for the underwater environment. By estimating and rectifying the bright channel image, estimating the atmospheric light, and estimating and refining the transmittance image, the underwater images were restored. Second, in order to further rectify the color distortion, we utilized the deduced histogram equalization to equalize the restoration images.
We carried out our experiments on four underwater images representing four different underwater scenes and compared our algorithm with five other algorithms using the counts of two kinds of feature points. The experimental results showed that the proposed algorithm is effective in improving the quality of degraded underwater images.
There are still some open questions in our method and much work to do in the future.
(1) The transmission computed by the bright channel prior is smaller than the real transmission, which always leads to the overrestoration problem; we used the maximum color difference image to rectify the bright channel, but a more precise estimation is still needed.

Figure 1: (a) Degraded underwater image; (b) the improved result of our method.
Figure 2: The flow chart of the proposed method.

Figure 3: (a) Degraded underwater images; (b) dark channels of the degraded underwater images.

Figure 4: (a) Clear underwater images; (b) the bright channels of the clear underwater images; (c) degraded underwater images; (d) the bright channels of the degraded underwater images.

Figure 5: The maximum color difference images.

Figure 6: (a) The restored result with the rectification of the bright channel; (b) the restored result without the rectification of the bright channel.

Figure 7: The estimated atmospheric light point is the white point in the red rectangle.

Figure 8: (a) The initial transmittance image; (b) the final transmittance image after guided image filtering.
Figure 9: (a) The restoration images; (b) the results after the deduced histogram equalization.

Figure 10: (a) The original degraded underwater image; (b) the result with the bright channel restoration; (c) the result with the bright channel restoration and histogram equalization; (d) the flow chart of the deduced histogram equalization method.

Figure 12: The visual results of different algorithms.

Figure 13: The Canny edge results of different algorithms.

Figure 18: The visual results of different algorithms.
Figure 19: The Canny edge results of different algorithms.

Figure 20: The SIFT feature results of different algorithms.

Figure 21: The visual results of different algorithms.