A Single Image Dehazing Method Using Average Saturation Prior

Outdoor images captured in bad weather are prone to poor visibility, which is a serious problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation: the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation. Adopting the proposed model, a corresponding dehazing method is also presented. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to haze density similarity. Then, to improve the accuracy of atmospheric light estimation, we define an effective weight assignment function that locates a candidate scene based on the segmentation results and thereby avoids most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid and that the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.


Introduction
Due to the atmospheric suspended particles (aerosols, water droplets, etc.) that absorb and scatter light before it reaches a camera, outdoor images captured in bad weather (haze, fog, etc.) are significantly degraded and yield poor visibility: blurred scene content, reduced contrast, and faint surface colors. The majority of applications in computer vision and computer graphics, such as motion estimation [1,2], satellite imaging [3,4], object recognition [5,6], and intelligent vehicles [7], are based on the assumption that the input images have clear visibility. Thus, eliminating the negative visual effects and recovering the true scene, which is often referred to as "dehazing," is highly desirable and has strong practical implications. However, dehazing is a challenging problem, since the magnitude of degradation is fundamentally spatially variant.
A variety of methods have been proposed to address this task using different strategies. The first category of methods removes haze based on traditional image processing techniques, such as histogram-based [8,9] and Retinex-based methods [10]. However, the recovered results may suffer from haze residue and an unpleasant global visual effect, since the adjustment strategies do not consider the spatial relations of the degradation mechanism. A more sophisticated category of haze removal methods attempts to improve the dehazing performance by employing multiple images taken under different atmospheric conditions [11-13]. Although the dehazing effect can be enhanced, since extra information about the hazy image is obtained through different atmospheric properties, the limitation of these methods is evident: the acquisition step is difficult to perform. Another category of methods estimates the haze effects using an identically positioned camera and polarization filter [14-16]. However, these methods are only valid for mist images, where polarized light is the major degradation factor [17]; moreover, they are normally time-consuming.
Recently, benefiting from the atmospheric scattering model, many state-of-the-art model-based single image dehazing methods have been proposed [11,18-28]. Although significant progress has been made, two main limitations remain. First, the atmospheric scattering model that researchers have adopted is only valid under homogeneous atmosphere conditions, as we note in Section 2, and therefore these model-based methods commonly lack robustness. Second, many model-based single image dehazing methods rely on error-prone empirical values for the atmospheric scattering coefficient, due to the estimation difficulty and model complexity, and are therefore limited in terms of robustness and effectiveness.
For instance, He et al. [21] proposed the dark channel prior based on a statistical observation, which enables a direct approximate estimation of the transmission map. Despite its effectiveness in most cases, this method cannot process inhomogeneous hazy images due to the atmospheric scattering model limitation and may fail in sky regions, where the prior is broken. Building on [21], Meng et al. [23] added a boundary constraint and estimated the transmission map via weighted contextual regularization. However, it is subject to color distortion for white objects, since it cannot fundamentally resolve the ambiguity between surface color and haze. Fattal's method [19] assumes that the surface shading factor and the transmission function are statistically uncorrelated within a local patch and estimates the transmission within segmented scenes of constant scene albedo. Although it achieves an impressive effect when recovering a homogeneous mist image, it fails for dense haze scenes and inhomogeneous hazy images, where the assumption is invalid. Tan [18] proposed a novel dehazing method by assuming that clear-day images have higher contrast than hazy images; however, results generated using a Markov Random Field (MRF) tend to be oversaturated, since this method is broadly similar to contrast stretching. Based on a Bayesian probabilistic model and the atmospheric scattering model, Nishino et al. [22] jointly estimated the scene albedo and depth by fully leveraging their latent statistical structures. Despite nearly perfect dehazing results for dense hazy images, the results tend to be overenhanced for mist images. Tarel et al.'s method [20] estimates the atmospheric veil using combinations of filters; its advantage is linear complexity, so it can be implemented in real time. Nevertheless, the dehazing effect is prone to fail where depth changes drastically, since the median filter it involves provides poor edge-preserving performance. Zhu et al. [24] proposed the color attenuation prior, through which the depth information can be well estimated via a linear model; nevertheless, it fails under inhomogeneous atmosphere conditions, since the atmospheric scattering model it adopts is invalid there. Wang et al. [26] proposed a fusion-based method to remove haze, but haze remains when processing inhomogeneous hazy images because the atmospheric scattering model may be invalid in these cases. In addition, this method is based on wavelet fusion and tends to fail for dense hazy images because the ambiguity between image color and haze cannot be well separated. Although Jiang et al. [25] introduced an efficient hierarchical method for gray-scale image dehazing, it cannot handle inhomogeneous scenes well and suffers from color distortion due to the limitations of the model, similar to [26].
In this paper, we propose an improved atmospheric scattering model and a corresponding single image dehazing method aimed at overcoming the two main aforementioned limitations. Compared with previous methods, the major contributions of our method are as follows. (1) We propose an improved atmospheric scattering model to address the limitation of the current model, which is only valid under homogeneous atmosphere conditions. By accounting for an inhomogeneous atmosphere, the proposed model has better validity and robustness. (2) Based on the proposed model, we create a haze density distribution map and train the relevant parameters using a supervised learning method, which enables us to segment the hazy image into scenes based on haze density similarity. The inhomogeneous atmosphere problem can therefore be effectively converted into a group of homogeneous atmosphere problems. (3) Using the segmented scenes, combined with the proposed scene weight assignment function, we can effectively improve the estimation accuracy of the atmospheric light by excluding most potential errors. (4) Few dehazing methods estimate the atmospheric scattering coefficient; most simply use error-prone empirical values due to the estimation difficulty and model complexity, even though this problem has been noted by many researchers. We estimate the atmospheric scattering coefficient via the proposed ASP.
The remainder of this paper is structured as follows. In the next section, we propose the improved atmospheric scattering model based on a limitation analysis of the current model. In Section 3, we present a novel single image dehazing method, which includes three key steps: scene segmentation via a haze density distribution map, improved atmospheric light estimation, and scene albedo recovery via the ASP. In Section 4, we present and analyze the experimental results. In Section 5, we summarize our method.

Improved Atmospheric Scattering Model
In computer vision and computer graphics, most model-based dehazing methods [11,18-28] rely on the following atmospheric scattering model, which is formulated under the homogeneous atmosphere assumption [11,12,29,30]:

I(x, y) = J(x, y) · t(x, y) + A · (1 − t(x, y)), (1)

where (x, y) is the pixel index, I(x, y) denotes the hazy image, J(x, y) = A · ρ(x, y) represents the corresponding clear-day image, A is the atmospheric light, which is a constant value throughout the whole image, ρ(x, y) is the scene albedo, and t(x, y) is the transmission, which is defined as

t(x, y) = e^(−β · d(x, y)), (2)

where d(x, y) is the scene depth and β is the atmospheric scattering coefficient, which describes the ability of a unit volume of atmosphere to scatter light in all directions [11,24,29]. Note that the atmospheric scattering coefficient β is a fixed scalar in (1), which indicates that the attenuation magnitude is constant throughout the entire hazy image. However, according to [11,19,24,29], β is determined by the density of atmospheric suspended particles for a particular light wavelength. Consequently, the constant setting of β in (1) is only valid under homogeneous atmosphere conditions, as discussed in [21,24,26,29]. That is, when we simply regard the atmospheric scattering coefficient β as a constant under inhomogeneous atmosphere conditions, the transmission in some scenes is inevitably underestimated or overestimated.
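As a quick illustration, the homogeneous model in (1)-(2) can be sketched in NumPy. The function names and the toy albedo/depth values below are ours, not the paper's; the sketch simply synthesizes a hazy image from a clear scene with a single global β.

```python
import numpy as np

def transmission(depth, beta):
    """Transmission t(x, y) = exp(-beta * d(x, y)) for a homogeneous atmosphere (2)."""
    return np.exp(-beta * depth)

def hazy_from_clear(albedo, depth, A, beta):
    """Standard atmospheric scattering model (1):
    I = A * rho * t + A * (1 - t), with one global beta."""
    t = transmission(depth, beta)
    return A * albedo * t + A * (1.0 - t)

# toy 2x2 scene: top row is near, bottom row is far
albedo = np.array([[0.2, 0.8],
                   [0.5, 0.5]])
depth = np.array([[1.0, 1.0],
                  [5.0, 5.0]])
I = hazy_from_clear(albedo, depth, A=1.0, beta=0.5)
```

Deeper pixels receive less transmission, so their intensity is pulled toward the atmospheric light A, matching the attenuation behavior the model describes.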
In addition, the atmosphere is inhomogeneous in most practical scenarios, since haze has an inherent dynamic diffusion property according to Fick's laws of diffusion [31].As shown in Figure 1, the haze density spatially varies between different boxes within a hazy image.
However, this problem can be alleviated. Although the haze density varies spatially across the entire image, the haze density within a particular local region is approximately the same, since haze diffusion is physically smooth [11]. For instance, the haze density is generally similar within the same boxes in Figure 1. Thus, an inhomogeneous hazy image can be converted into a group of homogeneous scenes based on haze density similarity, and each scene can be regarded as an independent homogeneous subimage. Based on this notion and inspired by [32,33], we redefine the atmospheric scattering coefficient β in (1) as a scene-wise variable and propose an improved atmospheric scattering model:

I(x, y) = A · ρ(x, y) · e^(−β(s) · d(x, y)) + A · (1 − e^(−β(s) · d(x, y))), (x, y) ∈ Ω(s), (3)

where s is the scene index, Ω(s) is the pixel index set of the sth scene, and β(s) is the scene atmospheric scattering coefficient, which is constant within a scene but varies between scenes. With this improvement, we address the inherent limitation of the atmospheric scattering model, because all types of hazy images can be precisely modeled.
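The scene-wise model (3) differs from (1) only in that β is looked up per scene. A minimal sketch, under our own toy scene labels and β(s) values:

```python
import numpy as np

def hazy_improved(albedo, depth, A, beta_map):
    """Improved model (3): beta(s) is constant within a scene but varies
    between scenes, so a per-pixel beta_map is built from scene labels."""
    t = np.exp(-beta_map * depth)
    return A * albedo * t + A * (1.0 - t)

# scene labels -> per-pixel beta via a lookup table (hypothetical values)
labels = np.array([[0, 0],
                   [1, 1]])
scene_beta = np.array([0.3, 0.9])   # beta(s) for scenes 0 and 1
beta_map = scene_beta[labels]

I = hazy_improved(np.full((2, 2), 0.5), np.ones((2, 2)), A=1.0, beta_map=beta_map)
```

At equal depth and albedo, the scene with the larger β(s) is hazier (closer to A), which is exactly the scene-to-scene variation the improved model is meant to capture.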

A Novel Single Image Dehazing Method
In this section, we present a novel single image dehazing method based on the proposed improved atmospheric scattering model. In this method, we first create a haze density distribution map to describe the spatial relations of the haze density in a hazy image and then segment the hazy image into scenes based on haze density similarity. Then, as a by-product of segmentation, we improve the estimation accuracy of the atmospheric light using a proposed scene weight assignment function. Next, based on the proposed ASP and the depth information provided by [24], we estimate the scene atmospheric scattering coefficient and recover the true scene albedo.

Definition of a Haze Density Distribution Map.
Based on the improved atmospheric scattering model, we need to segment the hazy image into scenes based on haze density similarity. Thus, the spatial distribution of the haze density in a hazy image should be obtained. However, to our knowledge, there does not yet exist a pixel-based nonreference haze density distribution model that is well consistent with practical judgements of haze density. Choi et al. [34] proposed the fog aware density evaluator (FADE), a patch-based evaluator that assesses the fog density of an entire hazy image or a local patch. However, as a patch-based assessment it is relatively computationally expensive and therefore cannot be implemented as an intermediate step. Thus, a high-efficiency pixel-based strategy for describing the spatial relations of the haze density in a hazy image is required.
According to [34-36], the haze density representation is primarily correlated with three measurable statistical features of a hazy image I(x, y): the brightness component I_v(x, y), the texture-detail component ∇I(x, y), and the saturation component I_s(x, y). Thus, inspired by [24], we create a linear model named the haze density distribution map, which can be expressed as

D(x, y) = λ1 · I_v(x, y) + λ2 · ∇I(x, y) + λ3 · I_s(x, y) + λ4, (4)

where D(x, y) is the haze density distribution map and λ1, λ2, and λ3 are the corresponding unknown parameters for each component. Inspired by [37,38], we should further consider the representation error, such as the quantization error caused by the three components and noise. Thus, we set the total representation error as λ4. According to (4), all the components are combined to yield a description of the haze density distribution. Note that the three components are relatively independent, so a slight deviation in one component will not affect the others.
To obtain all relevant parameters of the linear model, we employ a supervised learning method with 500 training samples; each sample consists of a hazy image and the corresponding ground-truth haze density map (to prepare the training data, we collected 500 hazy images of various types from the Internet and used them to produce the corresponding ground-truth haze density). Considering the superior accuracy of FADE, we adopt it as the reference for the ground-truth haze density representation. The training strategy is designed as follows: we utilize the gradient descent algorithm to estimate the linear parameters λ1, λ2, λ3, and λ4 by taking the partial derivatives of the squared-error objective E(·) with respect to λ1, λ2, λ3, and λ4, respectively, where |Λ| is the total number of pixels within the training hazy images and x_t is the pixel index of the training hazy images. Each parameter is then iteratively updated in the negative gradient direction, λ_i ← λ_i − α · ∂E/∂λ_i. After training, we obtain the following optimal model parameters (to four decimal places): λ1 = 0.9313, λ2 = 0.1111, λ3 = −1.4634, and λ4 = −0.0213. The most important advantage of this model is its linear complexity. Once the model parameters have been determined, the model can be used to estimate the haze density distribution of any hazy image.
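The training procedure above is ordinary least-squares fitting by gradient descent. A minimal sketch, using synthetic feature data in place of the paper's 500 real training images (the data and the helper name are ours):

```python
import numpy as np

def train_linear_haze_model(features, target, lr=0.1, iters=20000):
    """Fit D = l1*brightness + l2*texture + l3*saturation + l4 by gradient
    descent on the mean squared error against a reference density map
    (the paper uses FADE as that reference)."""
    X = np.column_stack([features, np.ones(len(features))])  # last column is the bias l4
    w = np.zeros(X.shape[1])
    n = len(target)
    for _ in range(iters):
        grad = 2.0 / n * X.T @ (X @ w - target)  # d(MSE)/dw
        w -= lr * grad
    return w

# synthetic noiseless data generated from known parameters (illustration only)
rng = np.random.default_rng(0)
F = rng.uniform(0, 1, size=(500, 3))
true_w = np.array([0.9313, 0.1111, -1.4634, -0.0213])
y = F @ true_w[:3] + true_w[3]
w = train_linear_haze_model(F, y)
```

On noiseless synthetic data the fitted parameters recover the generating ones, which is the sanity check one would run before training on real FADE targets.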
In Figure 2, we select several inhomogeneous and homogeneous hazy images with various haze densities (see Figure 2(a)) and show the corresponding haze density distribution maps (see Figure 2(b)). The dark blue areas indicate the thinnest haze, the dark red areas represent the densest haze, and the color changes from dark blue to dark red with increasing haze density. Note that the generated haze density distribution maps are visually consistent with the spatial character of the haze density.
However, we note that the haze density distribution map contains excessive texture details. This is caused by the depth structure of the scene objects, which affects the components (brightness, texture details, and saturation) that we adopted to model the density distribution map. The haze density distribution should be flat and independent of any image structure [33]. Although the excessive texture details imply microscopic haze density differences, processing them would incur extra computational cost. Accuracy is slightly sacrificed when we eliminate part of the excessive texture details; however, we consider this a reasonable trade-off. Thus, we utilize the guided total variation (GTV) model [33] to refine the haze density distribution map via the objective in (8), where D_ref is the refined haze density distribution map, W is the weight function, D is the input haze density distribution map, and G is the guidance image, which is defined as the haze density distribution map itself. According to [33], (8) can be expressed and processed in an iterative form, and we set γ1 = 1, γ2 = 12, and γ3 = 1 as the regularization parameters for the approximation term, smoothing term, and edge-preserving term, respectively. Comparing Figures 2(b) and 2(c), we note that the excessive texture details have been significantly eliminated.

Scene Segmentation.
Using the refined haze density distribution map D_ref, our goal in this step is to segment the map into a group of scenes based on haze density similarity. After segmentation, pixels within a particular scene should share an approximately identical haze density, which further implies that they share the same scene atmospheric scattering coefficient. This problem is fundamentally similar to data clustering; thus, we convert the segmentation process into a clustering problem and adopt the k-means clustering algorithm [39,40]. The clustering procedure can be expressed as

arg min_Ω Σ_{s=1}^{k} Σ_{(x,y)∈Ω(s)} ‖D_ref(x, y) − μ_s‖², (10)

where k is the cluster number, Ω(s) is the sth cluster, and μ_s is the cluster center. After extensive experiments and qualitative and quantitative comparisons (as demonstrated in Section 4), we obtain a relatively balanced cluster number k = 3. The k-means clustering algorithm iteratively forms mutually exclusive clusters of a particular spatial extent by minimizing the mean square distance from each pattern to its cluster center. The difference after the jth iteration can be expressed as |E(j) − E(j−1)|, where j is the iteration index and E(j) denotes the objective value after the jth iteration. The iteration stops when a convergence criterion is satisfied, and we adopt the typical convergence criterion [40]: no (or minimal) difference after the jth iteration; that is, |E(j) − E(j−1)| < ε.
We set ε = 10^−4 to terminate this procedure. Note that because the clustering step optimizes the within-cluster sum of squares (WCSS) objective and only a finite number of such partitions exist, the algorithm must converge to a (local) optimum. However, the segmentation results may exhibit instability or oversegmentation, because the k-means clustering algorithm ignores spatial location and there is no guarantee that the global optimum is obtained. Thus, we further refine the result via a fast MRF method [41] and denote the refined result as D_mrf. Figure 3 shows the corresponding results D_mrf for Figure 2.
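Since the clustering operates on scalar haze-density values, the step reduces to one-dimensional k-means. A self-contained sketch with the paper's stopping tolerance ε = 1e-4 (the synthetic density values are ours):

```python
import numpy as np

def kmeans_1d(values, k=3, eps=1e-4, max_iter=100):
    """Cluster scalar haze-density values into k scenes by minimizing the
    within-cluster sum of squares; stop when centers move less than eps."""
    centers = np.linspace(values.min(), values.max(), k)
    labels = np.zeros(len(values), dtype=int)
    for _ in range(max_iter):
        # assign each pixel to the nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # recompute centers (keep the old one if a cluster empties)
        new_centers = np.array([values[labels == c].mean() if np.any(labels == c)
                                else centers[c] for c in range(k)])
        moved = np.abs(new_centers - centers).max()
        centers = new_centers
        if moved < eps:
            break
    return labels, centers

# a refined haze density map flattened to per-pixel scalars (synthetic example)
d_ref = np.concatenate([np.full(50, 0.1), np.full(50, 0.5), np.full(50, 0.9)])
labels, centers = kmeans_1d(d_ref, k=3)
```

In practice the labels would be reshaped back to the image grid before the MRF refinement step.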

Estimation of Atmospheric Light. The atmospheric light
is an RGB vector that describes the intensity of the ambient light in the hazy image. As discussed in [42], current single image dehazing methods estimate the atmospheric light either by user-interactive algorithms [19,24] or based on the most haze-opaque (brightest) pixels [18,21,43-45]. Nevertheless, the located brightest pixels may belong to an interference object, such as an extra light source, a white/gray object, or high-light noise. As demonstrated in Figure 4, both He et al.'s method [21] (in the green box) and Namer et al.'s method [15] (in the blue box) locate an interference object as the atmospheric light in a challenging hazy image, whereas our result is depicted in the red outlined areas.
As a by-product of scene segmentation, we can cope with this challenging task by designing a scene weight assignment function. Using this function, we can locate a candidate scene that excludes most interference objects. The function is designed based on three basic observations: (1) The probability that a scene contains the most haze-opaque (brightest) pixels is proportional to the haze density [18,21,44,45]. This can be inferred from the atmospheric scattering model: as the haze density at a pixel approaches infinity, the pixel value reduces to the atmospheric light.
(2) The most haze-opaque (brightest) pixel belongs to the sky region with higher probability, and interference objects, such as rivers, extra light sources, and roads, are primarily located spatially lower than the sky scene. Thus, we can avoid these types of interference objects by considering the scene's vertical position.
(3) Most existing dehazing methods are not suitable for white/gray interference objects (cars, animals, etc.) because they are not sensitive to the white/gray color [24].However, the scene coverage ratio for these objects is significantly smaller than the scene coverage ratio for a sky scene.
Accordingly, we assign a weight to each segmented scene in D_mrf by considering the scene haze density, scene average height, and scene coverage ratio. The scene weight assignment function is defined in (12), where res is the resolution of the hazy image and D_s and |Ω_s| are the haze density and pixel count, respectively, of each segmented scene. Based on (12), each segmented scene is assigned a weight, and we take the scene with the top weight as the candidate scene. In addition, to further eliminate the effect of high-light noise, we locate the top 0.1% brightest pixels within the candidate scene as the potential atmospheric light and take the average value of these pixels as the atmospheric light.
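The selection-then-averaging logic can be sketched as follows. Note that the exact weighting in (12) was not reproducible here, so the simple product of the three cues (density, height, coverage) is our own assumption; only the "top 0.1% brightest pixels, then average" step follows the text directly.

```python
import numpy as np

def atmospheric_light(image, labels, density_per_scene):
    """Pick a candidate scene via a weight that grows with scene haze density,
    scene height (closer to the top of the frame), and coverage ratio
    (product form is an assumption, not the paper's (12)); then average the
    top 0.1% brightest pixels of that scene."""
    h, w, _ = image.shape
    rows = np.arange(h)[:, None] * np.ones((1, w))
    best, A = -1.0, None
    for s, dens in density_per_scene.items():
        mask = labels == s
        coverage = mask.mean()
        avg_height = 1.0 - rows[mask].mean() / h      # 1.0 = top of image
        weight = dens * avg_height * coverage
        if weight > best:
            best = weight
            pix = image[mask].reshape(-1, 3)
            bright = pix.sum(axis=1)
            n = max(1, int(0.001 * len(bright)))      # top 0.1% brightest
            A = pix[np.argsort(bright)[-n:]].mean(axis=0)
    return A

# toy image: bright sky-like top half (scene 0), darker ground (scene 1)
img = np.zeros((10, 10, 3))
img[:5] = 0.9
img[5:] = 0.2
labels = np.zeros((10, 10), dtype=int)
labels[5:] = 1
A = atmospheric_light(img, labels, {0: 0.8, 1: 0.3})
```

Here the high, dense, sky-like scene wins the weighting, so A is taken from the sky pixels rather than from bright ground-level interference objects.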
As shown in Table 1, we list the assigned weight (to four decimal places) of each scene (scenes 1, 2, and 3) for Figure 3 and depict the located potential atmospheric light in the red outlined areas in Figure 5. As expected, we successfully locate the atmospheric light while avoiding most of the interference objects.
We also tested our method on the same challenging hazy image (Figure 4) and depict our results in the red outlined areas there. This comparison demonstrates the advantage of our method.

Average Saturation Prior.
Hazy images often lack visual vividness because the scene contents are extremely blurred, with reduced contrast and faint surface colors. Inspired by [21,24], we conducted a number of experiments on various types of hazy images and high-definition clear-day outdoor images to identify statistical regularities.
Interestingly, as demonstrated in Figure 6, we notice that the RGB histograms of a hazy image are almost identically distributed (see Figure 6(c)). Conversely, the RGB histograms of a high-definition clear-day outdoor image of the same scene are significantly distinguishable (see Figure 6(d)). As shown in Figure 6(c), we also notice that the hazy image contains almost no pure black (RGB 0, 0, 0) or pure white (RGB 1, 1, 1) pixels, whereas the high-definition clear-day outdoor image includes numerous such pixels (see Figure 6(d)).
These observations on the RGB histograms indicate that most pixels in a hazy image are extremely similar, which causes the poor visibility, and vice versa. We infer that this observation translates into statistical regularities in the average saturation distribution; thus we performed extensive tests on various types of hazy images and high-definition clear-day outdoor images.
Similar to [21,24], we collected a large number of hazy images and high-definition clear-day outdoor images from the Internet using several search engines (with the keywords "hazy image" and "high-definition clear-day outdoor images"). Then, we randomly selected 2,000 hazy images and obtained their average saturation probability distribution (see Figure 7(a)). Next, we selected 4,000 high-definition clear-day outdoor images with landscape and cityscape scenes (where haze usually occurs) and manually cut out the sky regions, considering the similarity between the sky region and haze. The average saturation probability distribution of hazy images, as shown in Figure 7(a), is distinctly concentrated at approximately 0.005 (more than 40% at 0.005, with a cumulative probability of more than 70% from 0 to 0.01). This finding indicates that few pixels are nearly pure white or black, which confirms our second observation on Figure 6(c). Thus, this result strongly suggests that the average saturation of a hazy image tends to be a very small value (0 to 0.01 with overwhelming probability).
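The statistic underlying the ASP is simply the mean HSV saturation of an image. A minimal sketch (the synthetic "hazy" and "vivid" patches below are ours, built to mimic the histogram behavior described for Figure 6):

```python
import numpy as np

def average_saturation(rgb):
    """Mean HSV saturation of an RGB image with values in [0, 1]:
    S = (max - min) / max per pixel, with S = 0 where max = 0."""
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    s = np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1.0), 0.0)
    return s.mean()

# a hazy-looking patch (three channels nearly identical, as in Figure 6(c))
hazy = np.full((4, 4, 3), 0.7) + np.array([0.0, 0.002, 0.004])

# a vivid clear-day-looking patch (channels well separated)
vivid = np.zeros((4, 4, 3))
vivid[..., 0] = 0.9
vivid[..., 1] = 0.3
vivid[..., 2] = 0.2
```

Near-identical RGB channels give an average saturation close to zero, matching the 0-0.01 concentration observed for hazy images, while separated channels give a large value.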
The average saturation probability distribution of high-definition clear-day outdoor images is shown in Figure 7(b). We compute the expectation of the average saturation; the results indicate that the average saturation of a high-definition clear-day outdoor image tends to be 0.106 with high probability. As demonstrated in Section 4, to further evaluate this conclusion, we select another six possible average saturation values and compare the dehazing effect both qualitatively and quantitatively on another 200 hazy images of various types.

Scene Atmospheric Scattering Coefficient Estimation and Scene
Albedo Recovery. To our knowledge, most existing dehazing methods take an error-prone empirical value as the atmospheric scattering coefficient. Despite the valuable progress made toward overcoming this problem, there is likely no single optimal value for all types of hazy images (even homogeneous ones) due to the variation in haze density. For instance, Zhu et al. tested numerous atmospheric scattering coefficient values in [24] to pursue an optimal solution; however, the atmospheric scattering coefficient is simply assumed to be 1 in that method. Shi et al. [46] also tried to address the problem by considering the impact of Earth's gravity on the atmospheric suspended particles; however, the dehazing results tend to be unstable [33]. By combining the proposed ASP with the improved atmospheric scattering model, we can effectively estimate the scene atmospheric scattering coefficient within each scene. We rearrange (3) as

ρ(x, y) = (I(x, y) − A) / (A · e^(−β(s) · d(x, y))) + 1, (x, y) ∈ Ω(s). (13)

Note that I(x, y) is given and A is estimated in Section 3.2; the scene albedo ρ(x, y) is now a function of the scene atmospheric scattering coefficient β(s) and the scene depth d(x, y). Due to significant progress in estimating the scene depth [22,24], we assume that the scene depth d(x, y) is given by [24]. Therefore, the scene albedo ρ(x, y) is a function only of the scene atmospheric scattering coefficient β(s). For convenience of expression, we rewrite (13) as ρ(x, y) = f(β(s)) (14). Next, based on the proposed ASP, we can obtain the scene atmospheric scattering coefficient β(s) as

β(s) = arg min_{β(s)} |S(f(β(s))) − 0.106|, (15)

where S(⋅) is the average saturation computing function. Note that (15) is a convex function, and we can obtain the optimal solution for the scene atmospheric scattering coefficient using the golden section method [47], with the termination criterion set to 10^−4 according to [48,49]. Once we estimate the scene atmospheric scattering coefficient β(s) for all scenes, the corresponding scattering map is obtained. Considering that the scene atmospheric scattering coefficient estimation is inherently a scene-wise process, we utilize the guided total variation model [33] to improve the edge-consistency property. Figure 8 shows four example hazy images (Figures 8(a) and 8(b) are homogeneous hazy images, and Figures 8(c) and 8(d) are inhomogeneous hazy images) and the corresponding scattering maps. Note that the scattering maps are well consistent with the corresponding hazy images.
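The golden section method used to solve (15) can be sketched generically. The objective below (`avg_sat_of_recovery`) is a hypothetical stand-in for |S(f(β(s))) − 0.106|, chosen only so that the search has a known minimizer; the paper's real objective would recover the scene albedo and measure its average saturation.

```python
import math

def golden_section_min(f, a, b, tol=1e-4):
    """Golden-section search for the minimizer of a unimodal f on [a, b],
    with the termination tolerance set to 1e-4 as in the paper."""
    phi = (math.sqrt(5) - 1) / 2
    c, d = b - phi * (b - a), a + phi * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - phi * (b - a)
        else:
            a, c = c, d
            d = a + phi * (b - a)
    return (a + b) / 2

# hypothetical stand-in objective with its minimum at beta = 0.8
def avg_sat_of_recovery(beta):
    return (beta - 0.8) ** 2

beta_s = golden_section_min(avg_sat_of_recovery, 0.01, 3.0)
```

Each iteration shrinks the bracket by the golden ratio, so the number of objective evaluations grows only logarithmically with the required precision.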
According to (3), we can directly obtain the scene albedo ρ(x, y), since all the unknown coefficients have been determined, including the atmospheric light A, the scene atmospheric scattering coefficient β(s), and the scene depth d(x, y). Then, the clear-day image can be recovered as J(x, y) = A · ρ(x, y).
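The final recovery step inverts (3) per pixel. A round-trip sketch (our own toy values): synthesize a hazy scene with the forward model, then recover the albedo and check it matches.

```python
import numpy as np

def recover_albedo(I, A, beta_map, depth):
    """Invert (3): rho = (I - A) / (A * exp(-beta(s) * d)) + 1."""
    t = np.exp(-beta_map * depth)
    return (I - A) / (A * t) + 1.0

# round trip: forward model (3), then inversion (13)
rho = np.array([[0.2, 0.8],
                [0.4, 0.6]])
depth = np.full((2, 2), 2.0)
beta_map = np.full((2, 2), 0.5)
A = 0.95
t = np.exp(-beta_map * depth)
I = A * rho * t + A * (1.0 - t)
rho_hat = recover_albedo(I, A, beta_map, depth)
```

In practice a final clip of ρ to [0, 1] guards against noise and estimation error before computing J = A · ρ.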

Experiments
Given a hazy image with N pixels that is segmented into k scenes after j iterations, the computational complexity of the proposed method is linear in N once the linear parameters λ1, λ2, λ3, and λ4 in (4) have been obtained via training. In our experiments, we implemented the method in MATLAB; approximately 1.9 seconds is required to process a 600 × 400 pixel image on a personal computer with a 2.6 GHz Intel(R) Core i5 processor and 8.0 GB of RAM.
In this section, we first demonstrate the experimental procedure for determining the cluster number in (10). Then we demonstrate the validity of the proposed ASP through qualitative and quantitative experimental comparisons. Next, to verify the effectiveness and robustness of the corresponding dehazing method, we test it on various real-world hazy images and conduct a qualitative and quantitative comparison with several state-of-the-art dehazing methods, namely, those of Tarel et al. [20], Zhu et al. [24], He et al. [21], Ju et al. [33], and Meng et al. [23]. The parameters of our method are all given in Section 3, and the parameters of the five state-of-the-art dehazing methods are set to be optimal according to [20,21,23,24,33] for fair comparison.
For quantitative evaluation and comparison, we adopt several widely employed indicators: the percentage of new visible edges e, the contrast restoration quality r̄, FADE, and the hue fidelity. According to [50], indicator e measures the ratio of edges that are newly visible after restoration, and indicator r̄ verifies the average visibility enhancement obtained by the restoration. FADE, proposed by [34], assesses the haze removal ability. The hue fidelity indicator, presented by [51], is a statistical metric of hue fidelity after restoration. Higher values of e and r̄ imply better visual improvement after restoration, lower FADE values indicate less haze residual (that is, a better dehazing ability), and a smaller hue fidelity value indicates that the dehazing method maintains better hue fidelity.

Determination of the Cluster Number. As described in Section 3, we segment the hazy image into a group of scenes based on haze density similarity. To determine a relatively balanced cluster number k, we conducted a large number of experiments on different hazy images using different values of k. Then, we compared the dehazing effect in terms of qualitative comparison, computational time, and quantitative comparison using three indicators (e, r̄, and FADE).
Figure 9 shows five example qualitative comparisons using different cluster numbers k, and Figures 10-13 show the corresponding quantitative comparison results for e, r̄, FADE, and computational time, respectively. Through the qualitative comparison, we find that the dehazing effect improves as k increases from 1 to 3 and tends to stabilize afterwards. When k equals 1 (which amounts to removing haze using the current atmospheric scattering model (1)), the haze residual is obvious (see Figure 9). When k reaches 3 (see Figure 9(d)), haze is completely removed, the details of the scenes are adequately restored, the recovered color is natural and visually pleasing, and no overenhancement appears. However, the dehazing effect remains essentially the same even as k continues to increase (compare Figure 9(d) with Figures 9(e)-9(j)).
This observation is consistent with the quantitative comparison results shown in Figures 10-12. When k increases from 1 to 3, the values of e and r̄ rise (see Figures 10 and 11), which means that more edges are recovered and better visibility enhancement is obtained, and the FADE value decreases markedly (see Figure 12), implying that more haze is removed. Despite the increased computational time (see Figure 13), we consider this a reasonable tradeoff for a better dehazing effect. Afterwards, as the clustering number increases from 3 to 9, the values of e and r̄ tend to be stable, while the FADE value fluctuates and even rises slightly, as shown in Figures 10-12. Meanwhile, the computational time rises along with the increasing clustering number.
The observations for the five example experimental demonstrations are consistent with the results of more than 200 experiments. Consequently, we take a clustering number k of three as a balanced choice for our method.
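The scene segmentation step above can be sketched with a plain 1-D k-means over per-pixel haze density values. This is a simplified stand-in (quantile initialization, numpy only) and assumes the haze density distribution map is already available as a scalar per pixel; it is not the paper's clustering procedure:

```python
import numpy as np

def kmeans_1d(values, k=3, iters=50):
    """Minimal 1-D k-means with deterministic quantile initialization."""
    qs = np.linspace(0, 1, k + 2)[1:-1]       # k interior quantiles
    centers = np.quantile(values, qs)
    for _ in range(iters):
        # assign each value to its nearest center
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([values[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers

def segment_scenes(density_map, k=3):
    """Segment a haze density map (H x W floats) into k scene labels."""
    labels, _ = kmeans_1d(density_map.ravel().astype(float), k)
    return labels.reshape(density_map.shape)
```

With k = 3, pixels of similar haze density end up in the same scene label, which is the grouping the later per-scene estimation relies on.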

Experimental Comparison for ASP. In Section 3.3, we propose the ASP based on the statistics of extensive high-definition clear-day outdoor images, and the results indicate that a high-definition clear-day outdoor image has an average saturation of 0.106 with high probability. To further verify this conclusion, we test and compare the dehazing effect using different average saturation values (0.01, 0.05, 0.106, 0.15, 0.2, 0.25, and 0.3) on another 200 hazy images. Four example experimental demonstrations are depicted in Figure 14. Through qualitative comparison, it is obvious that the dehazing magnitude is approximately proportional to the average saturation value, especially when it rises from 0.01 to 0.15 (see Figures 14(b)-14(e)). However, when the average saturation value exceeds 0.15, the recovered image looks dim and the color tends to be unnatural (see Figures 14(f)-14(h): the close-range scenes in Tests 1 and 2, the upper left corner and middle part in Test 3, and the long-range scene in Test 4). When the average saturation equals 0.106, as shown in Figure 14(d), our method unveils most of the details, recovers vivid color information, and avoids overenhancement, with minimal halo artifacts.
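The average saturation on which the ASP relies can be computed directly from the HSV definition of saturation, S = (max - min) / max per channel-maximum pixel. The sketch below assumes an RGB image with values in [0, 1]:

```python
import numpy as np

def average_saturation(rgb):
    """Mean HSV saturation of an RGB image with values in [0, 1].
    S = (max - min) / max per pixel, defined as 0 where max == 0."""
    rgb = rgb.astype(float)
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    sat = np.where(cmax > 0,
                   (cmax - cmin) / np.where(cmax > 0, cmax, 1.0),
                   0.0)
    return float(sat.mean())
```

Under the ASP, one would tune the scene's atmospheric scattering coefficient so that the restored scene's average saturation approaches the statistical value of 0.106.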
The corresponding quantitative comparisons for Figure 14 are shown in Figures 15-18. In addition to e, r̄, and FADE, we also measure and compare the hue fidelity indicator.
As shown in Figures 15 and 16, when the average saturation value rises from 0.01, the values of e and r̄ increase, reach a high level when the average saturation equals 0.106, and fluctuate only slightly afterwards. This indicates that more newly visible edges and a better visual enhancement are obtained when the average saturation reaches 0.106. This observation is consistent with Figure 17: the FADE value declines significantly and tends to be stable when the average saturation equals 0.106, which implies that the best haze removal is achieved at this value. However, as shown in Figure 18, the hue fidelity indicator stays at a low level and then increases dramatically once the average saturation exceeds 0.106, which means that color distortion inevitably appears. These observations on the four example experimental demonstrations are consistent with most of the remaining experimental results; thus, the ASP is physically valid and able to handle various types of hazy images.

Qualitative Comparison.
Considering that the five state-of-the-art dehazing methods are all able to generate satisfactory results on ordinary hazy images, a visual ranking of the methods is difficult to establish. Thus, we select six challenging images, including a homogeneous dense haze image (Figure 19(a)), a homogeneous image with large white or gray regions (Figure 20(a)), a homogeneous image with a sky region (Figure 21(a)), a homogeneous image with rich texture details (Figure 22(a)), an inhomogeneous long-range image (Figure 23(a)), and an inhomogeneous close-range image (Figure 24(a)). Figures 19-24 demonstrate the qualitative comparison of the five state-of-the-art dehazing methods with our method. The original hazy images are displayed in column (a); columns (b) to (g), from left to right, depict the dehazing results and the corresponding zoom-in patches of the methods of Tarel et al., Zhu et al., He et al., Ju et al., Meng et al., and our method, respectively.
As shown in Figure 19(b), Tarel et al.'s method is obviously unable to process dense hazy images. This is because Tarel et al.'s method uses a geometric criterion to decide whether an observed white region belongs to the haze or to a scene object, which is unreliable under dense haze. In Figures 20(b) and 22(b), Tarel et al.'s results are noticeably overenhanced. Zhu et al.'s method tends to be unreliable when processing inhomogeneous hazy images; obviously, this is because the atmospheric scattering model it adopts is invalid under an inhomogeneous atmosphere.
Due to the inherent problem of the dark channel prior, He et al.'s method cannot be applied to regions whose brightness is similar to the atmospheric light (the sky region in Figure 21(d) is significantly overenhanced). Moreover, similar to Zhu et al.'s method, He et al.'s method tends to be unreliable when processing inhomogeneous hazy images: as we can see from the zoom-in patches of Figures 23(d) and 24(d), the haze cannot be removed globally. Although Ju et al.'s method produces fairly good results, overexposure (see the zoom-in patches of Figures 22(e) and 24(e)) and color distortion (see the upper parts of Figures 22(e) and 23(e)) appear because its transmission estimation is parameter sensitive. As shown in Figure 20(e), Ju et al.'s method recovers the most scene objects but suffers from overenhancement. Meng et al.'s method is based on [21] and further improves the dehazing effect by adding a boundary constraint, but the ambiguity between the image color and the haze still exists, and the method therefore fails for the sky region in Figure 21(f). In addition, Meng et al.'s results significantly suffer from overall color distortion, as illustrated in Figures 21(f), 22(f), and 23(f). In contrast, our method removes most of the haze, unveils the scene objects well, maintains color fidelity, and eliminates overenhancement, with minimal halo artifacts. Note that, by taking advantage of the proposed improved atmospheric scattering model, our method is effective for both homogeneous and inhomogeneous hazy images.

Quantitative Comparison.
To quantitatively assess and rank the five state-of-the-art dehazing methods and our method, we compute four indicators (e, r̄, the hue fidelity indicator, and FADE) for the dehazing results of Figures 19-24 and list the corresponding results in Tables 2-5. For convenience, we indicate the top values in bold italics and the second-best values in bold.
According to Table 2, our results yield the top value for Figure 24, which is a typical inhomogeneous hazy image. Although our results achieve only the second top value for Figures 19, 20, 22, and 23, these numbers must be interpreted with care, because an increase in the number of recovered visible edges can also reflect noise amplification. For instance, Tarel et al.'s results have the highest e values in Figures 19-24, but the corresponding visual effects are either overenhanced or suffer from halo artifacts. Conversely, our results avoid most of these negative effects. As shown in Table 3, our dehazing results achieve the top values for both inhomogeneous hazy images (Figures 23 and 24) and the second top values for Figures 19 and 22, which verifies the validity of the proposed atmospheric scattering model and the effectiveness of our method. Although we obtain only the third top values for Figures 20 and 21, our results are more visually pleasing: although Ju et al.'s and Tarel et al.'s results achieve the top and second top values for Figures 20 and 21, overenhancement is evident in Figures 20(b) and 20(e), considerable haze remains in Figure 21(b), and the corners of the sky region in Figure 21(e) tend to be dark. As shown in Table 4, the ability of the dehazing methods to maintain color fidelity can be assessed through these results. He et al.'s results obtain the best values for all the dehazing results, our results achieve the second-best values for three hazy images (Figures 19, 21, and 23), and our results are very close to the second-best scores for Figures 20 and 22. Thus, our method generally maintains color fidelity for most of the challenging hazy images. However, this indicator only partially reveals the ability of a dehazing method and is not sensitive to overenhancement; for instance, He et al.'s results suffer from overenhancement (refer to Figure 21(d)), and Tarel et al.'s results for Figure 22 are overenhanced yet achieve the second-best score. Thus, exploring an integrated indicator that is consistent with human visual judgement remains necessary. Because the indicator FADE correlates well with human judgements of fog density [34], we compute FADE values for all the dehazing results and list them in Table 5. As shown in Table 5, our method outperforms the other methods for Figures 20-24 and has the second-best value for Figure 19. This finding verifies the outstanding dehazing effect of our method, consistent with our observations in the qualitative comparison. Importantly, it demonstrates the power of our method for dehazing inhomogeneous hazy images, an advantage we attribute to the proposed improved atmospheric scattering model and the corresponding dehazing method.
In Table 6, we provide a comparison of the computational times. Our method is significantly faster than most of the other methods and relatively close to Zhu et al.'s method in computation time. The high efficiency of our method is primarily attributed to the linear model that describes the haze density distribution, which simplifies the estimation procedure by using a scene-based method instead of a per-pixel or patch-based strategy.

Discussion and Conclusions
In this paper, we have proposed an improved atmospheric scattering model to overcome the inherent limitation of the current model. The improved model is physically valid and offers advantages in effectiveness and robustness. Based on the proposed model, we further improve the effectiveness of the corresponding single image dehazing method, since we abandon the assumption-based atmospheric scattering coefficient and instead estimate it via the proposed ASP.
In this method, by means of the proposed haze density distribution map and the scene segmentation, an inhomogeneous problem can be converted into a group of homogeneous ones. Then, we propose the ASP based on the statistics of extensive high-definition outdoor images and, for the first time, estimate the scene atmospheric scattering coefficient via the ASP. Next, as a by-product of the scene segmentation, we effectively increase the estimation accuracy of the atmospheric light by defining a scene weight assignment function.
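The scene-wise restoration summarized above can be sketched with the standard atmospheric scattering model I = J*t + A*(1 - t), applied with a per-scene scattering coefficient so that each scene is treated as locally homogeneous. The paper's improved model and its haze density inputs are not reproduced here, so the depth map argument and the function signature are illustrative assumptions:

```python
import numpy as np

def restore_scenewise(I, A, beta_per_scene, depth, labels, t_min=0.1):
    """Invert I = J*t + A*(1 - t) with a per-scene scattering
    coefficient: t = exp(-beta_s * d) for each scene label s."""
    beta = np.asarray(beta_per_scene)[labels]       # per-pixel beta (H x W)
    t = np.clip(np.exp(-beta * depth), t_min, 1.0)  # transmission, clamped
    J = (I - A) / t[..., None] + A                  # recovered scene radiance
    return np.clip(J, 0.0, 1.0)
```

Clamping the transmission away from zero (t_min) is a common safeguard against amplifying noise in dense-haze regions.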
Experimental results verify the robustness of the proposed improved atmospheric scattering and the effectiveness of the corresponding dehazing method.
Although we have overcome the inherent limitation of the current atmospheric scattering model and have identified a method for estimating the scene atmospheric scattering coefficient based on the proposed ASP, a problem remains unsolved.Despite extensive experimental assessment and comparison, finding the optimal solution for scene segmentation (the clustering problem) is a difficult mathematical task due to the variety of hazy images.To address this task, some machine learning methods can be considered, and we leave this problem for our future research.

Figure 1 :
Figure 1: Examples of hazy images with an inhomogeneous atmosphere. Haze density is approximately the same within a box but varies between boxes.

Figure 2: (a) Various types of inhomogeneous and homogeneous hazy images. (b) Corresponding haze density distribution maps. (c) Relevant refined haze density distribution maps.
Figure 3: Our method, He et al.'s method, and Namer et al.'s method.

Figure 4 :
Figure 4: A challenging hazy image for atmospheric light locating. The result of [21] is depicted in the green box, the result of [15] in the blue box, and our result in the red outlined areas.

Figure 5 :
Figure 5: Located potential atmospheric light using our method (in the red outlined areas).

Figure 7 :
Figure 7: (a) Average saturation probability distribution of hazy images. (b) Average saturation probability distribution of high-definition clear-day outdoor images.

Figure 8 :
Figure 8: Hazy images (homogeneous and inhomogeneous) and the corresponding scattering map.

Figure 9 :
Figure 9: Five example experimental demonstrations of the qualitative comparison using different clustering numbers k. (a) Hazy image. (b-j) From left to right: the recovered images using k from 1 to 9, respectively.

Figure 11 :
Figure 11: Values of r̄ using different clustering numbers k.

Figure 12: Values of FADE using different clustering numbers k.
Figure 13: Computational times using different clustering numbers k.

Figure 17: Values of FADE using different average saturation values.
Figure 18: Values of the hue fidelity indicator using different average saturation values.

Table 2 :
Value of the indicator e for the dehazing results of Figures 19(a)-24(a) using different methods.

Table 3 :
Value of the indicator r̄ for the dehazing results of Figures 19(a)-24(a) using different methods.

Table 4 :
Value of the hue fidelity indicator for the dehazing results of Figures 19(a)-24(a) using different methods.


Table 5 :
Value of the indicator FADE for the dehazing results of Figures 19(a)-24(a) using different methods.