Haar-Wavelet-Based Just Noticeable Distortion Model for Transparent Watermark

Watermark transparency is required mainly for copyright protection. Based on the characteristics of the human visual system, the just noticeable distortion (JND) can be used to verify the transparency requirement. More specifically, any watermark whose intensity is less than the JND values of an image can be added without degrading the visual quality. Deriving an appropriate JND model takes extensive experimentation. Motivated by the texture masking effect and the spatial masking effect, which are key factors of JND, Chou and Li (1995) proposed the well-known full-band JND model for transparent watermark applications. In this paper, we propose a novel JND model based on the discrete wavelet transform. Experimental results show that the performance of the proposed JND model is comparable to that of the full-band JND model, while requiring far less computation: it is about 6 times faster than the full-band JND model.


Introduction
Watermarking is a process that hides information in a host image for the purpose of copyright protection, integrity checking, or captioning [1-3]. In order to achieve watermark transparency, many commonly used techniques are based on the characteristics of the human visual system (HVS) [1-13]. Jayant et al. [14, 15] introduced a key concept known as the just noticeable distortion (JND), below which errors are not perceptible by human eyes. The JND of an image depends in general on the background luminance, the contrast of luminance, and the dominant spatial frequency. Deriving an appropriate JND model takes extensive experimentation.
Perceptual redundancies refer to the details of an image that are not perceivable by human eyes and therefore can be discarded without affecting the visual quality. As noted, human visual perception is sensitive to the contrast of luminance rather than to individual luminance values [16-18]. In addition, the visibility of stimuli can be reduced by nonuniformly quantizing the background luminance [18-20]. These phenomena, known as the texture masking effect and the spatial masking effect, are key factors that affect the JND of an image. Chou and Li proposed an effective model, called the full-band JND model, for transparent watermark applications [21].
The wavelet transform provides an efficient multiresolution representation with various desirable properties, such as subband decomposition with orientation selectivity and joint space-spatial-frequency localization. In the wavelet domain, the finer details of a signal are projected onto shorter basis functions with higher spatial resolution, while the coarser information is projected onto larger basis functions with higher spectral resolution. This matches the characteristics of the HVS. Many wavelet-transform-based algorithms have been proposed for various applications [22-34]. In this paper, we propose a wavelet-transform-based JND model for watermark applications, which has the advantage of saving a great deal of computation time. The remainder of the paper proceeds as follows. In Section 2, the full-band JND model is reviewed briefly. In Section 3, the discrete-wavelet-transform-(DWT-)based JND model is proposed. The modified DWT-based JND model and its evaluation are presented in Section 4. The conclusion can be found in Section 5.

Review of the Full-Band Just Noticeable Distortion (JND) Model
The full-band JND model [21] makes use of the properties of the HVS to measure the perceptual redundancies of an image. It produces the JND profile for image pixels as follows:

$$\mathrm{JND}(i,j) = \max\{\, f_1(bg(i,j), mg(i,j)),\; f_2(bg(i,j)) \,\},$$

where $f_1(\cdot,\cdot)$ and $f_2(\cdot)$ are the texture mask and the spatial mask, respectively, as mentioned in Section 1; $bg(i,j)$ is the average background luminance obtained by using the low-pass filter $B(w,z)$ given in Figure 1; $mg(i,j)$ is the maximum gradient obtained by using the set of high-pass filters $G_k(w,z)$, $k = 1, 2, 3, 4$, given in Figure 2; the functions $\alpha(\cdot)$ and $\beta(\cdot)$ used within $f_1$ depend on the average background luminance; and $p(i,j)$ is the luminance value at pixel position $(i,j)$.
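As an illustration, the per-pixel combination of the two masks can be sketched in Python as follows. This is a sketch only: the constants in `spatial_mask` and `texture_mask` follow the commonly cited Chou-Li parameterization and are assumptions here, since the paper's own equations (2.1)-(2.8) and filter coefficients are given in its figures.

```python
import math

def spatial_mask(bg):
    """Spatial (luminance) mask f2: visibility threshold as a function of the
    average background luminance bg in [0, 255]. The constants (17, 3, 3/128)
    follow the commonly cited Chou-Li parameterization (assumed here)."""
    if bg <= 127:
        return 17.0 * (1.0 - math.sqrt(bg / 127.0)) + 3.0
    return (3.0 / 128.0) * (bg - 127.0) + 3.0

def texture_mask(bg, mg):
    """Texture mask f1: grows with the maximum luminance gradient mg; the
    slope alpha and offset beta depend on the background luminance bg
    (alpha/beta forms assumed from the Chou-Li model)."""
    alpha = bg * 0.0001 + 0.115
    beta = 0.5 - bg * 0.01
    return mg * alpha + beta

def jnd(bg, mg):
    """Full-band JND at one pixel: the dominant (larger) of the two masks."""
    return max(texture_mask(bg, mg), spatial_mask(bg))
```

For example, in a flat dark region (`bg = 0`, `mg = 0`) the spatial mask dominates and the threshold is large, while in a busy region with a large gradient the texture mask takes over.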

Discrete-Wavelet-Transform-Based JND Model
In this section, we propose a novel JND model based on the discrete wavelet transform. It has the advantage of reducing the computational complexity significantly.

Discrete Wavelet Transform
The discrete wavelet transform (DWT) provides an efficient multiresolution analysis of signals. Specifically, any finite-energy signal $f(x)$ can be written as

$$f(x) = \sum_{n} S_J[n]\,\varphi_{J,n}(x) + \sum_{j \le J}\sum_{n} D_j[n]\,\psi_{j,n}(x),$$

where $j$ denotes the resolution index, with larger values meaning coarser resolutions; $n$ is the translation index; $\psi(x)$ is a mother wavelet; $\varphi(x)$ is the corresponding scaling function; $S_J[n] = \langle f, \varphi_{J,n} \rangle$ is the scaling coefficient representing the approximation information of $f(x)$ at the coarsest resolution $2^J$; $D_j[n] = \langle f, \psi_{j,n} \rangle$ is the wavelet coefficient representing the detail information of $f(x)$ at resolution $2^j$; and $\langle \cdot, \cdot \rangle$ denotes the inner product. Coefficients $S_j[n]$ and $D_j[n]$ can be obtained from the scaling coefficients $S_{j-1}[n]$ at the next finer resolution $2^{j-1}$ by using the 1-level DWT:

$$S_j[n] = \sum_{k} h[k-2n]\,S_{j-1}[k], \qquad D_j[n] = \sum_{k} g[k-2n]\,S_{j-1}[k],$$

where $h[n]$ and $g[n]$ are the corresponding low-pass filter and high-pass filter, respectively. Moreover, $S_{j-1}[n]$ can be reconstructed from $S_j[n]$ and $D_j[n]$ by using the inverse DWT:

$$S_{j-1}[n] = \sum_{k} \tilde{h}[n-2k]\,S_j[k] + \sum_{k} \tilde{g}[n-2k]\,D_j[k],$$

where $\tilde{h}[n] = h[-n]$ and $\tilde{g}[n] = g[-n]$. For image applications, the 2D DWT can be obtained by the tensor product of 1D DWTs. Among wavelets, the Haar wavelet [22] is the simplest and has been widely used in many applications. The low-pass filter and high-pass filter of the Haar wavelet are as follows:

$$h[n] = \left\{ \tfrac{1}{\sqrt{2}},\; \tfrac{1}{\sqrt{2}} \right\}, \qquad g[n] = \left\{ \tfrac{1}{\sqrt{2}},\; -\tfrac{1}{\sqrt{2}} \right\}. \tag{3.4}$$
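A minimal sketch of the 1-level Haar analysis and synthesis with the filters of (3.4), assuming a signal of even length (`haar_analysis` and `haar_synthesis` are illustrative names):

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_analysis(signal):
    """1-level Haar DWT: split a length-2N signal into N scaling (S) and
    N wavelet (D) coefficients via pairwise sums and differences."""
    assert len(signal) % 2 == 0
    S = [(signal[2 * n] + signal[2 * n + 1]) / SQRT2
         for n in range(len(signal) // 2)]
    D = [(signal[2 * n] - signal[2 * n + 1]) / SQRT2
         for n in range(len(signal) // 2)]
    return S, D

def haar_synthesis(S, D):
    """Inverse 1-level Haar DWT: perfect reconstruction of the finer signal."""
    out = []
    for s, d in zip(S, D):
        out.append((s + d) / SQRT2)
        out.append((s - d) / SQRT2)
    return out
```

Since $(s + d)/\sqrt{2}$ and $(s - d)/\sqrt{2}$ undo the pairwise sum and difference exactly, synthesis reproduces the input up to floating-point rounding.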
Figures 3 and 4 show the row decomposition and the column decomposition using the Haar wavelet, respectively. Note that in the 2D DWT the column decomposition may follow the row decomposition, or vice versa:

$$LL = \tfrac{A+B+C+D}{2}, \quad LH = \tfrac{A+B-C-D}{2}, \quad HL = \tfrac{A-B+C-D}{2}, \quad HH = \tfrac{A-B-C+D}{2}, \tag{3.5}$$
where A, B, C, and D are the pixel values of a 2×2 block, and LL, LH, HL, and HH denote the approximation and the detail information in the horizontal, vertical, and diagonal orientations, respectively, of the input image. Figure 5 shows the 1-level 2D DWT using the Haar wavelet.
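Under one common sign convention (an assumption here; the paper's (3.5) may order the LH and HL details differently), the 2×2 Haar block transform and its inverse can be sketched as:

```python
def haar_2x2_forward(A, B, C, D):
    """1-level 2D Haar DWT of the 2x2 block [[A, B], [C, D]], with the
    overall 1/2 normalization from applying the 1/sqrt(2) filters twice."""
    LL = (A + B + C + D) / 2.0   # approximation
    LH = (A + B - C - D) / 2.0   # detail between the two rows
    HL = (A - B + C - D) / 2.0   # detail between the two columns
    HH = (A - B - C + D) / 2.0   # diagonal detail
    return LL, LH, HL, HH

def haar_2x2_inverse(LL, LH, HL, HH):
    """Inverse of haar_2x2_forward: perfect reconstruction of A, B, C, D."""
    A = (LL + LH + HL + HH) / 2.0
    B = (LL + LH - HL - HH) / 2.0
    C = (LL - LH + HL - HH) / 2.0
    D = (LL - LH - HL + HH) / 2.0
    return A, B, C, D
```

Applying the forward and inverse transforms in sequence returns the original block, which is easy to verify on a small example.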
The LL subband of an image can be further decomposed into four subbands, LLLL, LLLH, LLHL, and LLHH, at the next coarser resolution; together with LH, HL, and HH, these form the 2-level DWT of the input image. Thus, higher-level DWTs can be obtained by decomposing the approximation subband recursively.

Proposed DWT-Based JND Model
As mentioned in Section 2, the full-band JND model [21] consists of (2.1)-(2.8), which is computationally demanding. In order to reduce the computational complexity, a novel JND model based on the Haar wavelet is proposed as follows:
where $f_{1,\mathrm{DWT}}(i,j)$ and $f_{2,\mathrm{DWT}}(i,j)$ are the proposed texture mask and spatial mask, respectively, based on the DWT, with

$$\alpha(i,j) = \widetilde{LLLL}(i,j) \times 0.0001 + 0.115. \tag{3.9}$$

Here $\widetilde{LLLL}$ denotes the modified LLLL subband, which replaces the low-pass filter $B(w,z)$ of the full-band JND model, and $D_i(i,j)$, given by (3.13), replaces the maximum gradient $mg(i,j)$.
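As a sketch of the idea, a two-level Haar approximation can serve as the cheap background-luminance estimate. The plain 2×2 averaging below keeps values on the original luminance scale rather than carrying the Haar 1/2 normalization; it is an illustrative stand-in, not the paper's exact modified LLLL.

```python
def haar_ll(img):
    """One level of the Haar LL (approximation) subband: average each
    non-overlapping 2x2 block of a list-of-lists image. Averaging (divide
    by 4) keeps the result on the original luminance scale, whereas the
    normalized Haar LL would divide the block sum by 2."""
    h, w = len(img), len(img[0])
    return [[(img[2 * i][2 * j] + img[2 * i][2 * j + 1] +
              img[2 * i + 1][2 * j] + img[2 * i + 1][2 * j + 1]) / 4.0
             for j in range(w // 2)] for i in range(h // 2)]

def llll_background(img):
    """Two-level approximation (LLLL): each value summarizes a 4x4 region,
    serving as a cheap substitute for the 5x5 low-pass background bg."""
    return haar_ll(haar_ll(img))
```

Each LLLL value thus replaces the sliding 5×5 low-pass filtering with two passes of block averaging, which is where the computational saving comes from.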

Modification of the DWT-Based JND Model
In this section, we introduce an adjustable parameter to modify the DWT-based JND model so that the computation time is reduced significantly while the performance remains comparable to that of the benchmark full-band JND model. The test images, namely Lena, Cameraman, Baboon, Board, and Peppers, are shown in the first row of Figure 12.

Evaluation of JND Models
Figure 6 shows the distortion-tolerant model, which can be used to evaluate JND models. It takes the JND values as noise, adds them to the original image, and computes the peak signal-to-noise ratio (PSNR), defined as

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\mathrm{MSE}},$$

where MSE is the mean squared error,

$$\mathrm{MSE} = \frac{1}{m \times n} \sum_{i=1}^{m} \sum_{j=1}^{n} \bigl( p(i,j) - \hat{p}(i,j) \bigr)^2,$$

for an image of size $m \times n$, with $p(i,j)$ and $\hat{p}(i,j)$ the original and noisy luminance values. As shown in Table 1, the proposed DWT-based JND model differs somewhat from the benchmark full-band JND model in terms of the PSNR values.
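The two measures follow directly from their definitions; a short sketch for 8-bit images (function names are illustrative):

```python
import math

def mse(orig, noisy):
    """Mean squared error between two m x n images (lists of lists)."""
    m, n = len(orig), len(orig[0])
    total = sum((orig[i][j] - noisy[i][j]) ** 2
                for i in range(m) for j in range(n))
    return total / (m * n)

def psnr(orig, noisy, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(orig, noisy)
    return float('inf') if e == 0 else 10.0 * math.log10(peak * peak / e)
```

A larger PSNR after adding the JND values as noise means the model injected less energy, so comparable PSNR between two models indicates comparable distortion tolerance.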

Modified DWT-Based JND Model
Based on (2.1)-(2.3) and (3.6)-(3.8), one can examine the influences of the dominant mask, the texture mask, and the spatial mask for the full-band JND model and the DWT-based JND model, respectively. Their respective MSE values are shown in Table 2. As one can see, the proposed texture mask of the DWT-based JND model is less significant than that of the full-band JND model. Thus, we propose an adjustable parameter, K, to modify (3.17) and (3.18), yielding $SL'_R$ and $SL'_L$, which replace $SL_R$ and $SL_L$, respectively. Figures 7, 8, 9, 10, and 11 show the MSE values obtained by modifying the DWT-based texture mask with various K in (4.3). In this paper, the adjustable parameter K is set to 4 after extensive simulations. The performance of the modified DWT-based JND model with K = 4 is comparable to that of the full-band JND model in terms of the PSNR and MSE values, as shown in Tables 1 and 2, respectively.
Figure 12 shows the noisy images obtained by adding the JND values to the original images.

Computational Complexity of the Proposed JND Models
In the full-band JND model, the computation of $bg(i,j)$ requires 9 multiplications per pixel, the computation of $mg(i,j)$ requires 28 multiplications per pixel, and (2.2)-(2.5) require 6 multiplications per pixel. Thus, for an $n \times n$ image, it requires $43n^2$ multiplications. In the proposed DWT-based JND model, the computations of $LH$, $HL$, $\widetilde{LLLL}$, $SL_R$, and $SL_L$ require $\tfrac{1}{4}n^2$, $\tfrac{1}{4}n^2$, $\tfrac{5}{16}n^2$, $\tfrac{1}{4}n^2$, and $\tfrac{1}{4}n^2$ multiplications, respectively, for an $n \times n$ image, and (3.7)-(3.10) also require 6 multiplications per pixel. Thus, for an $n \times n$ image, it requires $7.3125n^2$ multiplications. In the modified DWT-based JND model, since the computations of $SL'_R$ and $SL'_L$ require no multiplications, only $6.8125n^2$ multiplications are needed for an $n \times n$ image. Figure 13 shows a log plot of the number of multiplications required by the three JND models versus image size. As a result, the DWT-based JND model is about 6 times faster than the full-band JND model, which is its main advantage.
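The counts above can be tabulated in a few lines (a sketch; the per-subband fractions are taken directly from the text):

```python
def full_band_mults(n):
    """Full-band model: 9 (bg) + 28 (mg) + 6 (masks) = 43 multiplications
    per pixel of an n x n image."""
    return 43.0 * n * n

def dwt_mults(n):
    """DWT-based model: LH, HL, LLLL, SL_R, SL_L subbands cost 1/4, 1/4,
    5/16, 1/4, and 1/4 multiplications per pixel, plus 6 for the masks."""
    return (0.25 + 0.25 + 5.0 / 16.0 + 0.25 + 0.25 + 6.0) * n * n

def modified_dwt_mults(n):
    """Modified model: the K-scaled SL_R and SL_L need no multiplications,
    saving 1/4 + 1/4 = 1/2 multiplication per pixel."""
    return dwt_mults(n) - 0.5 * n * n
```

For any $n$, the ratio `full_band_mults(n) / dwt_mults(n)` equals $43 / 7.3125 \approx 5.88$, i.e., roughly the 6x speedup reported.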

Conclusion
In this paper, an efficient DWT-based JND model is presented. It saves a great deal of computation time while achieving performance comparable to that of the benchmark full-band JND model. More specifically, the computational complexity of the proposed DWT-based JND model is only about one sixth of that of the full-band JND model. As a result, it is suitable for real-time applications.

Figure 6: Distortion-tolerant evaluation model for the proposed JND model.

Figure 7: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of K for the Lena image; dashed line: the MSE value obtained using the texture mask of the full-band JND model.

Figure 8: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of K for the Cameraman image; dashed line: the MSE value obtained using the texture mask of the full-band JND model.

Figure 9: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of K for the Baboon image; dashed line: the MSE value obtained using the texture mask of the full-band JND model.

Figure 10: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of K for the Board image; dashed line: the MSE value obtained using the texture mask of the full-band JND model.

Figure 11: MSE values obtained by modifying the DWT-based texture mask using (4.3) with various values of K for the Peppers image; dashed line: the MSE value obtained using the texture mask of the full-band JND model.
The noisy images in Figure 12 were obtained using the full-band JND model (Figure 12(b)), the DWT-based JND model (Figure 12(c)), and the modified DWT-based JND model (Figure 12(d)). It is noted that the images in the second and fourth rows are almost indistinguishable from the original images. As a result, the modified DWT-based JND model is visually comparable to the full-band JND model.

Figure 12: (a) The original images, namely Lena, Cameraman, Baboon, Board, and Peppers; (b), (c), and (d) the noisy images obtained by adding the full-band JND values, the DWT-based JND values, and the modified DWT-based JND values, respectively.

Figure 13: Log plot of the number of multiplications required by the three JND models versus image size.

Table 1: PSNR comparisons of the benchmark full-band JND model, the proposed DWT-based JND model, and the modified DWT-based JND model.

Table 2: MSE values due to the dominant mask (case 1), the spatial mask (case 2), and the texture mask (case 3) using the full-band JND model, the DWT-based JND model, and the modified DWT-based JND model.