An Adaptive Visible Watermark Embedding Method based on Region Selection

Aiming at the problem that the robustness, visibility, and transparency of existing visible watermarking technologies are difficult to balance, this paper proposes an adaptive embedding method for visible watermarking. Firstly, the salient region of the host image is detected based on superpixel detection. Secondly, a flat region with relatively low complexity in the nonsalient region of the host image is selected as the embedding region. Then, the watermarking strength is adaptively calculated by considering the gray distribution and texture complexity of the embedding region. Finally, the visible watermark image is adaptively embedded into the host image with slight adjustment by the just noticeable difference (JND) coefficient. The experimental results show that our proposed method improves the robustness of visible watermarking technology and greatly reduces the risk of malicious removal of the visible watermark image. Meanwhile, a good balance between the visibility and transparency of the visible watermark image is achieved, so the method offers high security and an ideal visual effect.


Introduction
Visible watermarking technology has important applications in many fields, such as content protection [1,2], copyright identification [3], document security [4,5], and advertising [6]. In the past two decades, a large number of visible watermarking algorithms with different features have been proposed. These technologies can be divided into three categories [7]: permanent [8][9][10][11], removable [12][13][14], and reversible visible watermarks [15][16][17][18][19][20]. In permanent visible watermarking, the embedded watermark image is permanently retained in the watermarked image, and even the owner cannot completely erase it. In removable and reversible visible watermarking techniques, however, the embedded visible watermark image can be removed by an authorized person using the correct secret key. In addition, reversible technology can completely remove the visible watermark image and recover the original host image data without loss. Regardless of the type, visible watermarking technology must meet three requirements [21]: robustness, visibility, and transparency. Robustness means that it is difficult for an unauthorized person to remove the visible watermark image, maliciously or unintentionally, through conventional image processing methods, so the watermark can resist various destructive attacks. Visibility means that the visible watermark image remains clearly visible after embedding into the host image, which allows easy identification of the ownership of the host image. Finally, transparency means that the visible watermark image is minimally obtrusive, which allows the observer to easily identify the details of the host image. In existing algorithms, these three requirements are contradictory; nevertheless, it is necessary to achieve a balance among them.
Nowadays, many scholars have studied how to effectively remove the visible watermark image from the watermarked image [22][23][24][25][26][27][28][29][30]. Some visible watermark removal algorithms need to know the accurate position of the visible watermark image in advance, and the corresponding removal strategy is designed by combining the features of the visible watermark image itself [22,23]. Another approach is based on traditional image inpainting [24][25][26], which mainly uses the surrounding information to fill the areas corresponding to the black pixels in the watermarked image. Therefore, the visible watermark image can be successfully removed from the watermarked image only if the specific location of the missing pixels is known. Pei and Zeng [27] propose to separate the host image from the watermarked image by using independent component analysis (ICA). However, such a method requires users to mark the watermark area manually, which is time-consuming. Clearly, it cannot process large watermarked areas or automatically remove visible watermark images in batches. To achieve batch removal of visible watermark images, Dekel et al. [28] propose a method to automatically estimate the watermark images and restore the host image with high precision. However, this method has a basic premise: only when all visible watermark images are embedded in many different host images in the same way can an effective model of visible watermark removal be established. When the visible watermark images are added at different positions on different images from different angles, the existing algorithms cannot work well. Therefore, to improve the robustness of the visible watermarking scheme against batch processing attacks, it is necessary to study an adaptive selection strategy for the embedding region of the visible watermark.
In addition, the visibility and transparency of the visible watermark image should also be comprehensively considered to obtain a more natural image fusion effect. The rest of the paper is organized as follows. The related works are introduced in Section 2. Section 3 presents the embedding scheme of the visible watermark images in detail. Section 4 shows the experimental results as well as comparisons with prior works. Finally, Section 5 concludes this paper.

Related Works
When the embedding position of the visible watermark image changes adaptively with the content of the host image, the existing batch processing methods of visible watermark removal cannot accurately locate the watermarked area and cannot effectively erase the visible watermark image. For this purpose, Qi et al. [29] propose an improved visible watermark embedding scheme based on the human visual system (HVS) and region of interest (ROI) selection. This method often finds a relatively smooth region according to the complexity of the host image in the high-tone or low-tone image areas, but these regions usually contain the key objects of the host image. Therefore, the visible watermark image frequently occupies the salient area and occludes the important objects of the host image, which usually produces undesirable visual effects.
In terms of the visibility and transparency of visible watermark images, some scholars have proposed adaptive embedding methods [30][31][32][33]. In [30], the brightness and texture features of the host image in the DCT frequency domain are extracted to realize dynamic embedding of the visible watermark. In [31], the visible watermark is dynamically embedded according to JND coefficients in the DCT frequency domain. However, the neighbourhood features of the watermarked area are not considered in the embedding process. The watermark strength also needs to change adaptively with different image features. In [32], the authors point out that it is necessary to dynamically adjust the embedding strength according to the brightness, contrast, texture complexity, and other related features of the host image. In [33], the visual saliency matrix of the host image is calculated first, and the embedding strength is proportional to the saliency of the watermarked region. However, the computational complexity of this method is relatively high.

Proposed Method
In most visual scenes, the human visual system can perceive every region of an image, but the important regions of interest account for only a small part, which is called the salient area [34]. Therefore, the visible watermark image should not obscure the salient objects in the host image; otherwise, it will affect the value of the host image itself. In this paper, the salient areas are detected first. Then, the most suitable region in the nonsalient areas is selected for visible watermark embedding. Finally, the watermark strength is adaptively calculated for the visible watermarking.

Salient Region Detection.
Firstly, the image is segmented into a series of superpixels. Then, all corner points of the image are extracted, and the image is divided into inner and outer parts according to the corner point distribution. Next, the average score of all pixels in each superpixel is calculated and regarded as the score of the superpixel. Finally, all superpixel scores are normalized to the range [0, 255] to form a complete visual saliency map. The specific process is as follows.

Superpixel Segmentation.
Since there is a lot of redundant information in an image, the k-means clustering method is used to group pixels with close spatial distances and similar colours into superpixel regions. The average colour value in the CIELAB space of all pixels in a superpixel is recorded as the colour of that superpixel and marked as C_i. Suppose that an image is segmented into n superpixels R_1, R_2, ..., R_n. For any two given superpixels R_i and R_j, there is usually more than one path connecting them, and the length of each path is the sum of the distances between every two adjacent superpixels along it. The shortest path from R_i to R_j, for example, path = (R_i, R_{l+1}, R_{l+2}, ..., R_{l+k}, R_j), has its length recorded as LM_ij.

Image Region Segmentation.
Since there are many contour lines in the image, their different distributions form inflection points, intersection points, and other feature points, which are uniformly called corner points. After all corner points are recognized, a minimum polygon S containing them can be constructed. Superpixels located inside the polygon S are given larger weights, while those outside S are given smaller weights.
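As an illustration of the shortest-path length LM_ij described above, the following sketch runs Dijkstra's algorithm over a superpixel adjacency graph. The function name and data layout are ours, and the paper does not specify its colour distance metric, so plain Euclidean distance between CIELAB colours is assumed.

```python
import heapq
import math

def shortest_path_length(colors, adjacency, i, j):
    """Dijkstra over the superpixel adjacency graph.

    colors:    list of CIELAB colour tuples, one per superpixel (the C_i above).
    adjacency: dict mapping a superpixel index to its adjacent superpixels.
    Edge weight = Euclidean distance between the two superpixel colours,
    so the returned value plays the role of LM_ij.
    """
    dist = {i: 0.0}
    heap = [(0.0, i)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == j:
            return d
        if d > dist.get(u, math.inf):
            continue  # stale heap entry
        for v in adjacency[u]:
            nd = d + math.dist(colors[u], colors[v])  # colour distance as edge weight
            if nd < dist.get(v, math.inf):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf  # i and j are not connected
```

For a chain of three superpixels, the result is simply the sum of the two adjacent colour distances, matching the path-length definition above.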

Calculation of the Saliency Score of Each Superpixel.
For a given superpixel R i in the image, the saliency score is calculated as follows.
Step 1. Calculate the scaling factor N_i of R_i as shown in the corresponding equation, where LM_ij is the length of the shortest path from superpixel R_i to R_j.
Step 2. The initial score of superpixel R_i is then calculated, where d_i represents the sum of the squares of the distances between R_i and all other superpixels.
Step 3. The final visual saliency score of the superpixel is calculated as shown in equation (7).
The saliency scores of all superpixels are obtained by equation (7), and then the scores of all pixels can also be obtained. All scores are normalized to the range [0, 255] to obtain the salient area of the original image. For example, for the host image shown in Figure 1(a), the corresponding salient areas are extracted as shown in Figure 1(b). The saliency map is further binarized to get the final salient areas shown in Figure 1(c), in which the white regions are the visually salient areas and the black regions are the visually nonsalient areas.
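The normalization and binarization steps above can be sketched as follows. This is a minimal illustration; the function names and the binarization threshold of 128 are our assumptions, since the paper does not state how the saliency map is thresholded.

```python
def normalize_scores(scores):
    """Min-max normalise superpixel saliency scores to the range [0, 255]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:                      # constant map: no saliency contrast at all
        return [0] * len(scores)
    return [round(255 * (s - lo) / (hi - lo)) for s in scores]

def binarize(scores, threshold=128):
    """Binarise a normalised saliency map: 255 = salient, 0 = nonsalient."""
    return [255 if s >= threshold else 0 for s in scores]
```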

Adaptive Selection of Embedding Region.
Next, the nonsalient regions of the host image are evenly segmented into subblocks B 1 , B 2 , . . . , B n according to the size of the visible watermark image. Based on the texture complexity and the gray distribution features of each image block, the relatively smooth image block is selected as the watermark embedding region.

Calculation of Image Texture Complexity.
The edge density of an image is an important factor affecting its texture complexity. Therefore, the proposed method determines texture complexity by calculating the density of image boundaries. The specific calculation steps are as follows: Step 1: obtain the gradient feature map of the host image.
To eliminate the interference of noise, H_G is obtained by Gaussian filtering of the host image H; then the gradient feature map G is obtained by applying the Laplacian operator to H_G. Step 2: obtain image boundary features. Use the Otsu method to binarize G into an image G_B with only two gray levels, 0 and 255, and then perform a morphological closing operation on G_B to obtain M = C(G_B), where C(·) denotes the morphological closing operation. At this point, a large number of boundary features of the host image are preserved in M.
The boundary density of the subblock B_i in the host image H, that is, the texture complexity ρ_i, is calculated by equation (10), where L_i is the number of pixels of B_i located on the boundary extracted in M, S_i is the size of B_i, and 0 < τ < 1. In practical applications, τ = 0.01.
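Equation (10) is missing from the extracted text; assuming it is essentially the ratio L_i / S_i, with τ acting as a lower bound for perfectly flat blocks (the extracted text does not show τ's exact role), the boundary-density computation can be sketched as:

```python
def texture_complexity(block_mask, tau=0.01):
    """Boundary density rho_i of one subblock B_i.

    block_mask: 2-D list of booleans over the subblock, True where the pixel
    lies on a boundary extracted in M.  L_i is the boundary pixel count and
    S_i the total pixel count; tau (paper: 0.01) keeps the complexity strictly
    positive for flat blocks -- its exact role is our assumption.
    """
    L_i = sum(1 for row in block_mask for p in row if p)
    S_i = sum(len(row) for row in block_mask)
    return max(L_i / S_i, tau)
```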

The Distribution Features of Image Gray Value.
The visible watermark embedding can be performed by modifying the pixel values of the host image in the spatial domain. Generally speaking, in dark-tone areas, with gray values in the interval [0, 127], the pixel value should be appropriately increased; in bright-tone areas, with gray values in [128, 255], the pixel value needs to be appropriately reduced. The visibility of the visible watermark image can be ensured when the magnitude of the modification is large. To ensure the overall visual effect of the visible watermark, our proposed method tries to select pixel values in the middle-tone interval [127 − σ, 128 + σ], where σ can be set to 64. Next, we discuss the calculation of the gray value distribution feature of the subblock B_i. The average gray level α_i of all pixels in the subblock B_i is calculated as α_i = (1/S_i) Σ_{j∈B_i} P_j, where P_j is the pixel value of the host image at point j. The gray value distribution feature c_i of the subblock B_i is then calculated from α_i as in equation (12).

Smooth Area Selection.
After obtaining the boundary density ρ_i and the gray value distribution feature c_i of each subblock B_i using equations (10) and (12), the feature value μ_i of B_i can be obtained by combining the two features. The feature value μ_i of each subblock is calculated in turn, and all feature values are sorted in ascending order. The subblock corresponding to the smallest feature value μ_i is selected as the final visible watermark embedding area. In this scheme, reducing ρ_i and increasing c_i both reduce μ_i. When ρ_i is smaller, the visible watermark successfully avoids areas with high texture complexity; when c_i is larger, the selected embedding area stays away from the high-tone and low-tone areas of the host image, so the watermarked image achieves a relatively ideal visual effect.
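Since the combining equation for μ_i is missing from the extracted text, the selection step can only be sketched under an assumed combination rule. Here we use the ratio ρ_i / c_i, which decreases as ρ_i shrinks and c_i grows, matching the behaviour described above; this ratio is our assumption, not the paper's exact formula.

```python
def select_embedding_block(rhos, cs, eps=1e-6):
    """Pick the index of the subblock with the smallest combined feature mu_i.

    rhos: texture complexities rho_i; cs: gray distribution features c_i.
    mu_i = rho_i / (c_i + eps) is an assumed form; eps guards against c_i = 0.
    """
    mus = [rho / (c + eps) for rho, c in zip(rhos, cs)]
    return min(range(len(mus)), key=mus.__getitem__)
```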

Adaptive Visible Watermark Embedding.
In this paper, the visible watermark image is embedded into the host image according to equation (14), where I(x, y) is the pixel value of the host image, I_W(x, y) is the watermarked pixel value, and c is the adaptive watermark strength. To adaptively calculate c from the image features of the neighbourhood around the watermarked pixel, each pixel of the visible watermark image is embedded into a corresponding image block B of 2 × 2 pixels in the host image; that is, each pixel in B is adjusted with the same watermark strength c in equation (14). To ensure the visibility of the visible watermark image, the JND model is used to adjust the gray value of each pixel after watermark embedding. The overall watermark embedding process is shown in Figure 2.
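Equation (14) itself is not visible in the extracted text; a common form of visible watermark embedding consistent with the description is a strength-weighted blend of the host and watermark pixel values, sketched below. The function name and the clamping to [0, 255] are our additions.

```python
def embed_pixel(host, mark, strength):
    """One plausible form of the embedding equation (14), which is missing
    from the extracted text: a convex blend of host and watermark values,
    weighted by the adaptive strength c, clamped to valid gray levels."""
    value = (1.0 - strength) * host + strength * mark
    return max(0, min(255, round(value)))
```

In practice the same `strength` would be applied to all four pixels of the 2 × 2 block B, as the paragraph above describes.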

Adaptive Calculation of Watermark Strength.
The embedding strength of the visible watermark adaptively changes with the image complexity of the embedding region. For a pixel W_p in the visible watermark image W, the corresponding watermark embedding area B_p in the host image consists of four pixels P_1, P_2, P_3, and P_4. The specific calculation of the watermark strength is as follows.
Step 1. Calculate image texture complexity. B_p and the 8 embedding regions in its neighbourhood constitute a region S_p of 6 × 6 pixels. Through equation (10), the boundary density ρ_s of S_p can be obtained; the texture complexity α_p of region S_p is then expressed in terms of ρ_s.
Step 2. Calculate the change amplitude of the pixel value in the embedding region. The intensity change of pixel values in the embedding area, β_p, can be measured by the gradient of pixel values in the region. In the embedding region B_p, the four pixels are sorted in ascending order by gray value, and the average gray value a_1 of the two smaller pixels and the average gray value a_2 of the two larger pixels are calculated, respectively. β_p is then computed as in equation (16), where τ > 0. As equation (16) shows, in a region where the pixel value changes smoothly, a_1 is close to a_2, and tan⁻¹(a_1/a_2) is approximately π/4, which corresponds to a larger β_p. Similarly, when the pixel value in the region changes sharply, the difference between a_1 and a_2 is large, the denominator of equation (16) increases accordingly, and β_p decreases.
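The computation of β_p for one 2 × 2 block can be sketched as below. The placement of τ inside the arctangent is our assumption, inferred from the statements that τ > 0 and that the denominator grows when a_2 dominates; the extracted equation (16) does not show it explicitly.

```python
import math

def pixel_change_amplitude(block, tau=1e-3):
    """beta_p for a 2x2 embedding block B_p (equation (16), reconstructed).

    a1 = mean of the two smaller pixel values, a2 = mean of the two larger.
    For a flat block a1 ~ a2 and beta_p ~ pi/4; a sharp change makes the
    denominator a2 large and beta_p small.  tau > 0 avoids division by zero.
    """
    p = sorted(block)                 # four pixel values, ascending
    a1 = (p[0] + p[1]) / 2.0
    a2 = (p[2] + p[3]) / 2.0
    return math.atan((a1 + tau) / (a2 + tau))
```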
Step 3. Calculate the final watermark embedding strength.
The embedding strength of the visible watermark is calculated by equation (17). As equation (17) shows, in a flat area of the host image, α_p is small and β_p is large, so the watermark embedding strength c is relatively small. On the contrary, in a region of the host image with complex texture, α_p is large and β_p is small, so c is relatively large. All c values in the embedding areas are calculated and then normalized to the interval [n_1, n_2].

Visible Watermark Image Embedding.
For the host image I in RGB colour space, the specific steps of visible watermark image embedding are as follows: Step 1. The host image is transformed from RGB colour space to YUV colour space, and the visible watermark is embedded into the luminance component Y.
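Step 1 relies on an RGB-to-YUV conversion whose exact variant the paper does not state; a common choice is the BT.601 luminance weighting, sketched here for a single pixel:

```python
def rgb_to_y(r, g, b):
    """Luminance (Y) of an RGB pixel using the BT.601 weights.
    The paper does not specify which RGB-to-YUV conversion it uses, so this
    standard weighting is an assumption."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Only the Y plane would be modified by the watermark; the U and V chroma planes are carried through unchanged and recombined after embedding.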
Step 2. For the Y component, the JND masking matrix J is calculated.
Step 3. For each pixel W_p in W, the embedding process in the region B_p of the host image is as follows: (1) When W_p = 0, all pixel values of B_p remain unchanged. (2) When W_p = 1, each pixel of B_p is modified according to equation (14) with the adaptive strength c.
Step 4. Adjust the watermarked pixel value.
For any P_i in B_p, the change range D_i of the pixel value before and after watermark embedding is calculated, where ω ≥ 1 is a fixed constant and J(P_i) is the value of the masking matrix J at the point P_i. (1) If D_i < ω × J(P_i), the change of the watermarked pixel value obtained by equation (14) is too small, and the pixel value after watermark embedding is recalculated to obtain I_W′(P_i). (2) If D_i ≥ ω × J(P_i), then I_W(P_i) = I_W′(P_i). So far, the embedding process of the visible watermark is finished.
For example, the Logo image in Figure 3(a) is embedded into the host image in Figure 1(a) to get the watermarked image shown in Figure 3.

Comparison of the Embedding Region Selection.
In [33], the visual saliency matrix of the host image is calculated using the ITTI visual model, and the region with the lowest visual saliency is selected as the embedding region for the visible watermark. In [29], the average gray value of each block image is calculated, and a region that meets the following conditions is selected as the embedding area: (1) the average gray value of the image block differs greatly from the middle tone 127; (2) as many pixels as possible in the area have gray values not equal to the average gray value. In this experiment, 24 images from the Kodak image set are selected as host images. First, all host images are scaled to 800 × 800 pixels, and the binary visible watermark image is scaled to 120 × 120 pixels.

Figure 2: The diagram of the visible watermark embedding process.

Security and Communication Networks
Accordingly, the size of the watermarked region is set to 240 × 240 pixels. The adaptive selection results of the visible watermark embedding region obtained by the methods in [29,33] and our proposed method are shown in Figure 4. For the image kodim14, both the method in [33] and our proposed method bypass the key objects, such as the boat and the people on it, whereas the method in [29] does not avoid the boat. Moreover, the texture complexity of the region selected in [33] is higher and its gray level is near the middle tone, which inevitably disturbs visual perception. For the image kodim17, the method in [33] detects the regions occupied by the statue's face and the ball as salient areas, and its watermarked area overlaps the region occupied by the clothes. The method in [29] and our proposed method bypass the statue, and the location of the region selected by our method is relatively more ideal. As for the image kodim22, the most significant area is the one occupied by the house. In [29], the watermarked region conflicts with the area of the house, while the method in [33] and our method both avoid the house successfully. In contrast, the region selected by our proposed method is more suitable: its background is relatively flat, and the visual effect of the watermarked image is more natural. The method in [33], on the contrary, selects the area occupied by grassland for the visible watermark, where the background texture impairs the visibility of the visible watermark image.
In addition, as mentioned before, whether the removal is based on ICA or on a traditional image inpainting method, the location information of the embedding region is needed to remove the visible watermark image from the watermarked image. In the proposed method, the embedding region of the visible watermark is adaptively selected, which can resist watermark removal attacks to a certain extent, especially the batch removal of visible watermark images. Therefore, the proposed method is robust to visible watermark removal attacks.
To sum up, in [33], the ITTI visual model is used to detect the region of interest in the host image, but the gray distribution and texture complexity of the host image are not considered when selecting the watermark embedding region. Therefore, when the texture details of the host image are complex or the salient areas are widely distributed, the accuracy of the region of interest detected by the model is low. The method in [29] uses only the gray distribution features of the host image and does not consider its texture complexity, so the selected area usually obscures important objects. On the basis of salient region detection, and combining the gray distribution and texture complexity of the background image, our proposed method can adaptively select the embedding area for the visible watermark, which effectively overcomes the defects of the existing methods.

Comparison of Visible Watermark Embedding.
In [29], the visual effect factor (VEF) of the HVS is used to adaptively modify the pixel values of the host image to produce a better fusion effect between the visible watermark image and the host image. In [35], the visible watermark is embedded using dynamic pixel value mapping (DPVM). The visual effect of the visible watermark embedded by our proposed method is compared with the methods in [29,35]. The subjective effects of the visible watermarked images are shown in Figure 5, and the objective visual effect index parameters are as follows.

Peak Signal-to-Noise Ratio (PSNR).
PSNR is used to measure the distortion or noise level of an image and is often used to objectively evaluate the degree of image degradation. The greater the PSNR value between two images, the lower the degradation, that is, the higher the image quality. Given the original image H of M × N pixels and the watermarked image H′, the peak signal-to-noise ratio is defined as

PSNR = 10 log_10(255² / MSE),

where MSE is the mean square error between the two images:

MSE = (1 / (M × N)) Σ_{x,y} (H(x, y) − H′(x, y))².
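The PSNR and MSE definitions above can be sketched directly; this is a plain-Python version for grayscale images stored as 2-D lists, and the function names are ours:

```python
import math

def mse(img_a, img_b):
    """Mean square error between two equal-size grayscale images (2-D lists)."""
    total = sum((a - b) ** 2
                for row_a, row_b in zip(img_a, img_b)
                for a, b in zip(row_a, row_b))
    n = sum(len(row) for row in img_a)  # M x N pixels in total
    return total / n

def psnr(img_a, img_b, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical images."""
    e = mse(img_a, img_b)
    return math.inf if e == 0 else 10.0 * math.log10(peak * peak / e)
```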
In the MSPE metric, H′(x, y) is the gray value of point (x, y) in the watermarked image, H(x, y) is the gray value of point (x, y) in the host image, and JND_H(x, y) is the value of the JND matrix of the host image at (x, y); δ is calculated as in equation (27). The abruptness of the entire image is represented by the average of the MSPE values of all pixels. It can be seen from equation (27) that the larger the MSPE value of the watermarked image, the more obvious the abruptness of the embedded visible watermark image, and the worse the visual effect of the watermarked image. The visual quality evaluation of the watermarked images shown in Figure 5 is given in Table 1.
It can be seen from Figure 5 and Table 1 that the visibility and transparency of the visible watermark image are contradictory. The method in [35] calculates the visible watermarking strength using the overall features of the watermarked region but ignores the feature information in the local neighbourhood. Its watermark image has the strongest visibility, as shown in column (b) of Figure 5, but also the worst abruptness and the maximum MSPE value. Due to the strong visibility, that watermarked image has the lowest similarity with the original host image, and the corresponding SSIM value is the minimum. On the basis of ensuring the visibility of the visible watermark image, and to further increase the transparency of the watermarked image, the texture and colour features around each pixel to be modified are considered both in [29] and in our proposed method. However, compared with [29], the proposed method is based on the continuous gray gradient of the host image, and the JND value is introduced to ensure the transparency of the watermark image. In addition, compared with the original host image, the watermarked image generated by our proposed method has the least distortion, so it has the highest similarity.

Conclusion
To improve the robustness, visibility, and transparency of visible watermarking technology, this paper proposes an adaptive embedding method for visible watermarking. Firstly, the salient region of the host image is detected based on superpixel detection. On the one hand, the visible watermark image avoids the key objects in the host image and does not damage the value of the host image itself. On the other hand, the embedding region of the visible watermark image changes with the content of the host image, which increases the difficulty of malicious removal of the visible watermark and, in particular, can effectively resist batch automatic removal attacks. Then, in the nonsalient area of the host image, a flat area with relatively low complexity is selected as the embedding region, because details in regions of the host image with high texture complexity would affect the visibility of the visible watermark image. Finally, considering the JND coefficient, gray distribution, and texture complexity of the embedding region, the watermark embedding strength is calculated adaptively, and ideal transparency of the visible watermark image is obtained. In a word, the proposed method achieves a good balance among robustness, visibility, and transparency. However, how to further improve the security of visible watermarking algorithms is worthy of further research.
Data Availability
The software code and data used to support the findings of this study are available from the corresponding author upon request.