Color Enhancement in Art Works Based on Image Processing Technology of Contourlet Domain

With the rapid development of network technology and image processing technology, people can obtain data in many ways. Although the emergence of digital works has brought convenience to people's lives, it has also brought many hidden dangers in the field of copyright, for example, the illegal copying, dissemination, and disclosure of digital art works. Sensitive information contained in digital art works can easily be stolen or illegally tampered with through the Internet, thereby damaging the legitimate rights and interests of copyright owners. Therefore, the emergence of digital watermarks provides a convenient and practical means of copyright protection. This paper proposes an image digital watermarking algorithm based on SVD and transforms in the contourlet domain. The contourlet transform is applied to the carrier image, the direction subbands with high energy values are divided into blocks, SVD is performed on each sub-block, and the watermark is added according to a parity quantization rule. The color-information-based image enhancement discussed in this article uses the characteristics of multichannel color images to meet application requirements. It mainly involves three aspects: color image enhancement technology that dyes grayscale images into color images, image enhancement technology that uses a colorization algorithm to improve the accuracy of the reconstructed chromaticity image during super-resolution reconstruction, and grayscale image color enhancement technology that preserves the characteristics of the color image when converting an artwork image to a grayscale image. This paper studies image processing technology based on the contourlet domain and applies it to the color enhancement of art works to promote the further development of art works.


Introduction
A digital image is usually a digital representation of a two-dimensional image, generally expressed in a binary system. Depending on whether the image has a fixed resolution, it can be of vector or raster type. Raster images have a finite set of numerical values called pixels [1]. A digital image comprises a fixed number of pixels arranged in rows and columns. A pixel is the smallest single element of the image and represents a fixed brightness value of a specific color. Normally, pixels are stored in the computer's memory as raster images or raster maps, that is, as small integers in a two-dimensional array [2]. After an image is converted into binary digital information, the amount of information is huge, which places heavy demands on the transmission source, transmission medium, transmission means, and storage medium; this has become a bottleneck problem in the field of digital communication [3]. The contourlet transform can perform multiscale analysis and directional analysis separately and simultaneously. Images are the most important information carrier in human social activities; according to statistical results, about 75% of the information a person obtains comes from visual images [4].
In this article, we conduct a detailed study of the contourlet transform: the Laplacian pyramid (LP) transform decomposes the image to capture point singularities at multiple scales, while the directional filter bank links the singular points at the same scale into coefficients along contours [5]. Image enhancement technology is an important field of image processing research. Based on subjective needs or different application requirements, some image information is highlighted by processing the existing image so as to obtain a more useful image for a specific application, or irrelevant and unnecessary information is weakened or deleted so that the original image becomes more suitable for human or machine analysis [6]. The enhanced image mainly pursues subjective and objective effects and does not necessarily need to be close to the original image. Traditional image enhancement is mostly performed on art works through operations such as image smoothing and denoising, contrast stretching, edge sharpening, histogram equalization, and homomorphic filtering. Compared with single-channel grayscale images, multichannel color images carry richer information [7]; processing this color information provides new research content and methods for traditional image enhancement that handles only grayscale images.
This article introduces the research background and importance of image enhancement based on color information [8].

Related Work
This paper introduces the detail and color enhancement algorithms used in image defogging and restoration, as well as the background and significance of this research. Based on whether a physical model is used in the image defogging algorithm, this paper discusses the research status of the two classes of algorithms [9]. Finally, the structure of this article is introduced. The basic theories of dehazing algorithms based on image enhancement and on physical models are outlined. First, we introduce the atmospheric model, dark channel prior theory, and dehazing algorithms. Then, we introduce an image quality evaluation method and give the theoretical basis for the algorithm comparison and evaluation indicators used in the subsequent experimental part [10]. An improved guided filtering algorithm based on the gradient domain is proposed to increase the transmittance in the atmospheric scattering model and improve image detail during dehazing. The effectiveness of the algorithm is demonstrated through experiments against traditional guided filtering and weighted guided filtering [11]. Then, a detail enhancement operator is proposed as the last step of the dehazing process, and the influence of different parameter settings on the extraction of detail information is analyzed [12]. An image defogging color restoration algorithm is introduced, which incorporates three color factors into pyramid fusion and applies the improved pyramid fusion technique to image defogging [13].
According to the different information of the fused images, parameter weight analysis and detail information are added: the more and the richer the image information, the greater the weight. Foggy images can thus be merged to obtain a fog-free image with richer color information and higher definition [14]. The literature summarizes dehazing detail and color enhancement algorithms, improves the optimized transmittance details of the guided filter, introduces color restoration of foggy images based on color factors and parameter fusion, and finally adds a detail enhancement operator to restore the foggy image [15]. The detail and color enhancement algorithms in the restoration process are analyzed from both subjective and objective aspects through multiple sets of experiments [16].

Contourlet Domain and Image Processing Technology

Contourlet Transformation Theory.
The contourlet base cover interval is a rectangular structure.
This rectangular structure changes its aspect ratio as the scale changes; it has directionality and anisotropy. The contourlet transform is therefore more conducive to the sparse representation of curves. Its basic idea is similar to the wavelet transform: the original image passes through a two-stage filter. In the first stage, the image signal is decomposed into multiple frequency subbands through a Laplacian pyramid (LP) filter structure. In the second stage, a directional filter bank (DFB) captures the directional information of the image and synthesizes singular points distributed in the same direction into contourlet coefficients, as shown in Figure 1.
The Laplacian pyramid is very similar to the Gaussian pyramid; the difference is that it saves the difference image between the blurred versions of adjacent levels. At each layer of the Laplacian pyramid, a downsampled low-pass image is produced, and the difference between the original image and the predicted image becomes the difference (bandpass) image. H is the analysis filter, G is the synthesis filter, M is the sampling matrix, and Mn is an integer matrix. The sampling matrix in the multidimensional filter represents the sampling; this is implemented in the downsampling of the low-pass signal, for example, by M. The dual-frame operator realizes optimal linear reconstruction, whose advantage is that, in the presence of noise, the optimal linear reconstruction achieves a significant performance improvement over the existing basis. The filter bank is an important analysis tool in digital signal processing: an array of bandpass filters that divides the input signal into multiple components, each carrying a single-frequency subband of the original signal. One application of a filter bank is the graphic equalizer, which attenuates or amplifies the decomposed components to varying degrees and recombines them into a modified version of the original signal. The decomposition performed by the filter bank is called analysis, and the number of filters determines the number of subband signals at the output. The reconstruction process is called synthesis: the subband signals produced by the filtering process are recombined to complete signal reconstruction.
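The analysis/synthesis round trip described above can be sketched in a few lines of Python. This is a minimal illustration that assumes a simple 2x2 block average in place of the filters H and G and nearest-neighbour prediction; it is not the filter pair used in the paper, but it shows why storing the difference images gives perfect reconstruction:

```python
import numpy as np

def blur_downsample(img):
    """Low-pass step: average each 2x2 block, keeping one sample per block."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def predict_up(img):
    """Predictor: nearest-neighbour expansion back to twice the size."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Analysis: each level stores the difference between the image and its prediction."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = blur_downsample(cur)
        pyr.append(cur - predict_up(small))   # bandpass difference image
        cur = small
    pyr.append(cur)                           # coarsest low-pass residual
    return pyr

def reconstruct(pyr):
    """Synthesis: add each difference image back onto the upsampled prediction."""
    cur = pyr[-1]
    for diff in reversed(pyr[:-1]):
        cur = predict_up(cur) + diff
    return cur
```

Because each level stores the exact prediction error, the synthesis stage recovers the input bit for bit regardless of how crude the low-pass filter is.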

Image Coding Processing Technology.
The subband energy value obtained in step 1 is calculated by formula (1); the larger the energy value, the richer the texture. Therefore, the sub-block with the largest energy value is selected for adding the watermark. Then, the quantization formula is used to quantize the first singular value S(1, 1) of S, where round is the rounding function and δ is the quantization step value. According to its relationship with the watermark information, the S component is reconstructed. To extract the watermark, each sub-block A_ij is decomposed by SVD, the first singular value S′(1, 1) of S′ is quantized with the same formula, and the watermark bit is obtained according to whether λ_ij is divisible by 2.
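A minimal sketch of the parity-quantization rule described above: the quotient of the first singular value by the step δ is rounded and its parity forced to match the watermark bit. The step size `delta` and the 8x8 block size are illustrative assumptions, not the paper's tuned values:

```python
import numpy as np

def embed_bit(block, bit, delta=40.0):
    """Embed one bit by forcing the parity of round(S[0] / delta) to match it."""
    U, S, Vt = np.linalg.svd(block, full_matrices=False)
    q = int(np.round(S[0] / delta))
    if q % 2 != bit:
        q += 1                       # nudge the quotient to the required parity
    S[0] = q * delta                 # quantized first singular value
    return U @ np.diag(S) @ Vt       # reconstruct the watermarked block

def extract_bit(block, delta=40.0):
    """Recover the bit from the parity of the quantized first singular value."""
    S = np.linalg.svd(block, compute_uv=False)
    return int(np.round(S[0] / delta)) % 2
```

Because the singular value is snapped to the centre of a δ-wide cell, perturbations smaller than δ/2 leave the extracted parity unchanged, which is the source of the robustness/invisibility trade-off discussed later.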

Color Detail Enhancement Algorithm.
In the process of image defogging and restoration, detail information is easily lost, especially in edge areas, so the preservation of edge information has a great influence on the defogging effect. The traditional guided filter has a weak ability to retain edge information, so weights are introduced to improve it. To strengthen this ability, three aspects need to be considered. First, the improved guided filter must be sensitive to edge judgment, that is, it must accurately find the edges of the image. Second, in the detail enhancement step, the edge part is given different weighted enhancement coefficients according to whether it is a content edge or a color edge, to avoid losing information during defogging. Third, in the non-edge part, noise is smoothed, since noise elimination affects the defogging restoration. To achieve the first part, the edge area of the image must be obtained. In digital image processing, the edges of an image can be obtained from its gradient. The gradient can be represented by a vector reflecting the rate and direction of change at each pixel and is obtained by differentiating the image. Analyzing the numerical information of the image shows that if a point lies in an edge area, its gradient value is large; otherwise, the gradient value is small. It is therefore necessary to define a threshold t to distinguish edge areas from non-edge areas: when G < t, pixel i is considered to be in a non-edge area, and when G ≥ t, pixel i is considered to be on an edge. The threshold is defined in terms of DR, the dynamic range of the absolute value of the gradient over all pixels in the guide image, multiplied by the experimentally determined factor 0.15; setting the threshold to this multiple of DR brings the judgment closer to the real image.
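The threshold rule above can be illustrated as follows. `np.gradient` stands in for the image derivative, and the 0.15 factor follows the text; the helper name is ours:

```python
import numpy as np

def edge_mask(guide, factor=0.15):
    """Classify pixels as edge / non-edge from the gradient magnitude of the guide image."""
    gy, gx = np.gradient(guide.astype(float))
    g = np.hypot(gx, gy)              # gradient magnitude G at each pixel
    dr = g.max() - g.min()            # DR: dynamic range of the magnitudes
    t = factor * dr                   # threshold t = 0.15 * DR
    return g >= t                     # True where pixel i lies on an edge
```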
Then, the edge area is judged by the formula above. Second, having determined the edge area of the image, different weighted enhancement coefficients are applied according to the region: when a pixel is located in an edge area, the edge information is amplified for enhancement; when it is not, the image is smoothed in order to suppress noise. The weight is brought into the guided filter, the filter formula is rewritten accordingly, and the corresponding a_k and b_k are recalculated.

Mobile Information Systems
The final output image is obtained accordingly. Through experiments, it can be seen from a subjective point of view that the image processed by traditional guided filtering is too smooth and loses image detail; weighted guided filtering improves on this, but its colors are too gaudy to look natural. The algorithm proposed in this paper preserves image details while smoothing the image and gives a good visual impression: for example, the details of the central stamen are more consistent with the original photo, and the petal edges are clearer and more distinct. This article considers not only subjective evaluation but also objective evaluation; Table 1 shows the objective evaluation values.

Color Restoration Algorithm.
The original image G is set as the bottom layer of the pyramid. A convolution with the low-pass filter w(m, n) is performed, and the result is downsampled to obtain the next layer of the pyramid. Each level is obtained by convolving the previous layer with the low-pass filter and then downsampling by discarding every other row and column. The downsampled image, after interpolation and enlargement, is subtracted from the image of the previous level to obtain a pyramid of detail information. The fusion is designed to process two or more images: all images to be fused are first decomposed into pyramids, after which their information is superimposed, and finally the enhanced detail layer is added to the fused pyramid, giving the enhanced image G_j fused with the color information of the different images. In the process of watermark embedding, the choice of quantization step size is very important: the larger the quantization step size, the better the robustness of the algorithm, but the worse its invisibility. In the adaptive quantization step watermarking algorithm, the initial quantization step size is fixed; therefore, choosing an appropriate initial quantization step size is the key to balancing the robustness and invisibility of the embedding algorithm. Through multiple experiments on the three images Lena, Baboon, and Peppers, it is found that the robustness of the algorithm is low when the initial quantization step size is 25, while at 55 a slight change is visible in the carrier image and the invisibility of the algorithm is already poor. This article compares the robustness and invisibility effects and selects the initial quantization step size in the interval 25-55 to determine the final quantization step size.
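The pyramid construction and fusion described at the start of this subsection can be sketched as follows. The 2x2 block average stands in for convolution with w(m, n), and picking the larger-magnitude coefficient at each detail level is one simple way to realize "the richer the detail, the greater the weight"; both choices are illustrative assumptions:

```python
import numpy as np

def reduce_level(img):
    """Convolve-and-downsample step (a 2x2 average stands in for w(m, n))."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def expand_level(img):
    """Interpolate a level back up by pixel replication."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def detail_pyramid(img, levels):
    """Pyramid of detail (difference) images plus the final low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = reduce_level(cur)
        pyr.append(cur - expand_level(small))   # detail layer
        cur = small
    pyr.append(cur)                             # low-pass residual
    return pyr

def pyramid_fuse(img_a, img_b, levels=2):
    """Fuse level by level, keeping the stronger detail coefficient at each pixel."""
    pa = detail_pyramid(img_a, levels)
    pb = detail_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))       # average the coarse residuals
    cur = fused[-1]
    for d in reversed(fused[:-1]):              # collapse the fused pyramid
        cur = expand_level(cur) + d
    return cur
```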
Figure 2 shows the relationship between the NC values of the watermarked image under JPEG compression for different initial quantization step sizes.
As shown in Figure 3, the NC value increases as the initial quantization step size increases. From the correspondence between the initial quantization step size and the PSNR, the larger the initial quantization step size, the smaller the corresponding PSNR value, so watermark invisibility decreases as the quantization step increases within a certain interval. However, the NC value changes significantly in the interval 40-45, so values between 40 and 45 are used as candidates for the initial quantization step in the attack experiments.

Chroma Image Super-Resolution.
The reason chrominance images are processed simply is that the human eye's tolerance for color distortion is higher than its tolerance for distortion of brightness information. When the application requirements are not high, this processing basically meets them. However, when the image structure is complex, the color information is rich, or the reconstruction factor is large, the chrominance image obtained by interpolation is prone to distortions such as edge blur and jaggedness, which degrade the overall effect of color image super-resolution reconstruction. Because the structural properties of the luminance and chrominance components of an image differ, many super-resolution reconstruction techniques suitable for luminance images cannot be used directly on chrominance images. Therefore, a technique suited to reconstructing high-resolution chrominance images must be found. A colorization algorithm based on color diffusion can be adapted to this demand: the high-resolution brightness image obtained by grayscale super-resolution reconstruction serves as the grayscale image to be colorized, the known color information is regarded as points already dyed with their original colors in the high-resolution image to be obtained, and the colors of the remaining points are obtained through the color-diffusion colorization algorithm, completing the super-resolution reconstruction of the chromaticity image. Image super-resolution reconstruction technology simulates the image degradation process of the imaging system and reconstructs high-quality images; the degradation process can be expressed in matrix form. The IBP algorithm is one such image super-resolution reconstruction algorithm.
Its core idea is that the reconstructed ideal image, after undergoing the same degradation process as the original degradation model, should produce the same image as the observed low-resolution image. Therefore, the reconstruction error e(I_H) can be minimized through an iterative process. There are many colorization algorithms based on color diffusion. This paper selects one that can effectively use the edge information of the image to mitigate the edge distortion that interpolation cannot resolve, and that also has a fast calculation speed. The starting point of the algorithm is that, in the YUV color space, if the luminance Y of adjacent pixels is similar, their chromaticity U should also be similar. If the chromaticity value of the current point r is U(r), it can be expressed as the weighted sum of the chromaticity values of the pixels in its neighborhood set N(r), and the error energy is as follows:

where w_rs is the weight of each neighborhood pixel. The total error energy J(U) is calculated over all the points with unknown color and minimized. According to the starting point of the algorithm, the weight w_rs depends on the brightness difference between the current point and its neighbors: the smaller the brightness difference from the current point r, the greater the weight of the neighborhood pixel, and the sum of all weights is 1. For a more intuitive comparison, Figure 4 shows a partial enlargement of the result image of each algorithm for subjective comparison, and the PSNR and SSIM values between these local blocks and the corresponding chrominance components of the original high-resolution color image are listed for objective comparison. The local block enlargement of the "butterfly" image reconstruction result is shown in Figure 4, and a partial block enlargement of another image reconstruction result is shown in Figure 5.
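A toy illustration of this neighborhood weighting: weights grow as the luminance difference shrinks and are normalized to sum to 1, and the chromaticity of the current point is their weighted sum. The Gaussian affinity and the value of sigma are demonstration assumptions standing in for the paper's exact w_rs formula:

```python
import numpy as np

def neighbour_weights(y_center, y_neigh, sigma=10.0):
    """w_rs from luminance similarity: closer brightness -> larger weight; sums to 1."""
    w = np.exp(-((y_neigh - y_center) ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def diffuse_chroma(y_center, y_neigh, u_neigh, sigma=10.0):
    """U(r) as the weighted sum of the chromaticities of the neighborhood N(r)."""
    w = neighbour_weights(y_center, y_neigh, sigma)
    return float(np.dot(w, u_neigh))
```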
Comparing the numerical results listed in Tables 2 and 3 shows that the proposed IBPCSR algorithm obtains better subjective effects and higher PSNR and SSIM values, which demonstrates that targeted processing of chrominance images is necessary: it effectively reduces the color distortion of the reconstructed image and improves the display effect of the resulting high-resolution color image. To further improve the efficiency and performance of the algorithm, this paper builds on the chrominance super-resolution reconstruction and proposes using the high-resolution luminance image as a guide image, since guided filtering combines high computational efficiency with good edge preservation, to improve the quality of the reconstructed high-resolution chrominance image. The details are shown in Table 2.
The SSIM values for the image component comparison are shown in Table 3.
Guided image filtering (GF) is an edge-preserving filtering method. The process involves three images: the input image p, the filtered output image q, and the guidance image G. The guided filter assumes a locally linear relationship between G and q: in a window W_r centered on pixel r, the output is a linear function of the guide, where a_r and b_r are constants in the square window W_r determined by the input and guide images within that window. This local linear assumption ensures that the output q_s has features similar to the local G_s of the guiding image, and it also means that the output image has the same edge information as the guiding image. Because the luminance image has edges similar to, but more prominent than, those of the chrominance image, the reconstructed high-resolution luminance image YH is used as the guide image to improve the edge quality of the chrominance image, and the initial high-resolution chrominance image U0H is used as the input image for guided filtering. The parameters a_r and b_r of the guided filter can be calculated directly after derivation. Because overlapping windows are used in practice to ensure data smoothness and accuracy, a pixel s is contained in multiple windows and the q_s computed from different windows differs, so the formula is updated by averaging over the windows. This paper presents the experimental results of the three chrominance image super-resolution reconstruction algorithms (IBPCSR, GFCSR, and POCSCSR), the bicubic interpolation algorithm, the Liu algorithm, and the Huang algorithm in 4x super-resolution reconstruction experiments. The low-resolution images are obtained by blurring and downsampling the original high-resolution color images; the blurring filter is a Gaussian low-pass filter with a window size of 5 × 5 and a variance of 2.
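The guided filter just described (q = aG + b per window, with a and b then averaged over the overlapping windows containing each pixel) can be sketched as follows. The naive box filter is for clarity, not efficiency, and the radius r and regularizer eps are illustrative defaults:

```python
import numpy as np

def box_mean(img, r):
    """Naive local mean over a (2r+1)x(2r+1) window, clipped at the border."""
    h, w = img.shape
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1].mean()
    return out

def guided_filter(guide, src, r=2, eps=1e-4):
    """q = a*G + b per window; a, b from window statistics, then averaged."""
    mean_g = box_mean(guide, r)
    mean_p = box_mean(src, r)
    var_g = box_mean(guide * guide, r) - mean_g * mean_g
    cov_gp = box_mean(guide * src, r) - mean_g * mean_p
    a = cov_gp / (var_g + eps)        # a_r from the window statistics
    b = mean_p - a * mean_g           # b_r = mean(p) - a_r * mean(G)
    # a pixel lies in many overlapping windows, so average a and b over them
    return box_mean(a, r) * guide + box_mean(b, r)
```

With the filter written this way, the edge-preserving behavior is easy to see: wherever the guide has strong structure, a is close to 1 and the guide's edges pass through to the output.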
The grayscale image super-resolution reconstruction algorithm based on sparse representation is used to obtain the high-resolution luminance image YH, which serves as the luminance component of the results of all algorithms to be compared.
The results of all algorithms are compared with the original high-resolution chrominance image, and the PSNR, SSIM, and CIEDE2000 average color difference △E are calculated and recorded in Tables 4-6. The SSIM values of the algorithm results compared with the U and V components of the original high-resolution chrominance image are shown in Table 5.
From the numerical comparison in Tables 4-6, the three methods proposed in this paper obtain better results than bicubic interpolation, Liu's algorithm, and Huang's algorithm, especially those combined with guided filtering: the performance of the latter two algorithms (GFCSR and POCSCSR) is significantly better than that of the other algorithms. The results of these two algorithms are very close under the PSNR, SSIM, and △E measures, and their relative merits cannot be distinguished by these three metrics.
Color image super-resolution reconstruction is a research hotspot in the field of image processing, and the performance of the various algorithms is constantly improving. However, apart from learning-based algorithms that treat the three channels of a color image as a whole, color image super-resolution processing usually converts the image into a color space in which luminance and chrominance information are separated, studies the luminance image, and merely interpolates the chrominance image to obtain an enlarged version. This simple treatment of chrominance information is likely to affect the final reconstruction. Therefore, it is necessary to study super-resolution reconstruction algorithms that match the characteristics of chrominance images. This paper analyzes the relationship between the color image super-resolution reconstruction framework and the color-diffusion colorization model, the correspondences and differences between image luminance and chrominance information, and algorithm efficiency, and progressively proposes three chrominance super-resolution algorithms.
First, exploiting the fact that a colorization algorithm based on color diffusion can improve the edge quality of chrominance super-resolution reconstruction, the IBPCSR algorithm based on optimized colorization and iterative backprojection is proposed. By incorporating the colorization algorithm into the iterative backprojection in place of the usual interpolation magnification, this algorithm usually obtains better results than bicubic interpolation, than the Liu algorithm (which also uses colorization for chrominance super-resolution reconstruction), and than the Huang algorithm (which obtains the overall high-resolution color image based on learning plus iterative backprojection), but its running time is too long. Therefore, since guided filtering is computationally efficient at preserving image edges and the luminance image contains more edge and detail information than the chrominance image, this paper further proposes using the reconstructed luminance image as the guide image: the GFCSR algorithm replaces the optimized colorization algorithm in the iterative backprojection of IBPCSR with guided filtering. This algorithm improves the reconstruction effect and reduces the running time. However, because a matrix equation must still be optimized for color processing, the running time of the GFCSR algorithm remains much higher than that of the commonly used interpolation algorithms.
To further improve efficiency, note that the kernel function of the guided filter with the luminance image as the guide should satisfy the local constraints on the chromaticity values proposed by the optimized colorization algorithm, while the backprojection satisfies the global constraints. Accordingly, within the projection-onto-convex-sets framework, the POCSCSR algorithm is proposed: it analyzes the local neighborhood relationship between the guided filter kernel and the colorization algorithm and uses the computationally efficient guided filter to complete the same optimized colorization job as the GFCSR algorithm. Measured by the objective values (PSNR, SSIM, △E), the results of the POCSCSR algorithm are very close to those of the GFCSR algorithm, but its running time is greatly shortened and is comparable to that of the interpolation and iterative backprojection algorithms.
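The back-projection consistency step shared by these three algorithms can be sketched in isolation: degrade the current high-resolution estimate, compare with the observed low-resolution image, and project the error back. The 2x block-average degradation model here is an assumption for illustration, not the paper's degradation matrix:

```python
import numpy as np

def degrade(img):
    """Assumed degradation: 2x2 block average followed by 2x downsampling."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def back_project(img):
    """Spread a low-resolution residual back over the high-resolution grid."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def ibp(low, iters=10, step=1.0):
    """Iteratively reduce e(I_H) = low - degrade(I_H) by back-projecting it."""
    high = back_project(low)                   # initial high-resolution estimate
    for _ in range(iters):
        err = low - degrade(high)              # reconstruction error e(I_H)
        high = high + step * back_project(err)
    return high
```

In the full algorithms, the colorization step (IBPCSR) or guided filtering step (GFCSR, POCSCSR) is applied between these consistency updates.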

Conclusion
In this paper, the similarities and differences between the wavelet transform and the contourlet transform and their respective properties are studied, along with the application and implementation of the contourlet transform in image watermarking, image fusion, and image compression. Based on the principle of the contourlet, the structure and principle of the pyramid and directional filters are studied, and a watermark encoder robust to geometric transformation and image compression is proposed. Based on contourlet transform theory and principal component analysis, the contourlet coefficients covering the directional subbands of the image are selected as the embedding positions: the principal components of the contourlet coefficients are used to embed the watermark, and a noise recognition function adaptively adjusts the watermark embedding strength. The experimental results show that, after various possible distortions, the watermark can still be detected with high accuracy.
This paper also studies three aspects of art image enhancement based on color information: colorization of grayscale images, which enhances their display effect by dyeing them with color; a color image super-resolution reconstruction framework in which colorization replaces the usual interpolation and enlargement of the chromaticity image, improving the quality of the reconstructed chromaticity image; and, when a color image is displayed or studied in grayscale, a grayscale conversion that enhances contrast and perceptual consistency.

Data Availability
The data used to support the findings of this study can be obtained from the author upon request.

Conflicts of Interest
The author declares no conflicts of interest.
Acknowledgments

This work was supported by the "U-G-S" fund of art normal students from the perspective of "normal professional