Research on Visual Image Texture Rendering for Artistic Aided Design

The rendering effect of known visual image textures is poor, and the output image is not always clear. To solve this problem, this paper proposes a visual image rendering method based on a scene visual understanding algorithm. In this approach, color segmentation of the known visual scene is carried out according to a predefined threshold, and the segmented image is processed morphologically. Extraction rules are then formulated to screen the candidate regions. The color image is fused and filtered in the neighborhood, the pixels of the image are extracted, and 2D texture recognition is realized by multilevel fusion and visual feature reconstruction. Compact sampling is used to extract more target features, feature points are matched, the coordinate systems of the known image information are integrated into a unified coordinate system, and design images are generated to complete the art-aided design. Simulation results show that the proposed method extracts the information of known images more accurately than existing methods, which helps to produce clearly visible output images and improves the overall design effect.


Introduction
Software design and hand-drawn design are both commonly used in art design. With the rapid development of multimedia technology, art design is increasingly combined with computer technology, resulting in various auxiliary design tools and software design functions [1]. Software design tools can improve the quality of work while reducing the cost of manual design. At the same time, designing samples using 2D and 3D software design models enriches the creativity of visual effects [2], better represents the designer's design concept and innovative ideas, changes the original form of artistic design, improves the working mode of artistic design, and brings profound changes to traditional artistic design.
In computer science, scene visual understanding is one of the most widely used technologies in the field of art design. Visual comprehension of a scene allows a computer to replace the human eye and brain in perceiving, recognizing, and understanding 3D scenes and objects in the real world [3,4]. It is used to analyze the complex distribution of objects in a scene image and, combined with natural language processing, to describe the obtained information accurately and reasonably. The main objective of visual comprehension is to allow designers to extract the scene information. Applying a visual comprehension algorithm to the visual scene of artistic aided design can help the designer when the output image is unclear because of imprecise information.
With the development of image processing technology, 2D texture recognition of color images is carried out using image processing techniques from computer vision. Moreover, 2D texture feature extraction and analysis methods for color images are combined to analyze the texture features of color images, improve the image quality and detection ability of color images, study 2D texture recognition methods for color images, and improve the accurate analysis and 3D feature resolution ability of color multitexture images. In [5], the authors used a combination of macro and local aspects to obtain multiscale data information for building an image information model. The authors in [6] put forward a method of image segmentation based on the induction and application of multifeature information in remote sensing images. This method combines spectral, texture, and shape features. In [7], the authors put forward a method of remote sensing image segmentation that combines spectral and texture features, which can improve the segmentation efficiency and accuracy for different objects.
Edge sharpening feature decomposition, scale decomposition, and multimode feature reconstruction methods are used to realize 2D texture recognition of color images [8]. However, traditional methods for 2D texture recognition face numerous challenges, such as low precision and poor self-adaptive ability. Hence, in this study, 2D texture rendering based on computer vision is proposed to detect the salient areas of the 2D texture image. The rest of this paper is organized as follows. In Section 2, graphics rendering, the building block of 2D texture rendering, is discussed. In Section 3, color multitexture image acquisition and regional fusion filtering are discussed.
The experimental results and analysis are provided in Section 4. Finally, the paper is concluded and future research directions are provided in Section 5.

Graphics Rendering
The rendering pipeline is a conceptual model in computer graphics that describes the steps a graphics system performs to render a 3D scene onto a 2D screen [9]. We first discuss the graphical rendering process in Section 2.1, followed by vertex processing and 3D observation in Section 2.2.

Graphical Rendering Process.
Commonly referred to as a rendering pipeline, this is a series of data processing stages that turn an application's data into a final rendered image [10]. The rendering process is shown in Figure 1. First, the vertices and attributes required for the geometry are set on the client side of the application; then, the data enter a series of shader stages for processing. The output of one stage is used as the input of the next, resulting in an image that can be rendered to a 2D screen. The rendering pipeline can be divided into several main stages, namely, vertex processing, rasterization, fragment processing, and the output merging operation.
During the vertex processing phase, the vertices and primitives stored in the buffer are processed with operations such as coordinate transformations. In the rasterization phase, the primitives that survive clipping [11,12] are passed to the rasterizer, which converts each primitive into a set of fragments. Here, a fragment is defined as the set of data for a candidate pixel that can either be placed in the frame buffer or be culled, and the color buffer is defined as the memory space that stores the pixels displayed on the screen. During fragment processing, fragment tests are carried out, and the color value of each fragment is then determined by the operations of the fragment shader. During the output merging phase, fragments and the pixels in the color buffer are compared or merged, and the color values of the pixels are updated [13].
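To make the data flow concrete, the following Python sketch walks a single triangle through the four stages. It is a toy illustration under assumed names (vertex_stage, rasterize, fragment_stage), not the paper's implementation or any real graphics API.

```python
import numpy as np

# A minimal software sketch of the four pipeline stages described above
# (vertex processing, rasterization, fragment processing, output merging).

W, H = 64, 64
color_buffer = np.zeros((H, W, 3))              # pixels displayed on screen

def vertex_stage(v, mvp):
    """Vertex processing: object space -> clip space -> screen coordinates."""
    clip = mvp @ np.append(v, 1.0)              # homogeneous clip coordinates
    ndc = clip[:3] / clip[3]                    # perspective divide
    x = (ndc[0] + 1.0) * 0.5 * (W - 1)          # viewport transform
    y = (1.0 - (ndc[1] + 1.0) * 0.5) * (H - 1)
    return np.array([x, y])

def rasterize(p0, p1, p2):
    """Rasterization: convert one triangle into a set of fragments."""
    def edge(a, b, c):                          # signed-area (edge) test
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    area = edge(p0, p1, p2)
    for y in range(H):
        for x in range(W):
            p = (x + 0.5, y + 0.5)              # pixel center
            w0, w1, w2 = edge(p1, p2, p), edge(p2, p0, p), edge(p0, p1, p)
            if area != 0 and w0 * area >= 0 and w1 * area >= 0 and w2 * area >= 0:
                yield x, y                      # fragment covering this pixel

def fragment_stage(x, y):
    """Fragment processing: a trivial 'shader' producing a color."""
    return np.array([x / W, y / H, 0.5])

# Draw one triangle through the whole pipeline.
mvp = np.eye(4)                                 # identity model-view-projection
verts = [np.array([-0.5, -0.5, 0.0]),
         np.array([0.5, -0.5, 0.0]),
         np.array([0.0, 0.5, 0.0])]
screen = [vertex_stage(v, mvp) for v in verts]
for x, y in rasterize(*screen):
    color_buffer[y, x] = fragment_stage(x, y)   # output merge: update pixel
```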

Vertex Processing and 3D Observation.
Vertex processing and 3D observation perform various 3D geometric transformation operations on each input vertex stored in the buffer. The vertex processing stage is programmable. Based on the vertex processing transformation operations, 3D objects can be transformed from object space to clipping space. The transformation pipeline flow is shown in Figure 2.
Each object has a local coordinate system; equivalently, each object is defined in its own object space. Multiple objects can be integrated into a single world space, provided that the coordinate transformation takes the form

$$x' = a_{xx}x + a_{xy}y + a_{xz}z + b_x, \quad y' = a_{yx}x + a_{yy}y + a_{yz}z + b_y, \quad z' = a_{zx}x + a_{zy}y + a_{zz}z + b_z. \tag{1}$$

The coordinates $x'$, $y'$, and $z'$ are derived from linear transformations of the original coordinates $x$, $y$, and $z$; such a transformation is called an affine transformation. Translation, rotation, scaling, reflection, and shear are special cases of affine transformations, and any affine transformation can always be expressed as a combination of these five transformations [14].
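As a concrete illustration of equation (1), the following Python sketch applies the general affine form $x' = Ax + b$ and two of its special cases; the numeric values are assumptions chosen for illustration.

```python
import numpy as np

# Sketch of the affine transformation in equation (1): x' = A x + b, where A
# holds the coefficients a_xx ... a_zz and b holds the offsets b_x, b_y, b_z.

def affine(A, b, p):
    return A @ p + b

p = np.array([1.0, 2.0, 3.0])

# Special cases of equation (1):
translated = affine(np.eye(3), np.array([4.0, 0.0, 0.0]), p)   # A = I: pure translation
scaled = affine(np.diag([2.0, 2.0, 2.0]), np.zeros(3), p)      # b = 0: pure scaling
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
rotated = affine(Rz, np.zeros(3), p)                           # z-axis rotation
```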
In the 3D homogeneous coordinate representation, the 3D translation of a coordinate position can be expressed in matrix form as

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & b_x \\ 0 & 1 & 0 & b_y \\ 0 & 0 & 1 & b_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}. \tag{2}$$

In the 3D scene environment, a model object can be transformed by translating its vertex coordinates. The three-dimensional rotation operation requires defining the corresponding rotation axis. First, the three-dimensional z-axis rotation is given by equation (3); its homogeneous coordinate form is then given by equation (4):

$$x' = x\cos\theta - y\sin\theta, \quad y' = x\sin\theta + y\cos\theta, \quad z' = z, \tag{3}$$

$$\begin{bmatrix} x' \\ y' \\ z' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 & 0 \\ \sin\theta & \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ z \\ 1 \end{bmatrix}, \tag{4}$$

where θ is the angle of rotation. The equations for rotating around the other two axes can be obtained by cyclically replacing the coordinate parameters x, y, and z in equation (3):

$$x \rightarrow y \rightarrow z \rightarrow x. \tag{5}$$

Using equation (5), the transformation equations for rotation around the x-axis and y-axis are obtained as

$$\begin{aligned} x\text{-axis:}\quad & y' = y\cos\theta - z\sin\theta, \quad z' = y\sin\theta + z\cos\theta, \quad x' = x, \\ y\text{-axis:}\quad & z' = z\cos\theta - x\sin\theta, \quad x' = z\sin\theta + x\cos\theta, \quad y' = y. \end{aligned} \tag{6}$$

Three-dimensional scaling can be represented by the matrix

$$\begin{bmatrix} t_x & 0 & 0 & 0 \\ 0 & t_y & 0 & 0 \\ 0 & 0 & t_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}, \tag{7}$$

where the scaling parameters $t_x$, $t_y$, and $t_z$ are arbitrary prespecified positive values. The scaling transformation relative to the origin is expressed as

$$x' = t_x x, \quad y' = t_y y, \quad z' = t_z z. \tag{8}$$

When an object is modeled, it has its own local coordinate system and belongs to its own object space. In the rendering pipeline, the first task is to integrate the model objects from their independent object spaces into the world space, i.e., the world coordinate system. The world space can be regarded as the coordinate system of the entire virtual scene. The integration process applies a model conversion: a model conversion matrix $M_{mod}$ is obtained by multiplying the matrices of the above series of affine transformations, and the position coordinates of a model object in object space are multiplied by $M_{mod}$ to obtain its position coordinates in world space.
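The composition of the model matrix can be sketched in Python as follows. The homogeneous matrices match equations (2), (4), and (7); the particular application order (scale, then rotate, then translate) is an assumption for illustration, since any product of these matrices is a valid $M_{mod}$.

```python
import numpy as np

# Homogeneous translation, z-axis rotation, and scaling matrices matching
# equations (2), (4), and (7), composed into a model matrix M_mod.

def translate(bx, by, bz):
    M = np.eye(4)
    M[:3, 3] = [bx, by, bz]
    return M

def rotate_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]], dtype=float)

def scale(tx, ty, tz):
    return np.diag([tx, ty, tz, 1.0])

# Scale first, then rotate, then translate (matrices apply right to left).
M_mod = translate(1.0, 2.0, 0.0) @ rotate_z(np.pi / 4) @ scale(2.0, 2.0, 2.0)

v_object = np.array([1.0, 0.0, 0.0, 1.0])   # a vertex in object space
v_world = M_mod @ v_object                   # the same vertex in world space
```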

Color Multitexture Image Acquisition and Regional Fusion Filtering
In this section, we first discuss color multitexture image acquisition for rendering, followed by the plane projection of the area to be mapped. Finally, we discuss the procedure for calculating texture coordinates.

Color Multitexture Image Acquisition.
To realize two-dimensional texture recognition of color images based on computer vision, we first build a color multitexture image acquisition model [15], use the local window feature detection method to extract the contour feature points Q and P of the color multitexture image, and combine their correlation. According to the fusion rule [16], the maximum-value pixel A of the two-dimensional edge pixel feature components of the color multitexture image is extracted. Using the local information entropy fusion model for color multitexture image collection, the contour points of the color multitexture image are extracted, local information entropy fusion processing is performed, and the active contour model of the color multitexture image is obtained. The regional features of the active contour are matched with the edge pixel features, the local information entropy rect(t) is extracted, and the output pixel feature quantity collected from the color multitexture image is obtained, where the pixel feature quantity satisfies |t| ≤ 1, K denotes the number of pixels, and j represents the singular points of the boundary of the color multitexture image. Assume that the associated distribution of position information of the color multitexture image has length $L = x_{max} - x_{min}$, width $W = y_{max} - y_{min}$, and height $H = z_{max} - z_{min}$. The one-dimensional histogram distribution of the color multitexture image is obtained, the number of superpixels K is determined, and the scattering model is combined to obtain the 2D texture features of the color multitexture image. The biorthogonal spline wavelet transform is used to obtain the high-frequency texture components, which, according to the USV (singular value) decomposition result, realizes the feature decomposition and 2D texture recognition of the color multitexture image.
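A minimal sketch of three of the ingredients above, assuming a single-channel stand-in image and the biorthogonal spline family from the PyWavelets library; the specific wavelet ('bior2.2') and histogram bin count are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
import pywt  # PyWavelets

# Minimal sketch: 1D intensity histogram, biorthogonal spline wavelet
# high-frequency components, and a singular value (USV) decomposition.
image = np.random.rand(128, 128)        # stand-in for one texture channel

# One-dimensional histogram distribution (e.g., to guide superpixel count K).
hist, bin_edges = np.histogram(image, bins=64, range=(0.0, 1.0))

# Single-level 2D biorthogonal wavelet transform; cH, cV, cD carry the
# horizontal, vertical, and diagonal high-frequency texture components.
cA, (cH, cV, cD) = pywt.dwt2(image, 'bior2.2')

# USV (singular value) decomposition used for feature decomposition.
U, S, Vt = np.linalg.svd(image, full_matrices=False)
```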

Plane Projection of the Area to Be Mapped.
Given that the coordinate of any point in the area to be mapped is $Q_i(x_i, y_i, z_i)$, the coordinate $Q_i'(x_i', y_i', z_i')$ of its projection point on the reference plane T needs to be obtained. Writing the reference plane as $Ax + By + Cz + D = 0$ with normal vector $n = (A, B, C)$, the projection point lies on the line through $Q_i$ along $n$, which gives the simultaneous equations

$$x_i' = x_i + kA, \quad y_i' = y_i + kB, \quad z_i' = z_i + kC, \quad Ax_i' + By_i' + Cz_i' + D = 0.$$

The value of k can then be obtained as

$$k = -\frac{Ax_i + By_i + Cz_i + D}{A^2 + B^2 + C^2}.$$

Thus, the projected coordinate $Q_i'(x_i', y_i', z_i')$ can be obtained.
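A short Python sketch of this projection, assuming the plane is given in the implicit form $Ax + By + Cz + D = 0$; the sample point and plane are arbitrary illustrative values.

```python
import numpy as np

# Project a point Q_i onto the reference plane A x + B y + C z + D = 0
# along the plane normal, using the value of k derived above.

def project_to_plane(q, plane):
    A, B, C, D = plane
    n = np.array([A, B, C], dtype=float)
    k = -(n @ q + D) / (n @ n)
    return q + k * n                     # Q_i' = Q_i + k * n lies on the plane

q = np.array([1.0, 2.0, 3.0])
plane = (0.0, 0.0, 1.0, -1.0)            # the plane z = 1
q_proj = project_to_plane(q, plane)      # -> array([1., 2., 1.])
```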

Texture Coordinate Calculation.
After projecting the vertices of the area to be mapped onto the reference plane T, a coplanar three-dimensional point set is obtained. From this coplanar point set, a two-dimensional S-T coordinate system can be established, as shown in Figure 3. The origin of the S-T coordinate system is the first point P of the three vertices in the reference plane. Take the side $PM = M - P$ as the horizontal axis; the length of the horizontal axis is $|PM|$, and the length of the longitudinal axis is

$$|PN'| = |PN|\cos\alpha,$$

where α is the angle between the vector PN and the ordinate. Given the projection coordinate $Q_i'(x_i', y_i', z_i')$ of a point $Q_i$, the vector $PQ_i'$ can be calculated. Assuming that the angle between the vector $PQ_i'$ and the positive direction of the transverse texture coordinate axis S is θ, the coordinates of $Q_i'$ in the two-dimensional coordinate system can be obtained as

$$S_{Q_i} = |PQ_i'|\cos\theta, \quad T_{Q_i} = |PQ_i'||\sin\theta|.$$

Since the computed angle may fall outside the range [0, π], the sine value can be negative, so its absolute value is taken to obtain the correct coordinates.
Using the above calculation, every projection point in the plane yields a coordinate point in the S-T coordinate system. The texture coordinate system u-v is located in the range [0, 1]; hence, it is necessary to normalize the coordinate points. If $S_{max}$ and $T_{max}$ are the maximum values of $S_{Q_i}$ and $T_{Q_i}$, respectively, the final texture coordinates of the model vertices obtained by proportional transformation are

$$u = \frac{S_{Q_i}}{S_{max}}, \quad v = \frac{T_{Q_i}}{T_{max}}.$$

The texture coordinate system schematic is shown in Figure 4.
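The calculation can be sketched in Python as follows; this is a toy illustration in which the helper name st_coordinates and the sample points are assumptions.

```python
import numpy as np

# Compute the S-T coordinates of a projected point Q', then normalize to u-v.
# Follows the formulas above: S = |PQ'| cos(theta), T = |PQ'| |sin(theta)|.

def st_coordinates(P, M, q_proj):
    pq = q_proj - P
    s_axis = (M - P) / np.linalg.norm(M - P)     # unit vector along PM
    cos_t = np.clip(pq @ s_axis / np.linalg.norm(pq), -1.0, 1.0)
    theta = np.arccos(cos_t)
    S = np.linalg.norm(pq) * np.cos(theta)
    T = np.linalg.norm(pq) * abs(np.sin(theta))  # absolute sine, as noted
    return S, T

# Normalize all points to the [0, 1] texture range: u = S/S_max, v = T/T_max.
pts = [np.array([1.0, 1.0, 0.0]), np.array([2.0, 0.5, 0.0])]
P, M = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
st = [st_coordinates(P, M, q) for q in pts]
s_max = max(s for s, _ in st)
t_max = max(t for _, t in st)
uv = [(s / s_max, t / t_max) for s, t in st]
```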
As shown in Figure 4, the user changes the mapping position by changing the point P, which controls the origin of the texture mapping and corresponds to the coordinate origin in texture space; changes the mapping direction by changing the points M and N, which control the direction of the texture mapping (the vectors PM and PN correspond to the u-axis and v-axis in texture space); and changes the mapping size by changing the points M and N, which also control the size of the texture mapping.

Comparison of Changes to the Parameters.
Three parameters, P, M, and N, are used to control the effect of local texture mapping when determining the vertex coordinates. Here, P is the origin of the texture mapping and corresponds to the origin of the texture image in the 2D coordinate system; it controls the position of the local texture mapping. The vector $\vec{PM}$ (the S-axis) from point P to point M controls the direction of the u-axis of the texture image, which corresponds to the x-axis in the two-dimensional coordinate system. A change in the direction of $\vec{PM}$ also changes the direction of the S-axis, which rotates the texture map. By changing the magnitude of $\vec{PM}$, stretching and shrinking of the texture can be achieved. Similarly, the vector $\vec{PN}$ (the T-axis) from point P to point N controls the direction of the v-axis of the texture image, and stretching and shrinking in the longitudinal direction can be achieved by changing its magnitude. If the texture image is not required to rotate, the S-axis can be fixed in the positive direction so that the mapping direction does not change. A small demonstration of these controls is sketched below.
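The following toy demonstration shows two of the three controls (moving P translates the mapping; rotating $\vec{PM}$ rotates it), reusing the st_coordinates logic from the earlier sketch; all numeric values are arbitrary assumptions.

```python
import numpy as np

# st_coordinates is redefined here so the example is self-contained.
def st_coordinates(P, M, q):
    pq = q - P
    s_axis = (M - P) / np.linalg.norm(M - P)
    theta = np.arccos(np.clip(pq @ s_axis / np.linalg.norm(pq), -1.0, 1.0))
    return (np.linalg.norm(pq) * np.cos(theta),
            np.linalg.norm(pq) * abs(np.sin(theta)))

P = np.array([0.0, 0.0, 0.0])
M = np.array([1.0, 0.0, 0.0])
q = np.array([1.0, 0.5, 0.0])

print(st_coordinates(P, M, q))                     # baseline: (1.0, 0.5)
shift = np.array([0.5, 0.0, 0.0])
print(st_coordinates(P + shift, M + shift, q))     # moved origin P: (0.5, 0.5)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])                  # rotate PM by 90 degrees
print(st_coordinates(P, Rz @ M, q))                # rotated mapping: (0.5, 1.0)
```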

Experimental Analysis
The experiments are run on a Windows 10 machine (model CubiB171N 8GL009BCN BN5000) with 6 GB of memory, and the experimental platform is MATLAB 9.0. This paper takes Figure 5(a) as the experimental object and, taking the visual communication effect as the foundation, uses the designed method to render it and analyze the system performance.
To verify the performance of the proposed system, the images it renders are compared with images rendered by the Linux graphics rendering system [6] and the fluid cloud simulation rendering system [7]. The comparison results are shown in Figure 5.
As can be seen from Figure 5, Figure 5(b) is the image rendered by the method proposed in this paper. Figure 5(c) is the image rendered by the Linux graphics rendering system [6], which shows color differences, many color stripes, and serious distortion. Figure 5(d) is the image rendered by the fluid cloud simulation rendering system [7], on which many noise spots appear; the rendered image is blurred and cannot be displayed accurately, with partial chromatic aberration and a poorer effect than that of the Linux graphics rendering system. In the image rendered by the proposed system (Figure 5(b)), the poster image is clear, there is no noise interference, the color difference is rectified by the filter, and the colors are more realistic. Compared with the other two rendering systems, the proposed system achieves a better rendering effect on the poster image.
Rendering is affected by factors such as operation waiting time. The waiting times of the proposed system and the other two systems are compared in Table 1.
As can be seen from Table 1, the waiting time of all rendering operations in the proposed system is no more than 0.5 s. The fluid cloud simulation rendering system follows, with a maximum waiting time of 0.90 s, and the Linux graphics rendering system has the longest waiting time, with a maximum of 1.12 s. The average waiting time of the proposed system is 0.18 s, which is better than that of the Linux graphics rendering system and the fluid cloud simulation rendering system. This indicates that the proposed system has the best performance and renders the poster image better.

After image processing, pixel changes are affected by changes in image size. The pixel changes of the proposed system and the other two systems are compared for different image sizes, and the result is shown in Figure 6. Figure 6 shows that, no matter how the size of the poster image changes, the poster image rendered by the proposed system keeps stable pixels, while the pixel changes of the Linux graphics rendering system and the fluid cloud simulation rendering system fluctuate greatly with no obvious trend. This shows that the pixel changes of these two systems do not vary consistently with image size and that their rendering effects are extremely unstable. Compared with the other systems, the rendering effect of the proposed system is more stable, and the rendered poster image is more effective.

Conclusions
In this paper, an artistic aided design method based on a scene vision comprehension algorithm is proposed. The proposed approach can effectively extract known scene image information, detect salient-region texture features of the collected color multitexture images by a super-resolution fusion method, and identify 2D texture features according to the texture and color feature components of the color multitexture image. The proposed approach helps to solve the problem of unclear output images, improve the output quality of the images, and improve the visual effect of artistic aided design. To verify the efficiency of this approach, a comparison is made with images rendered by the Linux graphics rendering system and the fluid cloud simulation rendering system. Simulation results show that the proposed method extracts the information of known images more accurately than existing methods, which helps to produce clearly visible output images and improves the overall design effect.

Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest
The author declares that he has no conflicts of interest.