Image Processing Design and Algorithm Research Based on Cloud Computing

Image processing technology is a widely applied technology in the computer field and has important research value for signal and information processing. This article studies the design and algorithms of image processing under cloud computing technology and proposes cloud computing techniques and image processing algorithms for image data processing. Depending on the hardware structure and performance of the system, a verified algorithm can be selected to perform the final operation. Starting from the image editing functions, this article partitions the software and hardware so that each functions rationally. On this basis, the structure of a real-time image processing system based on SOPC technology is built, and the corresponding functional units are designed for real-time image storage, editing, and viewing. Studies have shown that the design of an image processing system based on cloud computing increases the speed of image data processing by 14%. Compared with other algorithms, this image processing algorithm has clear advantages in image compression and image restoration.


Introduction
With improvements in computer hardware technology, larger storage devices and faster processors enable computers to process digital images more efficiently. However, conventional matrix-based image representations are not efficient, because they carry a large amount of redundant information and require substantial storage space. Considering the different types of redundant information in matrix-represented images, researchers have proposed many imaging methods that improve performance. Although the reconstruction efficiency of these imaging methods is significantly better than that of pixel-matrix imaging, people continue to explore deeper image representations. In recent years, real-time target detection has been widely applied, and feature extraction is an indispensable part of target detection algorithms. At present, moment features are widely used in many aspects of image classification and recognition. If an image is translated, rotated, or scaled proportionally, the recognition system should extract features that remain unchanged under these transformations. To improve the inspection efficiency of printed circuit board (PCB) solder joints, Wang et al. detect PCB solder joints through image processing methods. Through a series of image processing algorithms, they completed threshold segmentation and feature extraction of the solder joint image; the sphericity was then determined from the area and circumference, along with the shape parameters and eccentricity of the computed region, paving the way for the identification of defect patterns. However, errors in the research process led to inaccurate results [1]. Hussain et al. offer a CMOS image sensor with a resolution of 200 × 200.
The sensor is specially designed for applications where each pixel is used exclusively; each pixel measures approximately 15 μm × 15 μm, and the image sensor chip size is approximately 3.5 mm × 3.5 mm. The proposed sensor is simulated with a single-pixel input current varying from 2 pA to 100 pA and a corresponding measured output of 2 mV to 855 mV per pixel. Moreover, they proposed a new method of pattern detection and recognition under blood coverage, which can accurately segment the patterns in the blood. However, there are errors in the image segmentation [2]. Venkatram and Geetha state that the main purpose of big data research is to quickly survey the cutting-edge and latest work being done in big data analysis across different industries [3]. Since many academics, researchers, and practitioners are very interested, the field is rapidly updated and focuses on how existing technologies, frameworks, methods, and models can exploit the value of big data analytics. However, the analysis process is very complicated. According to the current technical level and development trend of video image processing systems, this paper designs and implements the system on large-scale logic devices. In particular, for cloud computing in video image processing systems, the causes of infrared image non-uniformity are analyzed, the theory and methods of infrared non-uniformity correction are studied, and a feasible image enhancement algorithm based on infrared image characteristics is proposed and verified through experiments [4]. In some large-scale projects, such as FIFA and League of Legends, large amounts of data are stored on the cloud platform. Players only need to download the software and log in to the cloud platform to use it, which significantly reduces the requirements on computer equipment [5].
The basic framework diagram of data storage and management technology is shown in Figure 1.

Virtualization Technology.
Virtualization abstracts the physical resources of server equipment so that multiple virtual machines can be presented on a single physical device; this is one of the core technologies of cloud computing [6, 7]. Its main goal is to insert a virtualization layer between the physical hardware on one side and the operating systems and upper-level application programs on the other, dividing the general physical equipment among virtual machines [8]. The original operating system and applications run unchanged inside a virtual machine on top of this virtualization layer, and many virtual machines can execute on one physical machine. Multiple virtual machines can therefore run different operating systems within an enterprise, such as management systems and business operating systems [9].

Graphics Processing Algorithms for Cloud Computing.
The computer system includes computation and detection targets. Depending on the hardware structure and system performance, a verified algorithm can be selected to perform the final operation [10].
In a cloud processing system, the working environment is more complicated. The original image requires processing steps such as noise removal, interference suppression, image sharpening, and image enhancement [11]. According to the current research status at home and abroad, commonly used image smoothing methods include the neighborhood averaging method, the median filtering method, low-pass filtering, selective masking, and related methods [12].
The neighborhood averaging method is a spatial processing method that replaces each pixel's gray value with the average of the gray values in its neighborhood. Averaging smooths the image but blurs edges; to control this blurring, a threshold method can be used, replacing a pixel only when the local average differs from it by more than a threshold [13].
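As a concrete illustration, here is a minimal sketch of the thresholded neighborhood average in Python/NumPy; the function name, the 3 × 3 window, and the threshold value are illustrative, not taken from the paper.

```python
import numpy as np

def threshold_neighborhood_average(img, T):
    """Smooth an image with 3x3 neighborhood averaging, but only replace a
    pixel when the local mean differs from it by more than threshold T.
    This limits the blurring that plain averaging introduces."""
    img = img.astype(float)
    padded = np.pad(img, 1, mode='edge')   # replicate edges for border pixels
    out = img.copy()
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            mean = padded[i:i+3, j:j+3].mean()
            if abs(mean - img[i, j]) > T:
                out[i, j] = mean
    return out

# A flat image with one noise spike: only the spike is replaced, the flat
# background is left untouched, so edges elsewhere would not be blurred.
demo = np.full((5, 5), 100.0)
demo[2, 2] = 200.0
smoothed = threshold_neighborhood_average(demo, T=10)
```

The threshold T trades off noise suppression against edge preservation: a larger T leaves more of the original image intact.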
2.2.1. Spatial Low-Pass Filtering Algorithm. We know that the slowly varying part of a signal corresponds to the low-frequency components of its spectrum, and the rapidly varying part corresponds to the high-frequency components [14]. The spatial frequencies of image noise and edge interference are relatively high. Therefore, low-pass filtering can be used to remove noise, and frequency-domain filtering can equivalently be realized by spatial convolution. As long as the impulse response matrix of the spatial system is designed reasonably, the noise can be filtered out [15]. The basic flowchart of the system using a low-pass filtering algorithm to remove noise is shown in Figure 2.
If a two-dimensional function F(A, B) is input to the filter system, the output signal is recorded as G(A, B). Suppose the impulse response function of the filter system is D(A, B); then

G(A, B) = F(A, B) * D(A, B),

where * denotes two-dimensional convolution. When the input is a discrete image of size Q × Q, the output is a discrete image of size P × P, and the impulse response function is of order L × L; to avoid wraparound overlap, L ≤ P − Q + 1 should be satisfied. The discrete form of the filtering equation is

G(i, j) = Σ_m Σ_n F(m, n) D(i − m + 1, j − n + 1).

Because noise is not spatially correlated in the image, its spectrum lies above the spatial frequencies of the general image content, and low-pass filtering can be used to remove it. For L = 3, the low-pass spatial impulse response can take several forms, represented by matrices K, for example

K1 = (1/9) [1 1 1; 1 1 1; 1 1 1],  K2 = (1/10) [1 1 1; 1 2 1; 1 1 1],  K3 = (1/16) [1 2 1; 2 4 2; 1 2 1].

It can be seen that using the second filter, the result is similar to that achieved by the simple neighborhood average method under a 3 × 3 window.
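The discrete filtering above can be sketched in Python/NumPy. The three 3 × 3 masks are standard textbook low-pass kernels and are an assumption here, since the paper's matrices K did not survive extraction; the convolution returns only the valid (fully overlapped) region.

```python
import numpy as np

# Standard 3x3 low-pass masks (assumed forms). K1 is the plain neighborhood
# average; K2 and K3 weight the centre pixel more heavily.
K1 = np.ones((3, 3)) / 9.0
K2 = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]]) / 10.0
K3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

def conv2d_valid(F, D):
    """Discrete 2-D convolution G(i, j) = sum_m sum_n F(m, n) D(i-m+1, j-n+1),
    valid region only: an L x L kernel on a Q x Q image yields a
    (Q - L + 1) x (Q - L + 1) output."""
    Dr = D[::-1, ::-1]          # flip the kernel for true convolution
    q = F.shape[0]
    l = D.shape[0]
    p = q - l + 1
    G = np.empty((p, p))
    for i in range(p):
        for j in range(p):
            G[i, j] = np.sum(F[i:i+l, j:j+l] * Dr)
    return G

# Each mask sums to 1, so filtering a constant image leaves it unchanged.
flat = np.full((5, 5), 8.0)
G = conv2d_valid(flat, K1)
```

Because every mask is normalized to sum to 1, flat regions pass through unchanged and only high-frequency content (noise, edges) is attenuated.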

Median Filtering Algorithm.
In an image contaminated by noise, if linear filtering is used, its low-pass characteristic blurs the image edges while removing the noise. Under certain conditions, the median filtering method can achieve better results, removing noise while protecting image edges. It is a nonlinear smoothing technique that strongly suppresses impulse interference and speckle noise and preserves the edges of the image well [16].
The operation process of median filtering is as follows: given a sequence X1, X2, ⋯, Xn, arrange the n numbers in order of magnitude as X(1) ≤ X(2) ≤ ⋯ ≤ X(n).

Journal of Sensors
The median of the sorted sequence is represented by y; for odd n, y = X((n+1)/2). For example, for the sequence (70, 80, 190, 100, 110), the median is 100.
Suppose the input sequence is {X_i, i ∈ I}, where I is a subset of the natural numbers and the window length is n = 2v + 1. The output of the filter is then

y_i = med{X_(i−v), ⋯, X_i, ⋯, X_(i+v)}.

The filter can also be expressed with a two-dimensional window. Let {X_ij} represent the gray values of the points of a digital image. The two-dimensional median filter with filter window A can be represented as

y_ij = med_{(r,s)∈A} {X_(i+r)(j+s)}.

The elements of the image array are individual values called pixels. In digital image processing, the image size N and the gray level G are generally integral powers of 2, namely N = 2^n and G = 2^m. For TV images in general laboratories, N is 256 or 512 and the gray level G is 64-256, which meets the needs of image processing. Images with special requirements, such as satellite images, may take a size of 2340 × 3240 with a gray level of m = 8-12 bits.
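The one-dimensional median filter defined above can be sketched in a few lines of NumPy; the edge handling (replicating border samples) is an assumption, since the paper does not specify it.

```python
import numpy as np

def median_filter_1d(x, n):
    """Sliding-window median filter: y_i = med{x_(i-v), ..., x_(i+v)},
    with window length n = 2v + 1 and edges replicated."""
    v = n // 2
    padded = np.pad(np.asarray(x, float), v, mode='edge')
    return np.array([np.median(padded[i:i+n]) for i in range(len(x))])

# The paper's example sequence: the median of (70, 80, 190, 100, 110) is 100.
seq = [70, 80, 190, 100, 110]
y = median_filter_1d(seq, 5)
```

Note how an isolated impulse (the 190) is replaced by a neighborhood value while the overall level of the sequence is preserved, which is exactly the edge-preserving behavior described above.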
Let b be the number of bits required to store an M × N digital image with G gray levels; then

b = M × N × k,

where k satisfies G = 2^k. When M = N, the formula becomes b = N²k. A chain code represents a binary image composed of straight lines and curves and is well suited for describing the boundaries of images. Compared with the matrix representation, chain code expressions can save a large number of bits.
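The storage formula can be checked with a worked example in Python (an illustration, not code from the paper):

```python
def storage_bits(M, N, G):
    """Bits needed to store an M x N digital image with G grey levels:
    b = M * N * k, where G = 2**k. For M == N this reduces to b = N*N*k."""
    k = G.bit_length() - 1
    assert 2 ** k == G, "G must be a power of two"
    return M * N * k

# A 256 x 256 image with 256 grey levels (k = 8) needs
# 256 * 256 * 8 = 524288 bits, i.e. 64 KiB.
b = storage_bits(256, 256, 256)
```

For the laboratory TV images mentioned above (N = 512, G = 64), the same formula gives 512 × 512 × 6 bits.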

Edge Detection Algorithm.
The edges of an image are usually related to discontinuities in the image intensity or in its first derivative. Discontinuities in image intensity can be divided into the following: (1) step discontinuities, where the gray values of the pixels on the two sides of the discontinuity differ significantly; (2) line discontinuities, where the image intensity suddenly changes from one value to another and returns to the original value after a short run. Edge detection is the basic operation for detecting significant local changes in an image. In one dimension, a step edge corresponds to a local peak of the first derivative of the image function. The gradient is a measure of the change of a function, and an image can be regarded as an array of samples of a continuous image intensity function. Therefore, by analogy with the one-dimensional case, discrete approximations of the gradient can be used to detect significant changes in image gray values.
Two important properties are related to the gradient: (1) the direction of the vector G(i, j) is the direction of the maximum rate of increase of the function; (2) the magnitude of the gradient is given by

|G(i, j)| = sqrt(Gx² + Gy²).

In practical applications, the gradient amplitude is usually approximated with absolute values:

|G(i, j)| ≈ |Gx| + |Gy|.

From vector analysis, the direction of the gradient is

α(i, j) = arctan(Gy / Gx),

where the angle α is measured relative to the x-axis.
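A small sketch of gradient-based edge detection follows. The Sobel masks are one common discrete choice and are an assumption here, since the paper does not name a specific operator; both the exact magnitude and the cheaper absolute-value approximation from the text are computed.

```python
import numpy as np

def sobel_gradient(img):
    """Approximate the image gradient with 3x3 Sobel operators.
    Returns the magnitude sqrt(Gx^2 + Gy^2), the |Gx| + |Gy|
    approximation, and the direction arctan2(Gy, Gx) on interior pixels."""
    sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    sy = sx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i+3, j:j+3]
            gx[i, j] = np.sum(win * sx)
            gy[i, j] = np.sum(win * sy)
    mag = np.sqrt(gx**2 + gy**2)
    mag_abs = np.abs(gx) + np.abs(gy)   # the cheaper approximation
    return mag, mag_abs, np.arctan2(gy, gx)

# A vertical step edge: the gradient is strong at the step, zero elsewhere.
step = np.zeros((6, 6))
step[:, 3:] = 100.0
mag, mag_abs, alpha = sobel_gradient(step)
```

On this synthetic step the response is confined to the two columns straddling the discontinuity, matching the description of a step discontinuity above.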

Image Processing System Design Experiment
3.1. Experimental Parameter Design. In this experiment, MATLAB is used for modeling, and the sample data of this article is then imported. The compressed sensing sparsity is 1000; that is, after the wavelet transform of the original image, the wavelet coefficients are sorted, the 1000 largest coefficients are retained, and the remaining coefficients are reset to zero. The sparse wavelet coefficients are then observed with the observation matrix. The size of the observation matrix is 4116 × 16424, and the observation results are transmitted to the SOPC for reconstruction. This experiment was conducted to determine whether the OMP-based reconstruction in the SOPC system functions normally. Therefore, the wavelet coefficients after zeroing are observed instead of the original image [17]. In this experiment, the wavelet coefficients after zeroing correspond to the original image, and the reconstructed wavelet coefficients correspond to the reconstructed image.
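The sparsification step described above (keep the largest coefficients, zero the rest) can be sketched as follows; the wavelet transform itself is omitted, and a plain random array stands in for the coefficient matrix.

```python
import numpy as np

def keep_k_largest(coeffs, k):
    """Sparsify a coefficient array: keep the k largest-magnitude entries
    and reset the rest to zero, as is done to the wavelet coefficients
    before observation (sketch only; the wavelet transform is omitted)."""
    flat = coeffs.ravel().copy()
    if k < flat.size:
        small_idx = np.argsort(np.abs(flat))[:-k]   # indices of small entries
        flat[small_idx] = 0.0
    return flat.reshape(coeffs.shape)

# Stand-in "coefficients": a 32 x 32 random array, sparsified to 100 entries.
rng = np.random.default_rng(0)
c = rng.normal(size=(32, 32))
sparse_c = keep_k_largest(c, 100)
```

The result has exactly k nonzero entries, and the largest coefficients survive unchanged, which is what makes the signal compressible for the observation step.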

Image Processing Programmable System Design
(1) Design input: there are many ways to enter a design. At present, the two most commonly used are schematic diagrams and hardware description languages. For simple designs, schematics or the ABEL language can be used. For complex designs, schematic diagrams, hardware description languages, or a mixture of the two can be used, with hierarchical design methods describing the units and the hierarchy. When the design input is checked for syntax errors, the software generates a list of the errors found. (2) Design implementation: design implementation refers to the process from design input files to bitstream files. In this process, the development software automatically compiles and optimizes the design files, performs mapping, placement, and routing for the selected devices, and creates the corresponding bitstream data files. (3) Device configuration: FPGA device configuration modes fall into two categories: active configuration and passive configuration. In active configuration mode, the device itself guides the configuration operation, controlling the external storage and the loading process. In passive configuration mode, the loading process is controlled externally. (4) Design verification: design verification includes functional simulation, timing simulation, and device testing. Functional simulation verifies the design logic; during design entry, part of the design or the entire design can be simulated. Timing simulation is a delay simulation performed after placement and routing, analyzing the timing relationships. Device testing uses test tools to verify the final functions and performance indicators of the device after programming, as shown in Figure 3.

Image Processing Algorithms Based on Cloud Computing
This section analyzes the performance of the compression algorithm, the complexity of the processing process, and the image reconstruction of compressed sensing.

Image Coding Compression Performance Test.
To test the performance index of the algorithm, this paper selects a real-scene, geometrically regular image collected by a web camera as the original image for compression coding. The image resolution is in pixels. Figure 4 shows the image effect of frame encoding with different quantization factors in the compression encoder. The difference in image quality after compression coding can be intuitively compared through human vision [18, 19]. The experiment compares the processing results in terms of file size reduction ratio, peak signal-to-noise ratio, time complexity, visual effect, and other aspects. The running results of the compression algorithm with different quantization coefficients are shown in Table 1. From the data in Table 1, it can be seen that different choices of system parameter settings have a great impact on the image compression effect. The larger the quantization coefficient, the smaller the amount of compressed image data, the larger the image compression ratio, and the smaller the peak signal-to-noise ratio of the image. At the same time, the compression time of the algorithm is also shorter, but the visual effect of the image shows obvious blocking artifacts, and the real objects captured in the image cannot be distinguished. The complexity of the algorithm is O(n log₂ n). To observe the comparison of the data more intuitively, the table is drawn as a picture, as shown in Figure 4.
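The PSNR trend in Table 1 can be illustrated with a toy experiment; uniform quantization here is a simplified stand-in for the encoder's quantization factor, and all names are illustrative.

```python
import numpy as np

def psnr(original, compressed, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE).
    Larger quantization steps raise the MSE and thus lower the PSNR."""
    mse = np.mean((original.astype(float) - compressed.astype(float)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)

def quantize(img, q):
    """Uniform quantization with step q (a simplified stand-in for the
    compression encoder's quantization factor)."""
    return np.round(img.astype(float) / q) * q

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
# Coarser quantization gives a lower PSNR, mirroring the trend in Table 1.
p_fine = psnr(img, quantize(img, 4))
p_coarse = psnr(img, quantize(img, 32))
```

This reproduces only the qualitative trade-off of the table: a larger quantization step compresses more but degrades PSNR.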
It can be seen from the experimental data that when the system quantization coefficient is set to 15, the visual effect of the compressed image is good. At this setting, the image has no obvious distortion and differs little from the original image, while the image compression ratio, peak signal-to-noise ratio, compression time, and other parameters are well balanced. The compression ratio can therefore be ensured while obtaining better image quality.

4.2. Image Processing Algorithm Analysis. The blurred image is restored based on the super-Laplacian prior model. From the above analysis, it can be seen that the regularization term exponent a and the regularization parameter b of the algorithm have a great impact on the restoration quality of the image and the execution time of the algorithm. In this paper, based on the spatially variant point spread function (SVPSF), the image formed by a single-lens imaging system is restored in blocks through the super-Laplacian prior algorithm. Taking the values of the parameters a and b selected in Dilip Krishnan's experiments as 0.5 and 256, respectively, block restoration is performed on the SVPSF image.

The Influence of Parameter a on the Image Restoration Algorithm.
The image is restored by accurately establishing the model through the super-Laplacian operator; usually, a is in the range 0.5-0.8, and this exponent has a great influence on the restoration effect. Different intervals correspond to different algorithm models. When a = 1, the model is the Laplacian restoration model, which does not fit the heavy-tailed distribution of image gradients well. When a = 2, it is a Gaussian model, whose fit is even worse. When a is between 0 and 1, it is a super-Laplacian model, and when a is between 0.5 and 0.8, the restoration effect is better. Therefore, it is necessary to analyze the value of parameter a, restoring the image with different values of a to obtain the corresponding SSIM. The experimental data are shown in Table 2.
According to the analysis of the experimental data in the table, the SSIM of the restored image changes with the value of parameter a. SSIM increases monotonically in the range 0.4-0.55. When a = 0.6, the SSIM value is the largest, and the similarity is increased by 1.5% compared with that before optimization; SSIM then decreases monotonically in the range 0.6-0.9. It can be seen from Figure 5 that parameter a has a clear influence on the SSIM of the restored image.
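For reference, a simplified single-window SSIM is sketched below; the standard implementation averages the index over local windows, so this global version is only an approximation of the score used in the tables.

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Single-window (global) SSIM: luminance and contrast/structure terms
    computed once over the whole image instead of per local window."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))

# SSIM is 1 for identical images and drops as distortion is added.
rng = np.random.default_rng(2)
a_img = rng.integers(0, 256, size=(32, 32)).astype(float)
noisy = a_img + rng.normal(0, 10, a_img.shape)
```

Because both factors of the index are bounded by 1, any distortion strictly lowers the score, which is why SSIM can rank the restorations obtained with different values of a.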

The Influence of Parameter b on the Image Restoration Algorithm.
The model is solved by the half-quadratic penalty method: an auxiliary variable w is introduced alongside the blurred image x. The parameter b is the weight of the regularization penalty, and its value increases monotonically from b(0) by the factor b(inc) up to b(max); as b changes, the number of iterations of image restoration also changes. Since the number of iterations is closely related to the running time of the restoration algorithm and to the restoration effect, this article analyzes the parameter b.
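The continuation of b described above can be sketched as a loop skeleton; the w- and x-subproblem solvers are omitted, and the parameter names are illustrative rather than taken from the paper.

```python
def continuation_schedule(b0, b_inc, b_max):
    """The penalty weight b of the half-quadratic method grows geometrically
    from b0 by the factor b_inc until it exceeds b_max; each value of b
    triggers one outer iteration (the inner w- and x-subproblems are
    omitted in this sketch)."""
    schedule = []
    b = b0
    while b <= b_max:
        schedule.append(b)
        # ... solve the w-subproblem (per pixel, depends on the exponent a) ...
        # ... solve the x-subproblem (a quadratic, e.g. solvable via FFT) ...
        b *= b_inc
    return schedule

# With b0 = 1, b_inc = 2, b_max = 256 the outer loop runs 9 times,
# so b(max) directly controls the iteration count and hence the runtime.
s = continuation_schedule(1.0, 2.0, 256.0)
```

This makes the dependence analyzed below explicit: raising b(max) or shrinking b(inc) increases the number of outer iterations and the running time.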
From the data analysis in Table 3 and Figure 6, it can be seen that under the condition that parameter a does not change, the SSIM of the restored image changes with the change of parameter b.
From the data in the table, it can be seen that as parameter b gradually increases, the SSIM of the restored image first increases and then decreases. When b is between 200 and 600, the SSIM of the restored image gradually increases, reaching a maximum of 0.87. When b is between 1600 and 4600, the SSIM of the restored image gradually decreases, and the SSIM after restoration is smaller than that before optimization.

Image Reconstruction Analysis of Compressed Sensing.
This article introduces an SOPC implementation of compressed sensing based on OMP reconstruction with Cholesky matrix decomposition. The following is an analysis of the experimental results of the SOPC, shown in Table 4.
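A sketch of OMP reconstruction in NumPy follows. The SOPC version updates the least-squares solve incrementally via a Cholesky factorization; this illustration simply calls a batch least-squares solver, and the matrix sizes are illustrative.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal Matching Pursuit sketch: greedily pick the column of Phi
    most correlated with the residual, then re-solve the least-squares
    problem on the chosen support. (The hardware version performs this
    solve incrementally with a Cholesky update; numpy's lstsq is used
    here for clarity.)"""
    m, n = Phi.shape
    support = []
    residual = y.copy()
    x = np.zeros(n)
    for _ in range(k):
        idx = int(np.argmax(np.abs(Phi.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x[support] = coef
    return x

# Recover a 3-sparse vector from 50 random Gaussian measurements.
rng = np.random.default_rng(3)
Phi = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
x_hat = omp(Phi, Phi @ x_true, 3)
```

With enough incoherent measurements relative to the sparsity, the greedy support selection finds the true nonzero positions and the least-squares step recovers their values.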
From the comparison of the data in the table, it can be seen that the PSNR of the SOPC reconstructed image is not high. Analysis shows three reasons that affect the PSNR: (1) all data in this SOPC system are represented by fixed-point numbers, so the accuracy of the algorithm is affected to a certain extent, and the PSNR of the reconstructed image is therefore not high; (2) in this system, an LFSR is used to generate the random observation matrix, and since the numbers generated by an LFSR are not truly random, the incoherence of the observation matrix is affected to a certain extent, which lowers the reconstructed PSNR; (3) before the wavelet coefficients are observed, the small coefficients are reset to zero, so some fine details are lost, which also affects the PSNR. To observe the variation of the data more intuitively, the table is drawn as a graph, as shown in Figure 7.
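Reason (2) can be illustrated with a toy Fibonacci LFSR; the register length, taps, and seed below are illustrative, not the ones used in the SOPC system.

```python
def lfsr_bits(seed, taps, r, nbits):
    """Fibonacci LFSR sketch: an r-bit shift register whose new MSB is the
    XOR of the tapped bit positions (1 = LSB). The output is pseudo-random
    but repeats with period at most 2**r - 1, which is why an observation
    matrix built from an LFSR is not perfectly incoherent."""
    state = seed
    out = []
    for _ in range(nbits):
        out.append(state & 1)           # output the LSB
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1
        state = (state >> 1) | (fb << (r - 1))
    return out

# A 4-bit register with taps (2, 1) gives a maximal-length sequence:
# period 2**4 - 1 = 15, with 8 ones and 7 zeros per period.
bits = lfsr_bits(0b1001, (2, 1), 4, 30)
```

The strict periodicity (and the fixed ones/zeros balance) is exactly the structure that distinguishes LFSR output from a truly random Gaussian or Bernoulli observation matrix.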
According to the data analysis in the figure, among the 5 images, the house image has the highest PSNR. The analysis shows that the house image is relatively regular, so the coefficients obtained after the wavelet transform are relatively sparse; the wavelet coefficients zeroed out for the house image are therefore the smallest, and its PSNR after reconstruction is the highest.
In this experiment, the size of the observation matrix is still 4100 × 16400, but the sparsity is increased from 500 to 1400 in steps of 10. Five images were reconstructed, and the relationship between sparsity and PSNR is shown in Table 5 and Figure 8.

4.4. Discussion. This paper builds a wavelet transform model under Quartus II. Compared with the model in the reference, the simulation parameter a in this paper performs better in the range 0.5-0.8. In the literature, the value of a lies between 0.5 and 2 owing to differences in the model. This is because the model in this paper optimizes parameters such as image compression, peak signal-to-noise ratio, and compression time to reduce interference, so the value range is concentrated, which facilitates control of the model and avoids model distortion. In addition, the model in this paper can process images with sparsity between 500 and 1400, while other methods cover smaller sparsity intervals. Therefore, the method in this paper can handle a large sparsity range and achieves a high degree of recovery.

Conclusions
A hardware implementation scheme for the image processing algorithm is proposed. By comparing the PC implementation of the image processing system with a dedicated digital signal processor (DSP) implementation, the structure of the cloud computing-based on-chip programmable system is constructed; image acquisition, storage, and real-time display are implemented for each part of image processing; and the overall structure design is improved.
The cloud computing application introduced in this article is an important cloud imaging system project. Different choices of system parameter settings have a great impact on the image compression effect. The larger the quantization coefficient, the smaller the amount of compressed image data, the larger the image compression ratio, and the smaller the peak signal-to-noise ratio of the image. At the same time, the compression time of the algorithm is shorter, but the visual effect of the image shows obvious blocking artifacts, and it becomes impossible to distinguish the real objects captured in the image.
Because image data itself contains a large amount of information, the realization of image processing algorithms places higher requirements on hardware devices. With the development of embedded system technology, the functions of embedded microprocessors are becoming increasingly powerful. The combination of embedded systems and image processing will likewise become a complex system engineering project.

Data Availability
The data underlying the results presented in the study are available within the manuscript.

Conflicts of Interest
The authors declare that they have no conflicts of interest.