Synchronized Information Acquisition Method for Virtual Geographic Scene Image Synthesis in Cities Based on Wireless Network Technology

Introduction
Urban virtual geographic scene image synthesis is an important technology in computer vision, and the quality of its results directly affects the construction of urban virtual scenes. When constructing an urban virtual geographic scene, the synthesis information must be combined with the geographic scene information acquired by cameras for parameter resolution and related operations in order to improve the realism of the virtual scene. For virtual geographic scene image synthesis, the efficiency and accuracy of synthesis information acquisition are therefore critical [1, 2]. Current research on the synchronous acquisition of image synthesis information is mainly applied to medical imaging, where synchronous acquisition is achieved through techniques such as ultrahigh-field imaging, weighted imaging, and time encoding, or by combining them with RF communication technology [3-7]. These methods offer a high synchronization rate and relatively high acquisition accuracy, but when applied to virtual geographic scene image synthesis they struggle to maintain that rate and accuracy because the amount of information to be acquired is large and the data are complex, which limits their use. Acquisition based on NFC technology and wireless mobile networks requires a certain amount of manual assistance [8]; this makes the method costly and unsuitable for wide-scale use. Acquisition based on generative adversarial networks requires such a network to be built and trained, which introduces large delays [9-11].
A wireless sensor network is an infrastructure-free, self-organized, multihop wireless network that can sense, collect, and process information about monitored objects in real time. It has broad application prospects in the military, industrial automation, intelligent transportation, environmental monitoring, and other fields, and is an international research hotspot. Image synthesis requires sensors to monitor a large number of geographic parameters in order to enrich the detail of the virtual scene and improve the quality of the synthesized images. Using wireless sensor networks can effectively increase data transmission and processing speed and reduce acquisition delay. Against this background, this paper studies a method for the synchronous acquisition of image synthesis information of urban virtual geographic scenes based on wireless network technology.

Research on the Method of Synchronous Information Acquisition of Urban Virtual Geographic Scene Image Synthesis Based on Wireless Network Technology

Geographical Scene Image Synthesis Marker Target Spatial Localization.
Target localization in a virtual geographic environment is the key to combining the real and the virtual. Before collecting the image synthesis information of an urban virtual geographic scene, this paper first constructs the urban virtual geographic environment and then localizes the marker targets used for geographic scene image synthesis; accurate localization of these synthesis markers is necessary to improve the accuracy of the acquired synthesis information. When taking urban geographic images, the interior orientation elements in photogrammetry are the parameters that determine the relative position between the camera lens center and the image: the perpendicular distance f between the camera center S and the image plane, and the position coordinates (x0, y0) of the image principal point (the foot of the principal optical axis on the image plane) relative to the image center. The exterior orientation elements describe the position and attitude of the photographic center and comprise six parameters: three linear elements (Xs, Ys, Zs) give the coordinates of the photographic center in the spatial coordinate system, and three angular elements (α, β, γ) determine the spatial attitude of the photographic beam at the moment of exposure [12]. According to the schematic diagram of the OpenGL projection principle shown in Figure 1, the relationship between the interior and exterior orientation elements and the transformation matrices of the OpenGL imaging process can be determined.
Once interior orientation elements such as the width and height (Lx, Ly) of the photographic plate (CCD), the focal length f, and the principal point coordinates (x0, y0) are determined, they are passed, via the similar-triangle relationship between the sensor plane and the near clipping plane, as parameters to OpenGL's imaging function glFrustum to simulate the real camera imaging process [13]. Substituting these parameters into OpenGL's projection transformation yields the projection matrix, in which Zf and Zn determine the far and near clipping planes of the viewing frustum, respectively. In OpenGL, the model transformation and the view transformation are dual, so the model-view matrix can also be realized with glLookAt(). Given the camera position (Xs, Ys, Zs) from the exterior orientation elements and the attitude angles (φ, ω, κ), the exterior parameter matrix can be calculated. According to the quantitative relationship between the parameters of OpenGL's perspective imaging functions and the interior and exterior orientation elements, the projection matrix and model-view matrix required to image the urban virtual geographic scene are computed from the camera position, attitude, and interior parameters.
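The display equations for this mapping are lost in the source, but the similar-triangle relationship between the interior orientation elements and glFrustum, and the standard OpenGL frustum matrix, can be sketched as follows. This is a reconstruction of the standard OpenGL relations, not the paper's exact equations (1)-(2); the function names are mine.

```python
import numpy as np

def frustum_params(f, x0, y0, Lx, Ly, z_n, z_f):
    """Map interior orientation elements (focal length f, principal point
    (x0, y0), CCD width/height (Lx, Ly)) to glFrustum arguments by similar
    triangles: the sensor rectangle at distance f from the projection
    center is rescaled onto the near clipping plane at distance z_n."""
    s = z_n / f
    left, right = (-Lx / 2 - x0) * s, (Lx / 2 - x0) * s
    bottom, top = (-Ly / 2 - y0) * s, (Ly / 2 - y0) * s
    return left, right, bottom, top, z_n, z_f

def frustum_matrix(l, r, b, t, n, f):
    """Standard OpenGL glFrustum projection matrix."""
    return np.array([
        [2 * n / (r - l), 0.0, (r + l) / (r - l), 0.0],
        [0.0, 2 * n / (t - b), (t + b) / (t - b), 0.0],
        [0.0, 0.0, -(f + n) / (f - n), -2 * f * n / (f - n)],
        [0.0, 0.0, -1.0, 0.0],
    ])
```

With a centered principal point (x0 = y0 = 0) the frustum is symmetric and the first diagonal entry reduces to 2f/Lx, i.e. the focal length expressed in sensor half-widths, which is the usual pinhole-camera result.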
According to the inverse of the OpenGL imaging process for the urban virtual geographic scene, the projection ray of the target in three-dimensional space is calculated, and the intersection of this ray with the three-dimensional scene gives the actual real-world coordinates of the synthesis recognition target. The pixel coordinates of the monitored target on the simulated image are computed from the following quantities [14]: winX and winY, the screen coordinates; wr and hr, the width and height of the actual image; wv and hv, the width and height of the virtual simulated image; and u and v, the pixel coordinates of the target on the actual image. From the screen coordinates of the target in the simulated image, the intersections of its projection ray with the near and far clipping planes of the viewing frustum are obtained at (winX, winY, 0) and (winX, winY, 1), respectively. Following the inverse of the OpenGL imaging process, the real-world coordinates in three-dimensional space are recovered from the screen coordinates and the corresponding depth value winZ, where M is the model-view transformation matrix between the virtual scene and the view in OpenGL and V is the OpenGL viewport transformation matrix. Finally, the projection ray of the target in 3D space is obtained in the virtual geographic scene and intersected with the digital surface model; the intersection is the spatial coordinate of the target on the Earth's surface. After localizing the spatial coordinates of the synthesis marker target in the geographic scene image, the scene images are registered.
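The inverse imaging step above can be sketched as a gluUnProject-style computation. This is a minimal illustration under simplifying assumptions: the viewport transform V is written out explicitly, and a horizontal plane stands in for the digital surface model; the function names are mine.

```python
import numpy as np

def unproject(winX, winY, winZ, M, P, viewport):
    """gluUnProject-style inverse of the OpenGL pipeline: map window
    coordinates plus depth winZ in [0, 1] back to world coordinates.
    M is the model-view matrix, P the projection matrix."""
    vx, vy, vw, vh = viewport
    ndc = np.array([2.0 * (winX - vx) / vw - 1.0,
                    2.0 * (winY - vy) / vh - 1.0,
                    2.0 * winZ - 1.0,
                    1.0])
    world = np.linalg.inv(P @ M) @ ndc
    return world[:3] / world[3]

def ground_intersection(near_pt, far_pt, ground_z=0.0):
    """Intersect the viewing ray (near -> far) with a horizontal plane,
    a stand-in here for the digital surface model of the scene."""
    d = far_pt - near_pt
    t = (ground_z - near_pt[2]) / d[2]
    return near_pt + t * d
```

Unprojecting the same (winX, winY) at winZ = 0 and winZ = 1 yields the ray's intersections with the near and far clipping planes, exactly as described above.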

Image Registration Processing of Virtual Geographic Scene.
On the basis of recording urban virtual geographic scene images, and in order to retain more of the original urban geographic scene information and improve the accuracy of the synthesized image information, this study uses an optical flow method to register the urban geographic scene images. Let the three input images with different exposure levels be I1, I2, and I3, with exposure times t1, t2, and t3, respectively. The optical flow method is most accurate when the brightness constancy assumption holds, so to accurately compute the optical flow between I1 and I2, the exposure of I1 must first be adjusted to match I2; likewise, the exposure of I2 is transferred to match I3 for the subsequent optical flow computation. In this process [15], Δ1,2 is the exposure ratio between I1 and I2; Δ2,3 is the exposure ratio between I2 and I3; I1,2 is the image obtained by exposure-correcting I1 to match the exposure value of I2; I2,3 is the image obtained by exposure-correcting I2 to match the exposure value of I3; γ′ is the gamma coefficient, with a value of 2.2; and clip is a clipping function that keeps the pixel values of the image within the range [0, 1]. The exposure ratio Δ is computed from the corresponding exposure times. To obtain a structurally consistent image sequence of the urban geographic scene, the optical flow from I2 to I1,2 and from I2 to I2,3 must be found. First, the optical flow from I2 to I1,2 is calculated, assuming that the coordinate system of I2 is Ω = {z_k : z_k ∈ R^2, k = 1, …, N}, where z_k is a two-dimensional coordinate in that system.
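The exposure-transfer equation itself is lost in the source; a plausible reading of the description (gamma-decode, scale by the exposure ratio Δ = t_dst/t_src, gamma-encode, clip to [0, 1]) can be sketched as follows. This is my reconstruction under those assumptions, not the paper's verbatim formula.

```python
import numpy as np

def exposure_transfer(I, t_src, t_dst, gamma=2.2):
    """Transfer image I (values in [0, 1], gamma-encoded) from exposure
    time t_src to t_dst: linearize with gamma, scale the linear radiance
    by the exposure ratio, re-encode, and clip to [0, 1]."""
    delta = t_dst / t_src            # exposure ratio between the two images
    lin = np.power(I, gamma)         # undo the gamma encoding
    lin = lin * delta                # scale radiance by the exposure ratio
    return np.clip(np.power(lin, 1.0 / gamma), 0.0, 1.0)
```

For example, doubling the exposure time brightens a mid-gray pixel by a factor of 2^(1/2.2) ≈ 1.37 in the gamma-encoded domain, and pixels that would exceed 1 are clipped, matching the role of the clip function described above.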
In the optical flow method, the mapping relationship between images is represented by the error between feature points, and the objective function takes the form given in [16], where {w_k} is the optical flow vector from I2 to I1,2 at each pixel position, with w_{N+1} = w_1 defined; h_k is the weight parameter that regulates the distance between points z_k and z_{k+1}; N_k is a square neighborhood of point z_k; a is the regularization factor; and r_k is the weighting function for each pixel, defined in [17] through an indicator 1[q + z_k ∈ Ω] that takes the value 1 if q + z_k lies in the range Ω and 0 otherwise, where σ_r is the pixel variance and q is the pixel value of the image pixel.
Assuming that there is a preliminary estimate for fw k g, the feature points can be adjusted to obtain the optimal value. In this study, the coarse-to-fine search method is adopted to update the optical flow quantity iteratively.

Taking the linear minimization of the above function as the optimization objective, the conjugate gradient method is used to solve it, yielding the optical flow estimate from I2 to I1,2; the optical flow from I2 to I2,3 is obtained in the same way. Once the optical flow between the images is known, images I′1,2 and I′2,3 that are structurally consistent with I2 can be obtained by warping. Finally, bicubic interpolation is used to adjust the pixel values, completing the registration of the urban geographic scene images [18]. To realize the synchronous acquisition of the synthesis information of the urban virtual geographic scene, a wireless-sensor-network synchronous acquisition route is then designed.
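The paper's method estimates a dense flow field with conjugate gradients; the coarse-to-fine iteration it relies on can be illustrated with a deliberately simplified sketch that estimates a single global translation with one Lucas-Kanade step per pyramid level. All names and the restriction to pure translation are mine; this is not the paper's solver.

```python
import numpy as np

def lk_translation(I1, I2):
    """One Lucas-Kanade step: least-squares estimate of the global
    translation d such that I2(x) ~ I1(x - d)."""
    Ix = np.gradient(I1, axis=1)   # d/dx (columns)
    Iy = np.gradient(I1, axis=0)   # d/dy (rows)
    It = I2 - I1
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)   # (dx, dy)

def downsample(I):
    """2x2 average pooling, one pyramid level down."""
    return 0.25 * (I[0::2, 0::2] + I[1::2, 0::2] + I[0::2, 1::2] + I[1::2, 1::2])

def coarse_to_fine_shift(I1, I2, levels=3):
    """Coarse-to-fine estimation: start at the coarsest pyramid level,
    then at each finer level rescale the estimate, warp I1 by it, and
    refine with a Lucas-Kanade step on the residual."""
    pyr = [(I1, I2)]
    for _ in range(levels - 1):
        a, b = pyr[-1]
        pyr.append((downsample(a), downsample(b)))
    d = np.zeros(2)
    for a, b in reversed(pyr):
        d *= 2.0                                   # rescale to this level
        r = np.round(d).astype(int)
        warped = np.roll(a, (r[1], r[0]), axis=(0, 1))
        d = r + lk_translation(warped, b)
    return d
```

Because each level only has to correct a sub-pixel residual, the linearized Lucas-Kanade step stays valid even for shifts far larger than one pixel at full resolution, which is the point of the coarse-to-fine strategy.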

Build Wireless Network to Realize Image Synthesis Information Acquisition.
The acquisition and transmission of image synthesis information for urban virtual geographic scenes rely mainly on multiple sensors and a wireless communication network. Because the computational capacity, storage capacity, communication capacity, and energy of the nodes in a wireless sensor network are very limited, the routing algorithm is particularly important. Based on the geographic location information of the image synthesis markers positioned above, a beacon-based greedy algorithm is used to design the synchronous wireless network routing for urban virtual scene image synthesis.
When building the wireless network to obtain image synthesis information, the target area covered by the wireless network is divided into a set of equal-size grid cells, and the division results are stored in a database table. The specific division steps are as follows [19]: (1) determine the grid division parameters, specifying the grid distribution range, i.e., the minimum and maximum latitude and longitude of the wireless network coverage area; (2) divide the coverage area into square cells of the specified side length, set to 30-50 m according to the actual urban geographic scenario. According to the actual demands of building the urban virtual geographic scene, a certain number of nodes are selected from the divided grid to serve as source nodes for the acquisition and transmission of image synthesis information. Hardware devices such as cameras and sensors are placed at the image synthesis marker points to collect multiple kinds of geographic information in real time, and the control and transmission of the synchronously acquired data are realized through the wireless network source nodes. The communication routing of the wireless network is then designed so as to reduce the delay of information acquisition.
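The grid-division step can be sketched as follows. The meters-per-degree conversion is a small-area approximation I introduce for illustration (the paper does not specify how degrees are converted to the 30-50 m cell side length).

```python
import math

def build_grid(min_lat, min_lon, max_lat, max_lon, cell_side_m=40.0):
    """Divide the wireless-network coverage area into square cells of
    roughly cell_side_m metres on a side. Uses the small-area
    approximation 1 degree of latitude ~ 111320 m; longitude metres
    shrink with cos(latitude)."""
    m_per_deg_lat = 111320.0
    m_per_deg_lon = 111320.0 * math.cos(math.radians(0.5 * (min_lat + max_lat)))
    rows = math.ceil((max_lat - min_lat) * m_per_deg_lat / cell_side_m)
    cols = math.ceil((max_lon - min_lon) * m_per_deg_lon / cell_side_m)
    # each (row, col) record would be stored in the database table
    return [(r, c) for r in range(rows) for c in range(cols)]
```

Each (row, col) pair identifies one square cell; source nodes are then selected from these cells as described above.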
As shown in Figure 2, source nodes s, s1, and s2 send packets to target nodes D, M, and N. When source node s sends a packet to target node D, the packet is first forwarded in greedy mode until it reaches a local minimum; it then switches to perimeter routing mode until it reaches beacon node P, after which it recovers to greedy mode at node B and is forwarded on to D. If the source node knows the beacon cache ⟨B, P⟩, then the next time s sends data to target nodes M and N it routes directly through B instead of through P, saving routing hops. Node s also sends the beacon cache ⟨B, P⟩ to s1 and s2, so that they too route directly via B when sending messages to D, M, and N, which shortens the detour around the void and saves the routing hops needed to bypass it [20].
Based on Figure 2, when sending data packets to a target node for urban virtual geographic scene image synthesis, the sender first queries its local beacon cache to check whether the target node lies in a cached shadow region. If it does, the indirect target node (the beacon node) is extracted from the beacon cache, the packet mode is set to data greedy mode, and the packet is sent directly to the beacon node. Upon reaching the indirect target node, that node in turn checks whether the target lies in one of the shadow regions characterized by its own beacon cache; if so, forwarding continues to the next indirect destination address, and if not, the packet is sent directly to the target address. If the target node receives the packet in this mode, the data transmission is complete. If the target node receives a packet sent in DATA|FLD mode, it must also send the beacon information found during beacon discovery back to the source node and to the nodes in the source node's shadow region. If no routing void is encountered, the entire delivery is handled by the greedy algorithm, the beacon cache ⟨B, P⟩ carried in the packet has size zero, the target node has no beacon information to return, and the data transmission is complete. Each hardware device then transmits its acquired synthesis information over the wireless network communication routes designed above. Thus, under the transmission and communication network architecture built with wireless network technology, the synchronous acquisition of the synthesis information of urban virtual geographic scene images is realized by the above process.
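The greedy forwarding and beacon-cache shortcut described above can be sketched as follows. This is a simplified illustration (no perimeter/face recovery, and the topology, node names, and beacon-cache shape are my assumptions): greedy forwarding fails at a void, while routing via a cached beacon node reaches the destination.

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def greedy_route(pos, nbrs, src, dst):
    """Greedy geographic forwarding: each hop relays to the neighbour
    closest to the destination; returns None at a local minimum (void),
    where the full protocol would switch to perimeter mode."""
    path, cur = [src], src
    while cur != dst:
        nxt = min(nbrs[cur], key=lambda n: dist(pos[n], pos[dst]))
        if dist(pos[nxt], pos[dst]) >= dist(pos[cur], pos[dst]):
            return None
        path.append(nxt)
        cur = nxt
    return path

def route_with_beacon(pos, nbrs, src, dst, beacon_cache):
    """If dst lies in a cached shadow region, forward greedily to the
    cached beacon node first, then greedily from there to dst."""
    if dst in beacon_cache:
        via = beacon_cache[dst]
        head = greedy_route(pos, nbrs, src, via)
        tail = greedy_route(pos, nbrs, via, dst)
        if head is not None and tail is not None:
            return head + tail[1:]
    return greedy_route(pos, nbrs, src, dst)
```

In a topology with a dead-end node between the source and the destination, plain greedy forwarding stalls at the void, whereas the cached beacon entry lets later packets detour through B without rediscovering the void, which is exactly the hop saving described above.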

Experimental Study
The preceding sections address the problems of current methods for acquiring geographic scene image synthesis information and use wireless network technology to build a communication network in order to improve acquisition speed. In this section, an experimental study of the proposed wireless-network-based method for the synchronous acquisition of urban virtual geographic scene image synthesis information is conducted in an experimental environment configured with reference to real conditions. The specific experimental design and the analysis of the results follow.

Experimental Protocol Design.
To verify the performance of the proposed method for the synchronous acquisition of urban virtual geographic scene image synthesis information, an experimental analysis is carried out. The experimental tool is MATLAB 2016; the image sample size is 120, the training set size is 50, the image segmentation scale is 12, the image gray coefficient is 0.34, the similarity coefficient is 0.38, and the wavelet decomposition scale is 18. The synchronous collection of image synthesis information of the urban virtual geographic scene is configured with these parameters. The NFC-based and time-encoding-based synchronous information acquisition methods are selected as comparison method 1 and comparison method 2, respectively. The practical applicability of the three methods is evaluated by comparing their acquisition latency and the quality of the images synthesized from the acquired synthesis information.

Experimental Data and Analysis.
Using the three synthesis information acquisition methods to acquire the same amount of image synthesis data, the average latency at each acquisition point is compared in Table 1.
Analyzing the data in Table 1, the acquisition delay of both comparison methods grows noticeably as the amount of synthesis data increases, and the growth rate accelerates with larger data volumes. Although the acquisition delay of the proposed method fluctuates slightly, it stays below 0.5 s overall, which meets the demands of synthesis information acquisition for large-area geographic scene images. The smaller the acquisition delay, the higher the synchronization rate and the better the performance of the acquisition method. The proposed method's delay is small mainly because the urban landmark building targets are localized before the image synthesis information of the urban virtual geographic scene is collected, which improves acquisition efficiency.
Table 2 compares the quality of virtual city scene images synthesized from the information collected by the three methods. Four indexes are used for the evaluation: average gradient, spatial frequency, information entropy, and peak signal-to-noise ratio (PSNR).
The data in Table 2 show that the images synthesized from the information collected by all three methods meet the minimum standard, with no obvious distortion relative to the ground truth. However, the images synthesized from the information collected by the proposed method score better on the quality evaluation indexes. In terms of PSNR, the images synthesized with the proposed method's information have larger values under both tone mapping and linearization, indicating a better synthesis effect; the PSNR of the proposed method reaches 47.5427. The main reason is that the proposed method constructs the network topology to achieve full coverage, which improves the quality of the synchronously acquired image synthesis information.
The above experimental analysis shows that, compared with the other methods, the proposed wireless-network-based method for the synchronous acquisition of urban virtual geographic scene image synthesis information has a lower acquisition delay, a higher synchronization rate, and a better application effect.

Conclusion
Due to the massive, multisource, and heterogeneous characteristics of urban geospatial data, real-time rendering of large-scale urban 3D scenes has always been a difficulty of 3D geographic information technology. To establish a highly realistic and reliable virtual scene for urban planning, this paper studies a synchronous acquisition method for the image synthesis information of virtual urban geographic environments based on wireless network technology, verifies its feasibility through experiments, and shows that it greatly improves the quality of the synthesized images. The method first localizes the spatial positions of the image synthesis marker targets in the geographic environment, then registers the acquired virtual geographic environment images, and finally builds a wireless network to realize the synchronous collection of the image synthesis information. There is still room to improve the synchronization speed of the network; hardware optimization could be explored to further reduce the acquisition delay.

Data Availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.